This article uses values from one specific case as an example. Your servers may have a different configuration, so adjust the numbers accordingly.
Over a network, even if the backbone bandwidth is 10 Gigabits, if the servers have 1 GigE Ethernet network cards (1 Gigabit), the overall file-transfer throughput is 1 Gigabit/sec, which means (1,000,000,000 / 8) bytes per second = 125 MB/sec. This is the maximum throughput available on the network.
For convenience, we'll call the two servers we need to improve data transfer speeds between the source and the target. Now, if we ping the target from the source server, we get about 42 ms of round-trip time.
This gives us the maximum amount of data that can be in flight on this network at any point in time (the bandwidth-delay product): (125 MB/s) * (0.042 s) = 5.25 MB. However, the default network-level settings on the servers don't let this much bandwidth be used. We may verify this with the "ndd" and "route" commands (only the SA can run these).
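As a quick check of that arithmetic, here is a minimal shell sketch using the figures from this example (bc does the fractional multiply):

```
# Bandwidth-delay product: bytes that can be in flight at one time
BW_BYTES=125000000   # 1 Gbit/s = 125,000,000 bytes/sec
RTT_SEC=0.042        # 42 ms round-trip time, measured with ping
echo "$BW_BYTES * $RTT_SEC" | bc   # prints 5250000.000, i.e. ~5.25 MB
```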
If you have root access to any local Solaris box, you may try the "ndd -get ..." and "route get ..." commands yourself; otherwise, ask the SA to provide the outputs so you can find out the current settings.
The **recv** and **xmit** (receive and transmit) values are the defaults used by the server, and the maximum values are defined by **cwnd** and **max_buf**. For example, a value of 49152 indicates that this server (or any RMAN job) would, by default, send/receive only 48 KB of traffic at a time. Moreover, someone manually writing a program may use the maximum of 1048576 bytes = 1 MB, by using the **max** variables.
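For illustration, here is what the four "-get" commands might print on a server with the defaults discussed above (output is illustrative, using the 48 KB and 1 MB values from this example):

```
$ ndd -get /dev/tcp tcp_xmit_hiwat
49152
$ ndd -get /dev/tcp tcp_recv_hiwat
49152
$ ndd -get /dev/tcp tcp_cwnd_max
1048576
$ ndd -get /dev/tcp tcp_max_buf
1048576
```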
Now, our purpose is to be able to use the whole 5.25 MB available on the current connection. However, the allowed values for these 4 variables should be a power of 2, so we pick the next higher power of 2, which is 8 MB (2^23 = 8,388,608 bytes). We would need to set the **max** limit variables to 8 MB as well.
The DBA should ask the SA to run these commands with the "-set" option, instead of "-get", to set the values. E.g. "sudo ndd -set /dev/tcp tcp_recv_hiwat 8388608".
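Putting all four parameters together, a sketch of what the SA would run to apply the 8 MB value computed above (the ceilings are raised first, since the hiwat defaults cannot exceed tcp_max_buf):

```
# Raise the hard ceilings first, then the per-connection defaults (bytes)
sudo ndd -set /dev/tcp tcp_max_buf    8388608
sudo ndd -set /dev/tcp tcp_cwnd_max   8388608
sudo ndd -set /dev/tcp tcp_xmit_hiwat 8388608
sudo ndd -set /dev/tcp tcp_recv_hiwat 8388608
```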
The "ndd" commands get/set values for ALL connections. We may not want to do that every time, because if a non-NetBackup / media server (say, a Windows client) with a lower capacity for handling traffic (say, 1 MB) connects to this server, the 5 MB bursts would overwhelm the connection. Hence, to avoid this scenario, we should use "route" commands, which work differently from "ndd", to set connection settings between specific IPs.
It's always safer to use "route" settings instead of "ndd", except for "tcp_max_buf". The mapping between the route and ndd parameter names is as follows (please search the internet for the detailed syntax of the route / ndd commands):
The "sendpipe" option is the same as ndd's "tcp_xmit_hiwat", and "recvpipe" is "tcp_recv_hiwat". The "ssthresh" is the same as "tcp_cwnd_max". However, "tcp_max_buf" still needs to be set through "ndd".
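As an example, the per-destination version of the 8 MB settings might look like this; the target 192.168.1.20 and gateway 192.168.1.1 are placeholders for illustration only:

```
# Set 8 MB pipes only for traffic to this one host
sudo route add -host 192.168.1.20 192.168.1.1 \
    -sendpipe 8388608 -recvpipe 8388608 -ssthresh 8388608
# Confirm the per-route values
route get 192.168.1.20
```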
With "route" settings, all these settings will be wiped out & will go back to default whenever a server is rebooted, so changes are temporary till next boot. With "ndd', it may be made permanent, which may be unnecessary, unless it's between some DB & RMAN backup server (viz. Netbackup) where there's a regular transfer of data.
"ndd" uses get / set options to show or edit values, whereas "route" uses print / add / delete options.
Now, having understood all the above concepts, when a DBA / SA is ready to start working on their specific environment of servers, they may follow these steps:
1) First, we need to get the current values of the 4 parameters from the SA (to be executed on both the source and target servers):
a. ndd -get /dev/tcp tcp_recv_hiwat
b. ndd -get /dev/tcp tcp_xmit_hiwat
c. ndd -get /dev/tcp tcp_max_buf
d. ndd -get /dev/tcp tcp_cwnd_max
e. route get <target-ip>
2) In the "route get" output, these parameters are named sendpipe, recvpipe, and ssthresh (there's no equivalent of tcp_max_buf here; it can be set only through the ndd command). "ndd" sets the parameters universally, for all IPs, whereas "route" can be used to set them between specific IPs, and only temporarily.
3) Along with this, we need to know the Ethernet card types on both the source and target servers (viz. 1 GigE, 10 GigE), AND the network bandwidth (viz. a 10 Gbps line, a 24 Mbps line, etc.).
4) Then we need to make calculations based on the above bandwidth values, along the lines of the example calculation on my blog - http://databaseoracle.blogspot.com/2009/06/how-to-speed-up-data-transfer-between.html .
5) This way, we arrive at a number, which is the total bandwidth that CAN BE made available between the 2 servers under consideration. We'll call this number, if it is a power of 2, or else the next higher power of 2, MAX-BANDWIDTH (see the sketch after this list).
6) Then we request the SA to set the above parameters to the new values (use the route command, instead of ndd, to set these values temporarily):
a. route add -host … -sendpipe MAX-BANDWIDTH -recvpipe MAX-BANDWIDTH -ssthresh MAX-BANDWIDTH
b. ndd -set /dev/tcp tcp_max_buf MAX-BANDWIDTH (this parameter is available only through ndd; the SA may decide whether to set it or not, based on the implications).
7) If the SA responds within minutes, this whole exercise takes about 10-20 minutes.
8) Please let me know in case of any doubts, and share your experience when you try these modifications.
9) Happy transferring lightning-fast data!
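To tie steps 3) through 5) together, here is a minimal shell sketch of the MAX-BANDWIDTH calculation. The inputs assume the example figures from this article (a 1 GigE card and a 42 ms ping round-trip); substitute your own link speed and RTT:

```
#!/bin/sh
# Bandwidth-delay product, rounded up to the next power of 2
LINK_MBPS=1000   # the slower of NIC speed and line bandwidth, in Mbit/s
RTT_MS=42        # round-trip time reported by ping, in milliseconds
# Mbit/s -> bytes/s is *125000, and ms -> s is /1000, so combined: *125
BDP=$(( LINK_MBPS * RTT_MS * 125 ))
MAX=1
while [ "$MAX" -lt "$BDP" ]; do MAX=$(( MAX * 2 )); done
echo "BDP = $BDP bytes; MAX-BANDWIDTH = $MAX bytes"
# With these inputs: BDP = 5250000 bytes, MAX-BANDWIDTH = 8388608 (8 MB)
```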