
Sunday, September 27, 2015

Process iperf output for high latency high bandwidth broadband

Well, after my last post I got the time to analyze why WinSCP and the SFTP protocol in general cannot get a single TCP connection up to the maximum available bandwidth but usually max out at around 400 KBytes/sec, while 4-8 parallel transfers do use up all of the remote uplink (~10 Mbit/sec cable uplink = ~1.25 MBytes/sec or ~1220 Kilobytes/sec).
I created (copy&paste) a long script to test several send buffer sizes and TCP windows. Iperf gives you output like this:
------------------------------------------------------------
Client connecting to 222.165.17.7, TCP port 443
TCP window size:  512 KByte (WARNING: requested  256 KByte)
------------------------------------------------------------
[  3] local 192.168.121.2 port 36327 connected with 222.165.17.7 port 443
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  6.06 MBytes   621 KBytes/sec
[  3] 10.0-20.0 sec  8.19 MBytes   838 KBytes/sec
[  3] 20.0-30.0 sec  8.31 MBytes   851 KBytes/sec
[  3] 30.0-40.0 sec  8.38 MBytes   858 KBytes/sec
[  3] 40.0-50.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 50.0-60.0 sec  8.00 MBytes   819 KBytes/sec
[  3] 60.0-70.0 sec  7.62 MBytes   781 KBytes/sec
[  3] 70.0-80.0 sec  8.31 MBytes   851 KBytes/sec
[  3] 80.0-90.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 90.0-100.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 100.0-110.0 sec  8.19 MBytes   838 KBytes/sec
[  3] 110.0-120.0 sec  8.00 MBytes   819 KBytes/sec
[  3]  0.0-120.2 sec  95.5 MBytes   814 KBytes/sec
[  3] MSS size 1248 bytes (MTU 1288 bytes, unknown interface)
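
A wrapper loop along these lines could produce such a measurement file (a minimal sketch only: the size lists and the use of -l for the "send buffer" are assumptions, while the address, port and the other iperf options are chosen to match the output above):

SERVER=222.165.17.7              # far-side endpoint from the connect line above
for SNDBUF in 64 128 256 512; do
    # marker line that the parser below keys on ("*buffer*")
    echo "${SNDBUF}K send buffer"
    for WIN in 256 512 768 1024; do
        # assumption: the "send buffer" is varied via iperf's -l write size;
        # -w requests the TCP window, -m prints the MSS/MTU line,
        # -f K reports KBytes/sec, -i 10 / -t 120 match the intervals above
        iperf -c "$SERVER" -p 443 -l "${SNDBUF}K" -w "${WIN}K" -m -f K -t 120 -i 10
    done
done > bandwidth-measurements.txt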


I applied this script to the iperf output to create a CSV, using only bash scripting:
echo -ne "\xEF\xBB\xBF"
echo "Send buffer;MTU;MSS;TCP window;Speed"

while read TOPROCESS
do
    case "$TOPROCESS" in
        *buffer* )
            TOPROCESS="${TOPROCESS/K send buffer*/}"
            SENDBUFFER="${TOPROCESS/* /}"
            ;;
        TCP* )
            TOPROCESS="${TOPROCESS/ [KM]Byte (WARNING*/}"
            TOPROCESS="${TOPROCESS/TCP window size: /}"
            TOPROCESS="${TOPROCESS/ /}"
            case "$TOPROCESS" in
                1.00 )
                    WINSIZE="1024" ;;
                1.12 )
                    WINSIZE="1152" ;;
                1.25 )
                    WINSIZE="1280" ;;
                * )
                    WINSIZE="$TOPROCESS"
            esac ;;
        *MSS* )
            MSS=${TOPROCESS/* size /}
            MSS=${MSS/ bytes \(*}
            MTU=${TOPROCESS/*MTU /}
            MTU=${MTU/ bytes,*}
            ;;
        *0.0-10.0*Bytes/sec | Client\ connecting* | *connected\ with* | --------------* | *Interval*Transfer*Bandwidth* )
            # Drop the connection ramp-up measurement, the connecting/connected
            # lines, the separator lines and the header lines.
            # (This branch must come before the generic *Bytes/sec branch,
            # otherwise the ramp-up line would be processed as a result.)
            ;;
        *Bytes/sec )
            SPEED=${TOPROCESS/ [KM]Bytes\/sec*/}
            SPEED=${SPEED/ Bytes\/sec*/}
            SPEED=${SPEED/*Bytes/}
            SPEED=${SPEED##* }
            case "$SPEED" in
                1.*    )
                    # convert e.g. "1.04" MBytes/sec to ~1064 KBytes/sec
                    SPEED=${SPEED#1\.}
                    SPEED=${SPEED#0}
                    SPEED=$((SPEED*1024))
                    SPEED=${SPEED%??}
                    SPEED=$((SPEED+1024))
                ;;
                0.00 )
                    SPEED="0"
                ;;
            esac
            echo "$SENDBUFFER;$MTU;$MSS;$WINSIZE;$SPEED"
            ;;
        * )
            echo "$TOPROCESS"
    esac
done < bandwidth-measurements.txt


This outputs a standard UTF-8 CSV for Excel, but the MSS and MTU readings are unfortunately always from the previous measurement. I did not bother fixing this, since they were the same for me throughout the whole measurement. :)
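
For reference, a hypothetical invocation, assuming the parser above is saved as process-iperf.sh next to bandwidth-measurements.txt:

bash process-iperf.sh > measurements.csv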

The results did highlight a few things:
  • there were probably some intermittent errors here and there
  • the send buffer size does not seem to matter much; there are good results with 64K and 512K as well
  • TCP windows below 768 KByte are useless; however, I did not try windows large enough to see the speed decline
  • I should probably rerun the tests with fewer buffer sizes and 768K-5M window sizes
  • or just allow the sending Linux box to scale its TCP window very high (see the sketch after this list) :)
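
A minimal sketch of that last option (the values are simply the ones from the May post below, not a tuned recommendation):

# let the kernel auto-scale TCP send/receive windows up to ~12 MB
sudo sysctl -w net.ipv4.tcp_window_scaling=1
sudo sysctl -w net.core.wmem_max=12582912
sudo sysctl -w net.core.rmem_max=12582912
sudo sysctl -w net.ipv4.tcp_wmem="10240 87380 12582912"
sudo sysctl -w net.ipv4.tcp_rmem="10240 87380 12582912"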

Sunday, May 3, 2015

Five times faster TCP speed on high latency high bandwidth connections

All right, it took me quite some time to figure this out, so I will just give you some background to see if you are in the same situation, and then the solution.

My home connection in Singapore is a 100/10 Mbit/sec cable line; speedtest.net measures an average 10ms ping, 103.8 Mbit/s (12 682 KiB/s) download and 5.9 Mbit/s (714 KiB/s) upload speed, which is kind of a decent connection (there is 500Mbps fiber available to those who really need it). :)

The server and network I would like to access are in Hungary, on a 120/20 Mbps cable connection, so obviously that 20 Mbit/sec uplink should limit my download speed to around 2441 KiB/s = 20*1000*1000/8/1024.

I did test the intercontinental connection speed with speedtest.net:
  • against Magyar Telecom, Vodafone and Telenor servers
  • during SG daytime, and during SG late evening / HU daytime as well
The daunting average numbers (average of 3 or 5 runs for each server-daytime combination): 411 ms ping (RTT), 1114 KiB/s download, 168 KiB/s upload. There is also a huge variance even in the short term, and a large drop when both time zones are in daytime.

I set up OpenVPN but could not get anything more than 175 KiB/s (neither with SMB nor with pure FTP transfers).

Next I tried SCP and SFTP without VPN: the best I could get was a much more favorable 700 KiB/s. (Not CPU bound: even the far-side Linux box on an Atom CPU was only at about 10% usage.) That, however, is a protocol most movie players will not stream over. :)

Then I figured I would need to go rogue (I mean raw) and set up iperf (and the necessary port forwarding on both sides). On the first run with default parameters I got back to results around 175 KiB/s!
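
For reference, a default-parameter run is just the bare client/server pair, something like this (the server address is a placeholder; the port is whatever you forwarded):

# remote (Hungary) side, listening on the forwarded port:
iperf -s -p 443
# home side, a plain run with default parameters:
iperf -c "$SERVER" -p 443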

After tuning the TOS and increasing the send buffer and TCP window size I was able to get it up to nearly 512 KiB/s. Clearly, something was still limiting it.
Windows accepted any window size I threw at it, but Linux maxed out at 320 KiB for some reason. ("Should be enough for everyone!" some Linux believers might scream.)

But in fact, calculating the Bandwidth Delay Product (see the iperf page) for the six intercontinental per-server averages, I get, e.g. for the Vodafone server in the evening: 1235 KiB/s * 0.516 sec (RTT) = 637 KiB of data can be in flight!
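
The same calculation as a quick one-liner (bandwidth in KiB/s times RTT in seconds gives the KiB that must be in flight to keep the pipe full):

# BDP = bandwidth * RTT
echo "1235 * 0.516" | bc     # ~637 KiB -> a TCP window of at least ~640K is needed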

Then I just had to look up these network tuning parameters:
net.core.netdev_max_backlog = 5000
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912

net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

Put these into /etc/sysctl.conf, run 'sudo sysctl -p', then set the TCP window size to 640KB (that should be enough for everyone!), and voila:
[  3] 10.0-20.0 sec  9.75 MBytes   998 KBytes/sec
[  3] 20.0-30.0 sec  9.75 MBytes   998 KBytes/sec
[  3] 30.0-40.0 sec  9.62 MBytes   986 KBytes/sec
[  3] 40.0-50.0 sec  9.44 MBytes   966 KBytes/sec
[  3] 50.0-60.0 sec  9.88 MBytes  1011 KBytes/sec
[  3] 60.0-70.0 sec  9.56 MBytes   979 KBytes/sec
[  3] 70.0-80.0 sec  9.38 MBytes   960 KBytes/sec
[  3] 80.0-90.0 sec  9.38 MBytes   960 KBytes/sec
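
A run like the one above can be requested with something along these lines (the server address is a placeholder; -w asks for the 640KB window discussed above, -f K reports in KBytes/sec):

iperf -c "$SERVER" -p 443 -w 640K -f K -t 120 -i 10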

I admit I did not do a lot of further testing and tuning, since I got pretty close to the average value that was achievable according to the intercontinental speedtest measurements. :)

ToDo: 8 and 16 parallel TCP streams can grab even more bandwidth and perform at a total of 1.2-1.3 MiB/s, so there is some more room for tuning.
But that may also just be fooling the rate-limiting algorithms in the several routers between the endpoints for a bit longer; also, streaming a single movie over multiple TCP streams is not feasible, so I think I am pretty much done for now. :)

EDIT: Another article about Windows 7 TCP tuning suggests:
netsh int tcp set global congestionprovider=ctcp
for better broadband utilization. :)
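
Whether the change took effect can be checked afterwards with:

netsh int tcp show global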
