Sunday, September 27, 2015

Process iperf output for high-latency, high-bandwidth broadband

Well, after my last post I got the time to analyze why WinSCP, and the SFTP protocol in general, cannot push a single TCP connection up to the maximum available bandwidth: one transfer usually maxes out at around 400 KBytes/sec, while 4-8 parallel transfers do use up the whole remote uplink (~10 Mbit/sec cable uplink = ~1.25 MBytes/sec, or ~1220 KBytes/sec).
I put together (copy&paste) a long script to test several send buffer sizes and TCP window sizes. Iperf gives you output like this:
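The per-connection ceiling is consistent with the bandwidth-delay product: a fixed window can only carry window/RTT bytes per second, no matter how fat the pipe is. A quick sanity check in bash, where both numbers are assumptions for illustration (the post does not state the RTT):

```shell
# Bandwidth-delay-product sanity check. Both figures are illustrative
# assumptions: a 64 KByte default window and a 160 ms round-trip time.
WINDOW_KB=64
RTT_MS=160
# Max single-stream throughput = window / round-trip time
echo "$(( WINDOW_KB * 1000 / RTT_MS )) KBytes/sec"
```

With these assumed numbers the ceiling comes out at exactly the ~400 KBytes/sec observed per connection, which is why growing the window (or opening more connections) helps.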
------------------------------------------------------------
Client connecting to 222.165.17.7, TCP port 443
TCP window size:  512 KByte (WARNING: requested  256 KByte)
------------------------------------------------------------
[  3] local 192.168.121.2 port 36327 connected with 222.165.17.7 port 443
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  6.06 MBytes   621 KBytes/sec
[  3] 10.0-20.0 sec  8.19 MBytes   838 KBytes/sec
[  3] 20.0-30.0 sec  8.31 MBytes   851 KBytes/sec
[  3] 30.0-40.0 sec  8.38 MBytes   858 KBytes/sec
[  3] 40.0-50.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 50.0-60.0 sec  8.00 MBytes   819 KBytes/sec
[  3] 60.0-70.0 sec  7.62 MBytes   781 KBytes/sec
[  3] 70.0-80.0 sec  8.31 MBytes   851 KBytes/sec
[  3] 80.0-90.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 90.0-100.0 sec  8.12 MBytes   832 KBytes/sec
[  3] 100.0-110.0 sec  8.19 MBytes   838 KBytes/sec
[  3] 110.0-120.0 sec  8.00 MBytes   819 KBytes/sec
[  3]  0.0-120.2 sec  95.5 MBytes   814 KBytes/sec
[  3] MSS size 1248 bytes (MTU 1288 bytes, unknown interface)

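The measurement script itself is not shown here; a minimal sketch of what such a loop could look like follows. The iperf2 flags (-t duration, -i report interval, -w window, -m print MSS) are real, but the host, port, the tested sizes, and the way the send buffer is actually applied are all assumptions:

```shell
# Hypothetical reconstruction of the measurement loop -- not the original script.
# It only echoes the iperf commands (dry run); drop the echo to really measure.
# How the send buffer is actually set (e.g. via net.ipv4.tcp_wmem) is left out;
# the "K send buffer" line just labels the output for the parsing script.
HOST=222.165.17.7
for SNDBUF in 64 128 256 512; do
    echo "Testing with ${SNDBUF}K send buffer"
    for WIN in 256 512 768 1024; do
        echo iperf -c "$HOST" -p 443 -t 120 -i 10 -w "${WIN}K" -m
    done
done
```

Note the "K send buffer" marker lines: the parser keys on exactly that phrase to pick up the send buffer column.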

I then ran the following script over the collected iperf output to produce a CSV, using nothing but bash:
# UTF-8 byte-order mark so Excel detects the encoding
echo -ne "\xEF\xBB\xBF"
echo "Send buffer;MTU;MSS;TCP window;Speed"

while read -r TOPROCESS
do
    case "$TOPROCESS" in
        *buffer* )
            # "Testing with NNK send buffer" marker lines: keep the size
            TOPROCESS="${TOPROCESS/K send buffer*/}"
            SENDBUFFER="${TOPROCESS/* /}"
            ;;
        TCP* )
            # "TCP window size:  512 KByte (WARNING: ...)": keep the granted size
            TOPROCESS="${TOPROCESS/ [KM]Byte (WARNING*/}"
            TOPROCESS="${TOPROCESS/TCP window size: /}"
            TOPROCESS="${TOPROCESS/ /}"
            # MByte-sized windows come back as 1.00/1.12/1.25; convert to KBytes
            case "$TOPROCESS" in
                1.00 )
                    WINSIZE="1024" ;;
                1.12 )
                    WINSIZE="1152" ;;
                1.25 )
                    WINSIZE="1280" ;;
                * )
                    WINSIZE="$TOPROCESS"
            esac ;;
        *MSS* )
            # "MSS size 1248 bytes (MTU 1288 bytes, ...)": extract both numbers
            MSS=${TOPROCESS/* size /}
            MSS=${MSS/ bytes \(*}
            MTU=${TOPROCESS/*MTU /}
            MTU=${MTU/ bytes,*}
            ;;
        *Bytes/sec )
            # Measurement lines: strip everything but the bandwidth figure
            SPEED=${TOPROCESS/ [KM]Bytes\/sec*/}
            SPEED=${SPEED/ Bytes\/sec*/}
            SPEED=${SPEED/*Bytes/}
            SPEED=${SPEED##* }
            # Speeds of 1 MByte/sec and above come as "1.xx"; convert to whole KBytes
            case "$SPEED" in
                1.*    )
                    SPEED=${SPEED#1\.}
                    SPEED=${SPEED#0}
                    SPEED=$((SPEED*1024))
                    SPEED=${SPEED%??}
                    SPEED=$((SPEED+1024))
                ;;
                0.00 )
                    SPEED="0"
                ;;
            esac
            echo "$SENDBUFFER;$MTU;$MSS;$WINSIZE;$SPEED"
            ;;
        *0.0-10.0*Bytes/sec | Client\ connecting* | *connected\ with* | --------------* | *Interval*Transfer*Bandwidth* )
            # echo "Dropping connection ramp-up measurement"
            # echo "Dropping connecting/connected lines"
            # echo "Dropping separator lines"
            # echo "Dropping header lines"
            ;;
        * )
            echo "$TOPROCESS"
    esac
done < bandwidth-measurements.txt

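The fractional-speed branch above is worth unpacking: for a reading like "1.02 MBytes/sec" it converts the hundredths into KBytes using integer-only bash arithmetic. The same steps in isolation, on a sample value:

```shell
# Step-by-step view of the "1.*" branch for a reading of "1.02" MBytes/sec:
SPEED="1.02"
SPEED=${SPEED#1\.}        # "02"  -- keep the hundredths
SPEED=${SPEED#0}          # "2"   -- drop leading zero to avoid octal arithmetic
SPEED=$((SPEED*1024))     # 2048  -- hundredths of a MByte, scaled to KBytes*100
SPEED=${SPEED%??}         # "20"  -- chop two digits, i.e. integer-divide by 100
SPEED=$((SPEED+1024))     # add back the whole MByte
echo "$SPEED"             # 1044, i.e. roughly 1.02*1024 KBytes
```

The truncation loses at most a few KBytes/sec, which is well below the measurement noise here.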

This outputs a UTF-8 CSV (with BOM, semicolon-separated) that Excel opens directly, but the MSS and MTU readings are unfortunately always from the previous measurement, since iperf prints the MSS/MTU line only after the bandwidth lines that trigger the CSV rows. I did not bother fixing this, since they were the same for me across the whole measurement. :)

The result did highlight a few things:
  • there were probably some intermittent errors here and there
  • the send buffer size does not seem to matter much; there are good results with 64K and 512K alike
  • TCP windows below 768 KBytes are useless, although I did not try windows large enough to see the speed decline again
  • I should probably rerun the tests with fewer buffer sizes and window sizes from 768K up to ~5M
  • or just allow the sending Linux box to scale its TCP window very high :)
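For that last option, the sender's window growth on Linux is governed by a handful of sysctls. The settings below are real knobs, but the values are illustrative examples, not tuned recommendations from these measurements:

```shell
# Illustrative sysctls to let the sending box scale its TCP window high
# (example values only; run as root, and tune max sizes to your BDP):
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"   # min default max, in bytes
```

With autotuning allowed to grow the send buffer toward the bandwidth-delay product, a single connection can in principle fill the uplink without any per-application window fiddling.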
