
Wednesday, July 24, 2019

Troubleshooting WireGuard VPN on Windows 10, Android and Linux

I have had my share of pain over the complexity / slowness / incompatibilities / vulnerabilities of using Cisco and similar VPN solutions, so I was keen to try WireGuard.
To me it seems the problems with WireGuard are threefold:
  1. There is not enough accumulated experience in the community (blog posts, walk-throughs, how-tos etc.) for setting up all kinds of arrangements besides the usual site-to-site and cloud VPN jump-host.
  2. Clients offer less than adequate error messages that could help with debugging / troubleshooting.
  3. Clients across platforms are not consistent.
I am writing this for two reasons:
  • helping fellow users with similar situations
  • and to give feedback to the developers (I will try to figure out where to submit reports and which of the issues are known already).
Things to fix / disambiguate / document in the various WireGuard components:
  1. The Android client does not have the nice log viewer that is part of the Windows client - and that is what helped me see what is (not) happening.
  2. The log you can export from the Android client is full of UI-related Java messages, unlike the clean log of the Windows client - this makes it really hard to comprehend what is going on.
  3. The Android client just disappears after a while (even with PersistentKeepalive set to 25), so suddenly the VPN protection is gone without any notification. This did not happen with the OpenVPN Android client, so probably I just have to tell Android not to evict / suspend the VPN app somehow.
  4. The error message "bad address" (Android client, creating a configuration from scratch) is misleading or not informative enough: I got it for example for 192.168.1.1/24 (it should be /32, or 192.168.1.0/24 for a network) - the client could correct it automatically or at least tell you what exactly is wrong.
  5. It is hard to figure out where WireGuard is logging on Linux with systemd. Is it logging at all? (See the sketch right after this list for one way to get kernel-side messages.)
    - Could not find any trace of the failed connection attempts, so it was really hard to tell whether my DNS, my port forwarding or my WireGuard config was wrong (it was the latter).
    - Could not find any message about a 192.168.1.2/24 peer entry being unreachable (overridden) when a 192.168.1.3/24 peer comes afterwards, so you have to use /32 peers even if the server interface itself talks to both clients on a 192.168.1.1/24 address.
    - The systemd startup log did not have any relevant messages either.
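For what it's worth, with the in-kernel WireGuard module there should be a way to get verbose kernel-side messages via dynamic debug - this is an assumption about the setup (it needs a kernel with dynamic debug support and debugfs mounted), not something from the troubleshooting session above:
echo module wireguard +p | sudo tee /sys/kernel/debug/dynamic_debug/control   # turn on the module's debug messages
sudo dmesg -wT    # or: journalctl -kf    (watch the kernel log for handshake attempts)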
What my mistakes and symptoms were:
  1. Accidentally mixed up a private and a public key. WireGuard just silently fails; it does not tell you that there was a connection attempt but the key was wrong. It could have been any network-related inaccessibility as well...
  2. Did not know how to configure the peer addresses as /32 each so that they don't interfere, while both can still communicate with the /24 server interface - a sketch of what that looks like is below.
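For reference, a minimal sketch of the addressing that the two points above are about - the keys, the host name and the port are placeholders, and only the parts relevant to the addressing are shown:

Server side (e.g. /etc/wireguard/wg0.conf):
[Interface]
Address = 192.168.1.1/24        # the server talks to both clients on the /24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client 1 public key>
AllowedIPs = 192.168.1.2/32     # /32 per peer, so the peers do not override each other

[Peer]
PublicKey = <client 2 public key>
AllowedIPs = 192.168.1.3/32

Client side (e.g. the Android client):
[Interface]
Address = 192.168.1.2/32        # a host address with /32, cf. the "bad address" note above
PrivateKey = <client 1 private key>

[Peer]
PublicKey = <server public key>
AllowedIPs = 192.168.1.0/24
Endpoint = <server host or IP>:51820
PersistentKeepalive = 25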

Sunday, May 3, 2015

Five times faster TCP speed on high latency, high bandwidth connections

All right, it took me quite some time to figure this out, so I will first give you some background to see if you are in the same situation, and then the solution.

My home connection in Singapore is a 100/10 Mbit/s cable line; speedtest.net measures an average 10 ms ping, 103.8 Mbit/s (12 682 KiB/s) download and 5.9 Mbit/s (714 KiB/s) upload, which is a pretty decent connection (there is 500 Mbps fiber available for those who really need it :).

The server and network I would like to access are in Hungary, on a 120/20 Mbps cable connection, so obviously that 20 Mbit/s uplink should limit my download speed to around 2441 KiB/s = 20*1000*1000/8/1024.

I did test the intercontinental connection speed with speedtest.net:
  • against Magyar Telecom, Vodafone and Telenor servers
  • during SG daytime, and during SG late evening / HU daytime as well
The daunting average numbers (an average of 3 to 5 runs for each server / time-of-day combination): 411 ms ping (RTT), 1114 KiB/s download, 168 KiB/s upload. There is also a huge variance even in the short term, and a large drop when both time zones are in daytime.

I set up OpenVPN but could not get anything more than 175 KiB/s (neither with SMB nor with a pure FTP transfer).

Next I tried SCP and SFTP without VPN: the best I could get was a much more favorable 700 KiB/s (not CPU bound; even the far-side Linux box with an Atom CPU was only at about 10% utilization). That, however, is a protocol most movie players will not stream over. :)

Then I figured I would need to go rogue (I mean raw) and set up iperf (and the necessary port forwarding on both sides). On the first run with default parameters I was back to results around 175 KiB/s!
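For reference, the baseline measurement is simply an iperf server on the far side and a client at home - the host name is a placeholder, and this is iperf2 syntax:
iperf -s                                  # on the far-side server
iperf -c my.server.example -t 60 -i 10    # at home, default window size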

After tuning the TOS, increasing the send buffer and the TCP window size, I was able to get it up to nearly 512 KiB/s. Clearly, something was still limiting it.
Windows accepted any window size I threw at it, but Linux maxed out at 320 KiB for some reason. ("Should be enough for everyone!" some Linux believers might scream.)

But in fact, calculating the Bandwidth-Delay Product (see the iperf page) for the six intercontinental per-server averages, I get for example for the Vodafone server in the evening: 1235 KiB/s * 0.516 sec (RTT) = 637 KiB of data can be in flight!

Then I just had to look up these network tuning parameters:
net.core.netdev_max_backlog = 5000         # let more incoming packets queue before the kernel starts dropping them
net.core.wmem_max = 12582912               # maximum socket send buffer: 12 MiB
net.core.rmem_max = 12582912               # maximum socket receive buffer: 12 MiB
net.ipv4.tcp_rmem = 10240 87380 12582912   # TCP receive buffer: min / default / max
net.ipv4.tcp_wmem = 10240 87380 12582912   # TCP send buffer: min / default / max

net.ipv4.tcp_window_scaling = 1            # allow TCP windows larger than 64 KiB
net.ipv4.tcp_sack = 1                      # selective ACKs help on long, lossy paths

Put these into /etc/sysctl.conf, run 'sudo sysctl -p', set the TCP window size to 640 KB (that should be enough for everyone!), and voila:
[  3] 10.0-20.0 sec  9.75 MBytes   998 KBytes/sec
[  3] 20.0-30.0 sec  9.75 MBytes   998 KBytes/sec
[  3] 30.0-40.0 sec  9.62 MBytes   986 KBytes/sec
[  3] 40.0-50.0 sec  9.44 MBytes   966 KBytes/sec
[  3] 50.0-60.0 sec  9.88 MBytes  1011 KBytes/sec
[  3] 60.0-70.0 sec  9.56 MBytes   979 KBytes/sec
[  3] 70.0-80.0 sec  9.38 MBytes   960 KBytes/sec
[  3] 80.0-90.0 sec  9.38 MBytes   960 KBytes/sec
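The client-side command for a run like the one above would be something along these lines (host name is a placeholder; iperf2 syntax, with -f K to report in KBytes):
iperf -c my.server.example -w 640K -f K -t 90 -i 10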

I admit I did not do a lot of further testing and tuning, since I got pretty close to the average value that was achievable according to the intercontinental speedtest measurements. :)

ToDo: 8 or 16 parallel TCP streams can grab even more bandwidth and perform at a total of 1.2-1.3 MiB/s, so there is some more room for tuning (see the command below).
But that may also just be fooling the rate limiting algorithms in the several routers between the endpoints for a bit longer; also, streaming a single movie over multiple TCP streams is not feasible, so I think I am pretty much done for now. :)
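The parallel-stream experiments can be done with iperf's -P option; a sketch, with the host name again a placeholder:
iperf -c my.server.example -w 640K -f K -t 60 -i 10 -P 8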

EDIT: Another article about Windows 7 TCP tuning suggests:
netsh int tcp set global congestionprovider=ctcp
for better broadband utilization. :)
