I am trying to copy a 4GB file from a Fly machine to my local machine. I have connected to WireGuard in the same region as the machine (ewr); I am based in the UK.
I used fly ssh issue --agent and I'm using scp to copy the file from the app.internal address to my local machine. I'm only getting 23KB per second, and it's going to take 35 hours to copy the file.
Any ideas why it's so slow, or anything I can do to speed it up?
That's odd. Can you try connecting to WireGuard in lhr and using scp from there? The Fly private network is global, so all machines are accessible from your local gateway.
Hi Lillian, I actually tried it from lhr first; I read about someone else who had a problem because their WireGuard peer was in a different region, which is why I swapped to ewr.
Hello @deano-fury, could you share an mtr or traceroute report to the WireGuard gateways (the server IPs in your generated WireGuard configuration) in ewr and lhr? We cannot see any connectivity issue between our gateways in lhr and ewr, so we think something is probably happening on the path from your ISP to us.
It is also possible that your ISP applies some kind of QoS against WireGuard traffic, because it is a UDP-based VPN protocol and does not look like any other commonly used protocol. If that is the case, you'll need to either talk to your ISP or work around it by serving the file you need to download via an alternative method, such as an authenticated HTTPS endpoint, instead of over the WireGuard connection.
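For illustration, a minimal sketch of such an authenticated endpoint using only the Python standard library (the file path and credentials below are placeholders, and in practice you would terminate TLS in front of this, e.g. via Fly's edge):

```python
import base64
import http.server

USERNAME, PASSWORD = "fly", "change-me"    # placeholder credentials
FILE_PATH = "/data/bigfile.bin"            # placeholder path to the large file

def auth_ok(header):
    """Return True if an HTTP Basic Authorization header matches our credentials."""
    expected = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    return header == f"Basic {expected}"

class AuthedFileHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if not auth_ok(self.headers.get("Authorization")):
            # Reject unauthenticated requests so the file isn't actually exposed.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="file"')
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        with open(FILE_PATH, "rb") as f:
            while chunk := f.read(1 << 20):    # stream the file in 1MiB chunks
                self.wfile.write(chunk)

def serve(port=8080):
    """Start the file server (blocks); run this on the machine."""
    http.server.HTTPServer(("", port), AuthedFileHandler).serve_forever()
```

Run serve() on the machine, expose the port as a Fly service, and download with curl -u fly:change-me. If that is fast while scp over WireGuard is slow, QoS on the WireGuard path becomes much more likely.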
I use Tailscale as well; I believe that uses WireGuard, and I get good throughput with it.
Here is the tracepath output:
tracepath lhr1.gateway.6pn.dev
1?: [LOCALHOST] pmtu 1500
1: _gateway 4.090ms
1: _gateway 51.168ms
2: _gateway 42.147ms pmtu 1400
2: 81.45-31-62.static.virginmediabusiness.co.uk 58.127ms
3: no reply
4: perr-core-2a-ae11-0.network.virginmedia.net 25.663ms
5: no reply
6: 86.85-254-62.static.virginmediabusiness.co.uk 67.427ms asymm 8
7: no reply
8: be2348.ccr41.lon13.atlas.cogentco.com 30.827ms asymm 13
9: be2178.rcr51.lon17.atlas.cogentco.com 33.693ms asymm 15
10: no reply
11: no reply
12: no reply
13: no reply
14: no reply
15: no reply
16: no reply
17: no reply
18: no reply
19: no reply
20: no reply
21: no reply
22: no reply
23: no reply
24: no reply
25: no reply
26: no reply
27: no reply
28: no reply
29: no reply
30: no reply
Too many hops: pmtu 1400
Resume: pmtu 1400
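(As an aside on the pmtu 1400 line above: the drop below 1500 is expected with WireGuard, since encapsulation adds per-packet overhead. A rough calculation, using the standard WireGuard framing sizes:)

```python
# Per-packet overhead added by WireGuard encapsulation (standard framing):
#   message type (4) + receiver index (4) + nonce counter (8) + Poly1305 tag (16)
WG_FRAMING = 4 + 4 + 8 + 16      # 32 bytes
UDP_HEADER = 8

overhead_ipv4 = 20 + UDP_HEADER + WG_FRAMING   # outer IPv4 header is 20 bytes
overhead_ipv6 = 40 + UDP_HEADER + WG_FRAMING   # outer IPv6 header is 40 bytes

print(1500 - overhead_ipv4)   # 1440: largest tunnel MTU over an IPv4 underlay
print(1500 - overhead_ipv6)   # 1420: largest tunnel MTU over an IPv6 underlay
```

A gateway advertising 1400 sits safely under both figures, so the pmtu reduction itself is normal and not evidence of a problem on the path.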
mtr wasn't as successful:
framework-13 (192.168.100.133) -> lhr1.gateway.6pn.dev    2025-10-07T22:04:25+0100
Keys: Help Display mode Restart statistics Order of fields quit
Packets Pings
Host Loss% Snt Last Avg Best Wrst StDev
1. _gateway 0.0% 19 2.5 4.0 2.4 7.6 1.8
2. 81.45-31-62.static.virginmediab 94.4% 19 3.7 3.7 3.7 3.7 0.0
3. (waiting for reply)
Is there a way I can install Tailscale and get sshd to listen on that address too?
I just tried running Caddy and serving the file over Tailscale and got the same speeds, so maybe it is the WireGuard. Are there any issues with the host itself? The Fly machine ID is 286555df750e68; is that something you could check?
I just tried to scp a file from another box running Tailscale in the US, and got 1.8MB/s.
I get the same behaviour when tethered to my mobile phone: it starts at about 100KB/s then drops to 50KB/s when copying via scp over WireGuard. It's no quicker when I use fly ssh sftp get either.
I tried testing the speed between our lhr WG gateways and the host running your machine, and got around 300MB/s over our internal WG mesh.
Could you try exposing an HTTPS endpoint on a fly public IP with basic auth (so that it isn’t actually exposed) and test download speeds that way to rule out possible QoS issues?
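If it helps, here is a rough way to time such a download from your side with just the Python standard library (the URL and credentials below are placeholders for whatever you set up):

```python
import base64
import time
import urllib.request

URL = "https://yourapp.fly.dev/bigfile.bin"          # placeholder endpoint
CREDS = base64.b64encode(b"fly:change-me").decode()  # placeholder credentials

def measure(url, creds, chunk=1 << 20):
    """Download url with HTTP basic auth; return (total bytes, MB/s)."""
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {creds}"})
    start, total = time.monotonic(), 0
    with urllib.request.urlopen(req) as resp:
        while data := resp.read(chunk):   # stream in 1MiB chunks
            total += len(data)
    elapsed = max(time.monotonic() - start, 1e-9)
    return total, total / elapsed / 1e6

# Example: n, rate = measure(URL, CREDS); print(f"{n} bytes at {rate:.1f} MB/s")
```

Comparing that number against what you see over WireGuard would tell us whether the slowdown is specific to the tunnel.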
Hi Peter, I raised a support request just as you posted this. I installed rclone on the machine and tried to push the file to Wasabi S3 instead. I get bursts of 20Mbps and then it drops to 2Mbps, according to the Grafana dashboard.
What bandwidth would you expect? Could it be CPU-related rather than networking?
Using Caddy and exposing it on a fixed IP, I managed to download the file at about 10MB/s, so much better. It would be good to work out why WireGuard and the rclone S3 upload are so slow.
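For comparison, rough transfer times for the 4GB file at the rates seen in this thread (taking 1GB as 10^9 bytes, and the 2Mbps sustained figure for rclone; treat these as order-of-magnitude estimates, since scp's own remaining-time estimate will differ):

```python
SIZE = 4e9    # ~4GB file, in bytes

rates = {
    "scp over WireGuard (23KB/s)": 23e3,
    "rclone to S3, sustained (2Mbps)": 2e6 / 8,
    "HTTPS on a public IP (10MB/s)": 10e6,
}

for label, bytes_per_sec in rates.items():
    hours = SIZE / bytes_per_sec / 3600
    print(f"{label}: about {hours:.1f} hours")
```

At 10MB/s the copy takes roughly seven minutes, versus roughly two days at 23KB/s, which is why the public-IP route is so dramatically better here.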