I found an old post saying UDP support was coming in February. Do we have full UDP support now?

I am testing a UDP application, but it still doesn't seem to be fully working.


We have not shipped IPv6 UDP yet; it turned out to be more difficult than we expected, and it seems less important in general.

Is IPv4 bad for your purposes?


Some clients prefer v6-only, but I will stay with v4 UDP for now.

Thanks for the update.

Hi: any timeline for IPv6 UDP?

(I haven’t checked if it works, but asking just in case).

It doesn’t, right now. It’s on our agenda, but it’s a couple of features down the list.


The use case I have for IPv6 UDP (not solvable with IPv4) is transparently proxying TCP/UDP connections.

For example, our DNS resolver always answers DNS requests from our clients with our app’s Fly IPs. That is, say, example.com from client-A gets 2a09:8080:1::abcd, while example.com from client-B gets 2a09:8080:1::1234 (assuming both of those IPv6 addresses are assigned to our Fly app).

Then, when a UDP/TCP request from client-A hits 2a09:8080:1::abcd, the Fly app (reading the mapping set by the DNS server) proxies those packets to example.com. Think of it as a kind of NAT-PT (which is fragile in itself).

This is only possible with IPv6 because of its abundance. Say I vend 100K IPv6 addresses (assuming Fly considers that fair use): I can assign one IPv6 address per client-IP per DNS query and still not exhaust my supply, while persisting a route mapping for each (client-IP, proxy-IP, dest-IP) tuple for 10 days to a month.


My ISP’s IPv4 is about 2x slower than IPv6, measured with just ping, so using IPv6 wherever possible is a no-brainer.
I’d love it if Fly could support IPv6 + UDP. For my purposes it’s not required to get the original source IP and port, so I would be fine with a proxied source IP.


I’d love to see IPv6 UDP support too. Especially for HTTP/3 over IPv6 and DNS over IPv6.


Serious question: is there an app you’d ship in April if we had v6 UDP Anycast that you can’t ship today?

(We’re definitely doing v6 UDP Anycast! The issue is where to prioritize it. It’s not a hard feature to write, but it is a super irritating and SRE-intensive feature to roll out.)


There is no mission-critical use case for us at the moment, but we had to remove the IPv6 addresses from our domains’ name servers, and we still have to move customers who ask for full IPv6 support to IPs at another hosting provider. HTTP/3 is not working at all, unfortunately, but that’s also not critical, as browsers fall back to HTTP/2 or 1.1.

So for us, this would help move everything to Fly.io, but it is not mission-critical at all.


Does IPv6 UDP work direct to a VM via its FLY_PUBLIC_IP?

I’ve been experimenting with this, and I think this thread sums up what I didn’t at first realise: UDP over IPv6 doesn’t work at all on VMs. I thought only inbound UDP over IPv6 was affected, but that’s not the case. My application requires a public UDP address that’s not load balanced. I can’t do that with IPv4, because there’s no static public IP exposed, and IPv6 doesn’t work per this thread.

What’s your app trying to do? That’s a pretty specific set of constraints!

My app registers itself in a DHT, which requires that it also be listening on the same address. Since the only public address you can listen on in Fly.io is IPv6 (FLY_PUBLIC_IP), that means I need to use UDP over IPv6 to register.

This feature is very desirable for me, and a bit of a blocker without it. Are there any plans?


Hi, while waiting for UDP-over-IPv6 support to arrive, would it be possible to add an option that stops Fly’s DNS servers from returning AAAA records for selected hosts in the *.fly.dev. domain?

This might give us a temporary workaround for situations like the ones @tomklein and others mentioned above.

For @thomas @kurt, this is my use case: I have managed to get TFTP servers built and running on Fly, e.g. at tftpNN.fly.io. Distributed machines connect to them using their PXE network-boot options in order to chainload customised iPXE images from the Anycast TFTP servers. Those iPXE images in turn load and verify signed images of NomadOS from static HTTPS servers, which may also be built and run via a multi-stage Dockerfile on Fly, for example.
The HTTPS servers authenticate & authorize the iPXE clients before handing out any signed images and custom configurations.

However, TFTP clients will likely fail to connect to tftpNN.fly.io, because they usually prefer AAAA over A records when they run on hosts connected to dual-stack IPv4/IPv6 networks! Thus the only way to get this to work is to manually force the TFTP clients to use IPv4 only ;-(

While this is tedious during manual testing, it gets difficult with PXE boot implementations on random hardware (BIOS, network interfaces). Hiding AAAA answers from these TFTP clients selectively by manipulating (local) DNS resolvers, or trying to force IPv4-only in BIOS options, etc., are not really options with distributed swarms of embedded IoT devices or a cattle herd of servers.

I have written the TFTP server in Go using the pin/tftp client & server library. Specifically for Fly, this TFTP implementation uses port 69/UDP only, because the servers run in single-port mode, i.e. they do not negotiate a random high port for the file transfer.

Any thoughts, ideas? Thanks.


Currently, my workaround is to add only an A record (no AAAA record) to the DNS of my custom domain, pointing to the TFTP servers’ IPv4 address at tftpServerNN.fly.dev.
Then I point all TFTP clients (PXE boot, in my use case) to tftp.mycustom.domain. instead of tftpServerNN.fly.dev. (which is ephemeral anyway). This way, TFTP clients connect to port 69/UDP over IPv4 only.

Domain validation by Let’s Encrypt, while it issues a certificate for my custom domain to Fly.io, is still possible without adding an AAAA record, by using the other option of adding a CNAME for _acme-challenge instead.

This way, the HTTP/S servers may still share the same microVM deployment with the TFTP servers, and have both A and AAAA records in the custom domain for dual-stack web access to ports 80 and 443/TCP over IPv4 and IPv6 (though HTTP/3 / QUIC will still fail or time out while trying UDP over IPv6).
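Put together, the relevant records in the custom domain's zone might look roughly like this (all names and addresses are placeholders, and the _acme-challenge target is whatever Fly's certificate setup tells you to use):

```
; zone fragment for mycustom.domain. -- placeholder names and addresses
tftp             IN A      137.66.12.34     ; IPv4 of tftpServerNN.fly.dev; deliberately no AAAA
_acme-challenge  IN CNAME  <target-from-fly-certs-output>.
www              IN A      137.66.12.34     ; dual stack is fine for 80/443 TCP
www              IN AAAA   2a09:8280::1234
```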