Hi team,
Since around 2026-04-28 21:27 UTC, every deploy of my Phoenix app on Fly has failed because the BEAM cannot open new connections to my Supabase Postgres database. After investigating, the root cause appears to be that outbound IPv6 from `gru` is dropping 100% of packets to Supabase's IPv6 endpoint, while already-established connections keep working.
This was the canonical Fly + Supabase setup - generated by `mix phx.gen.release` with `ECTO_IPV6=true` and `ERL_AFLAGS="-proto_dist inet6_tcp"` - and it had been working reliably across many daily deploys.
**Reproduction (from inside a running app machine)**
```
# ping6 -c 10 -W 2 db.<myproject>.supabase.co
PING db.<myproject>.supabase.co (2600:1f1e:75b:4b14:1aef:2c9e:fcd6:8d12): 56 data bytes
--- db.<myproject>.supabase.co ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```
DNS resolution works correctly (returns `2600:1f1e:75b:4b14:1aef:2c9e:fcd6:8d12`).
For comparison, IPv4 to Supabase’s pooler endpoint works fine from the same machine:
```
# nc -zv -w 5 aws-1-sa-east-1.pooler.supabase.com 5432
aws-1-sa-east-1.pooler.supabase.com (54.232.77.43:5432) open
```
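To confirm this isn't ICMP-specific, the same check at the TCP layer against the direct host also fails (assuming the image's `nc` supports `-6`; BusyBox builds may not):

```shell
# Force IPv6 and attempt a TCP handshake to Postgres on the direct host.
# On the affected machines this hangs until the 5s timeout, matching the
# ping6 loss above.
nc -6 -zv -w 5 db.<myproject>.supabase.co 5432
```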
**Application-level symptom**
In the BEAM, every Postgrex/Ecto connection attempt to the direct host hangs and gets dropped from the pool queue:
```
** (DBConnection.ConnectionError) [Visor.Repo] connection not available
and request was dropped from queue after 10980ms.
```
If I instead add `socket_options: [:inet6]`, Erlang resolves the AAAA record correctly but the TCP connection times out (consistent with the `ping6` packet loss).
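For completeness, the Repo config is essentially the stock `phx.gen.release` output (a sketch; app and module names are mine):

```elixir
# config/runtime.exs (sketch of my setup, matching the generator defaults)
maybe_ipv6 = if System.get_env("ECTO_IPV6") in ~w(true 1), do: [:inet6], else: []

config :visor, Visor.Repo,
  url: System.get_env("DATABASE_URL"),
  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
  socket_options: maybe_ipv6
```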
**Environment**

- App: `visor`
- Org: `personal`
- Region: `gru`
- Machines: two app machines, both started
- Image: `visor:deployment-01KQB13CRCV2Y4GM3T8VJ3AA02`
- Direct host AAAA: `2600:1f1e:75b:4b14:1aef:2c9e:fcd6:8d12`
**What I’ve ruled out**

- DNS - resolves correctly via Erlang `:inet.gethostbyname/2` and via `getent ahosts`.
- Application code - no relevant changes; deploys were working until ~21:27 UTC.
- Supabase side - their direct host AAAA is unchanged; an unrelated PostgREST incident is active but the Postgres component shows operational; the same IPv6 endpoint is reachable from my home network.
- Connection pool exhaustion - Supabase reports 31/120 connections, no locks, no stuck migrations.
**Workaround in place**
Switched `DATABASE_URL` to the Supavisor session pooler (`aws-1-sa-east-1.pooler.supabase.com:5432`). The app is back up.
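Concretely, the workaround was just repointing the secret (credentials below are placeholders; I stayed on port 5432 / session mode, since Supavisor's transaction mode on 6543 doesn't support the prepared statements Ecto uses by default):

```shell
# Swap DATABASE_URL from the direct IPv6 host to the IPv4-reachable pooler.
# <myproject> and <password> are placeholders for my actual values.
fly secrets set -a visor \
  DATABASE_URL="postgres://postgres.<myproject>:<password>@aws-1-sa-east-1.pooler.supabase.com:5432/postgres"
```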
**Questions**

- Is there a known issue with `gru` → AWS `sa-east-1` IPv6 egress in the last 24-48h?
- Has anything changed in Fly's outbound IPv6 routing recently?
- Is there a `traceroute6` / `mtr` equivalent I can run from a machine to help debug, or can someone on your side check the path?
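For reference, these are the probes I'd run from a machine if that helps (assuming the tools are available in the image; they may need an `apk add` / `apt-get install` first):

```shell
# Numeric path trace toward the Supabase AAAA, to see where packets die.
traceroute6 -n 2600:1f1e:75b:4b14:1aef:2c9e:fcd6:8d12

# Per-hop loss report over IPv6: -6 forces IPv6, -n skips reverse DNS,
# -r emits a report after -c 20 probe cycles.
mtr -6 -n -r -c 20 db.<myproject>.supabase.co
```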
Happy to provide any logs, machine IDs, or run further tests. Thanks!