Very slow network connections between EWR and us-east-1

Hi, I am running an application in EWR and connecting to a Postgres database hosted by Neon in us-east-1. When I connect through PgBouncer and/or directly to our database instance, it takes 10 minutes to negotiate the connection. I had this problem before in IAD and moved our app to EWR, but now the problem is back. It doesn't matter what query I run; the connection is this slow both on our DB migration machines and on the API hosts serving web traffic. Has anyone else seen issues like this before?

:waving_hand: Could you try an mtr or traceroute from inside one of your Fly machines to the IP of your database in us-east-1? If you could run a network speed test without Postgres, that would also help, though I understand that if the other side is just a managed database instance this might not be straightforward. Right now things look OK from our side in EWR, and we're not seeing slow speeds to S3 in us-east-1, at least.

(Also: by “negotiate the connection” do you mean the TCP connection takes 10 minutes to establish, or something is very slow after TCP is established?)
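One way to separate those two cases is to time the TCP handshake by itself, with no Postgres or TLS traffic at all. A minimal Python sketch (the helper name is mine, and the endpoint in the comment is just the hostname from this thread):

```python
import socket
import time

def time_tcp_connect(host: str, port: int, timeout: float = 30.0) -> float:
    """Time only the TCP handshake, in milliseconds.

    No application-level (Postgres/TLS) bytes are sent, so a slow result
    here points at the network layer rather than the database.
    """
    start = time.monotonic()
    sock = socket.create_connection((host, port), timeout=timeout)
    elapsed_ms = (time.monotonic() - start) * 1000
    sock.close()
    return elapsed_ms

# Example against the pooler endpoint:
# time_tcp_connect("ep-patient-sky-a49w7ukr-pooler.us-east-1.aws.neon.tech", 5432)
```

If this alone takes minutes, the problem is below Postgres; if it is fast but queries are still slow, look at TLS/auth negotiation or the database side instead.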

sure, here is one mtr run:
mtr -r -c 20 ep-patient-sky-a49w7ukr-pooler.us-east-1.aws.neon.tech
Start: 2025-11-10T23:23:50+0000
HOST: d891117c344e48 Loss% Snt Last Avg Best Wrst StDev
1.|-- 2605:4c40:222:7a70:0:c07c 0.0% 20 1.3 1.5 1.3 1.7 0.1
2.|-- 2605:4c40:222:100::1 0.0% 20 1.5 6.6 1.0 67.1 14.9
3.|-- 2620:107:4000:bc90::f001: 0.0% 20 1.3 1.3 1.1 2.9 0.4
4.|-- 2620:107:4000:b452::f000: 0.0% 20 6.7 6.6 6.5 7.1 0.1
5.|-- 2620:107:4000:cfff::f20d: 0.0% 20 6.1 6.5 5.9 10.3 0.9
6.|-- 2620:107:4000:a891::f000: 0.0% 20 6.6 6.6 6.5 6.7 0.1
7.|-- 2620:107:4000:cfff::f209: 0.0% 20 64.9 14.3 7.1 64.9 14.8
8.|-- ??? 100.0 20 0.0 0.0 0.0 0.0 0.0
and I also ran it again against the non-pooled domain:
mtr -r -c 20 ep-patient-sky-a49w7ukr.us-east-1.aws.neon.tech
Start: 2025-11-10T23:25:23+0000
HOST: d891117c344e48 Loss% Snt Last Avg Best Wrst StDev
1.|-- 2605:4c40:222:7a70:0:c07c 0.0% 20 1.3 1.4 1.3 1.7 0.1
2.|-- 2605:4c40:222:100::1 0.0% 20 1.4 6.5 1.0 37.2 10.8
3.|-- 2620:107:4000:cc51::f002: 0.0% 20 1.6 1.6 1.5 1.7 0.1
4.|-- 2620:107:4000:ad90::f000: 60.0% 20 1701. 1721. 1645. 1774. 45.8
5.|-- 2620:107:4000:cfff::f20d: 0.0% 20 31.5 8.6 6.1 31.5 6.4
6.|-- 2620:107:4000:3f90::f007: 0.0% 20 6.6 6.6 6.5 7.1 0.1
7.|-- 2620:107:4000:cfff::f216: 0.0% 20 7.4 16.6 7.1 66.4 17.8
8.|-- ??? 100.0 20 0.0 0.0 0.0 0.0 0.0

and a TCP version:
mtr -T -P 5432 -r -c 20 ep-patient-sky-a49w7ukr-pooler.us-east-1.aws.neon.tech
Start: 2025-11-10T23:28:24+0000
HOST: d891117c344e48 Loss% Snt Last Avg Best Wrst StDev
1.|-- 2605:4c40:222:7a70:0:c07c 0.0% 20 1.6 1.6 1.4 2.8 0.4
2.|-- 2605:4c40:222:100::1 0.0% 20 1.6 3.6 0.9 30.5 7.0
3.|-- 2620:107:4000:bc90::f001: 0.0% 20 1.6 1.7 1.0 3.4 0.5
2001:504:1::a516:509:4
de-cix1.nyc.amazon.com
2620:107:4000:cc51::f002:cc20
2620:107:4000:bc90::f001:e001
4.|-- 2620:107:4000:d0d0::f003: 0.0% 20 1.8 586.6 1.7 3056. 989.2
2620:107:4000:b450::f000:b805
2620:107:4000:b452::f000:b849
2620:107:4000:d0d0::f003:2c0d
2620:107:4000:b450::f000:b807
2620:107:4000:d0d0::f003:2c0f
2620:107:4000:ad90::f000:ec06
2620:107:4000:ad90::f000:ec04
5.|-- 2620:107:4000:b450::f000: 0.0% 20 1038. 112.8 6.3 1038. 311.7
2620:107:4000:ad92::f000:ec48
2620:107:4000:cfff::f20d:7501
2620:107:4000:cfff::f20d:7c81
2620:107:4000:b450::f000:b801
2620:107:4000:b452::f000:b84b
2620:107:4000:cfff::f20d:7d81
2620:107:4000:cfff::f20d:7481
2620:107:4000:cfff::f20d:7401
6.|-- 2620:107:4000:a891::f000: 0.0% 20 6.7 6.9 6.2 10.9 1.0
2620:107:4000:cfff::f20d:7d81
2620:107:4000:a890::f000:a00b
2620:107:4000:a891::f000:a02d
2620:107:4000:cfff::f20d:7c81
2620:107:4000:cfff::f20d:7581
2620:107:4000:cfff::f20d:7d01
2620:107:4000:a891::f000:a02e
7.|-- 2620:107:4000:cfff::f202: 0.0% 20 6.9 671.6 6.3 4099. 1114.8
2620:107:4000:cfff::f202:d81
2620:107:4000:cfff::f202:d21
2620:107:4000:a890::f000:a008
2620:107:4000:a890::f000:a00b
2620:107:4000:cfff::f202:c91
2620:107:4000:cfff::f202:dd1
2620:107:4000:cfff::f202:dc1
2620:107:4000:cfff::f202:c81
8.|-- 2600:1f18:240c:543b:a8aa: 0.0% 20 1015. 566.7 6.4 4087. 1017.8
2620:107:4000:cfff::f202:d81
2620:107:4000:cfff::f202:c41
2620:107:4000:cfff::f202:c61
2620:107:4000:cfff::f202:d91
2620:107:4000:cfff::f202:d61
2620:107:4000:cfff::f202:d41
2620:107:4000:cfff::f202:dc1

mtr -T -P 5432 -n -w -r -c 50 ep-patient-sky-a49w7ukr.us-east-1.aws.neon.tech
Start: 2025-11-10T23:30:28+0000
HOST: d891117c344e48 Loss% Snt Last Avg Best Wrst StDev
1.|-- 2605:4c40:222:7a70:0:c07c:f4f4:0 0.0% 50 1.5 1.5 1.3 3.0 0.2
2.|-- 2605:4c40:222:100::1 0.0% 50 1.2 5.9 0.9 113.2 18.2
3.|-- 2001:504:1::a516:509:4 0.0% 50 2.0 1.7 1.0 5.4 0.6
2620:107:4000:cc51::f002:cc20
2620:107:4000:bc90::f001:e009
2620:107:4000:bc90::f001:e001
2001:504:36::407d:0:2
4.|-- 2620:107:4000:ad90::f000:ec07 0.0% 50 2622. 698.8 1.5 2710. 885.8
2620:107:4000:b452::f000:b84d
2620:107:4000:d0d0::f003:2c0f
2620:107:4000:b452::f000:b84c
2620:107:4000:d0d0::f003:2c0c
2620:107:4000:b452::f000:b849
2620:107:4000:b452::f000:b84e
2620:107:4000:b450::f000:b806
2620:107:4000:b450::f000:b805
5.|-- 2620:107:4000:cfff::f20d:7d01 0.0% 50 10.8 167.3 6.1 1985. 437.8
2620:107:4000:ad92::f000:ec48
2620:107:4000:cfff::f20d:7d81
2620:107:4000:cfff::f20d:7501
2620:107:4000:ad92::f000:ec4a
2620:107:4000:cfff::f20d:7481
2620:107:4000:cfff::f20d:7401
2620:107:4000:ad92::f000:ec4d
2620:107:4000:ad92::f000:ec4b
6.|-- 2620:107:4000:be11::f000:cc2e 0.0% 50 6.5 8.2 6.1 33.9 5.5
2620:107:4000:cfff::f20d:7581
2620:107:4000:be11::f000:cc2d
2620:107:4000:cfff::f20d:7501
2620:107:4000:d2d2::f003:4c42
2620:107:4000:be11::f000:cc2c
2620:107:4000:d2d3::f003:4c67
2620:107:4000:cfff::f20d:7d81
7.|-- 2620:107:4000:be11::f000:cc2f 68.0% 50 6.8 6.8 6.4 7.9 0.3
2620:107:4000:d2d3::f003:4c62
2620:107:4000:d2d2::f003:4c41
2620:107:4000:d2d2::f003:4c42
2620:107:4000:d2d3::f003:4c61
2620:107:4000:be11::f000:cc2d
2620:107:4000:be11::f000:cc2c
2620:107:4000:d2d2::f003:4c43
8.|-- 2600:1f18:240c:5454:82a4:8209:20f8:a5c1 46.0% 50 6.9 7.0 6.8 7.2 0.1

and now the connection latency (using connectable.connect) is fine, with no 10-minute wait before I can run a query.

I just noticed that this is IPv6 – one of our upstreams is indeed having occasional problems with IPv6 speeds lately, that could be related. If it’s possible to use IPv4 only instead, that might solve your issue for now. We’re currently working with them to resolve IPv6 problems.
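If your driver resolves hostnames itself, one way to force IPv4 is to look up only A records and connect by address. A hedged Python sketch (the helper name is mine; the hostaddr trick in the comment applies to libpq-based drivers and may need adapting to your stack):

```python
import socket

def ipv4_of(host: str) -> str:
    """Resolve only A records (AF_INET), skipping AAAA, to force IPv4."""
    infos = socket.getaddrinfo(host, None,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    return infos[0][4][0]  # first (family, type, proto, canonname, sockaddr)

# libpq-based drivers accept both host and hostaddr: hostaddr pins the
# address actually dialed while host is still used for TLS verification, e.g.
#   addr = ipv4_of("ep-patient-sky-a49w7ukr-pooler.us-east-1.aws.neon.tech")
#   conninfo = ("host=ep-patient-sky-a49w7ukr-pooler.us-east-1.aws.neon.tech "
#               f"hostaddr={addr} port=5432 ...")
```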

Ah, ok, I will try that. The issue just came back, and while trying to grab mtr output I got:
mtr -T -P 5432 -n -w -r -i 0.5 -c 60 ep-patient-sky-a49w7ukr.us-east-1.aws.neon.tech
Start: 2025-11-11T00:44:17+0000
HOST: 6839293a6e2458 Loss% Snt Last Avg Best Wrst StDev
1.|-- 2001:19f0:1000:b16a:0:514c:7c86:0 35.0% 60 7118. 419.6 1.2 7118. 1455.1
2.|-- ??? 100.0 60 0.0 0.0 0.0 0.0 0.0

Hello @daniel-horizon3
We have confirmed that you are encountering an issue with one of our upstreams. While their team is working on resolving the issue long-term, they have also applied a hotfix. My tests indicate that this fix has worked (for now).
Is the issue resolved for you?

ok, thanks for letting me know, I will monitor today and let you know if I see this issue crop up again.

I’m also experiencing intermittent issues connecting from my EWR server to a Neon database, this one in us-east-2. Is there any data on how that upstream is performing?

Is it possible to share your app name? We can confirm whether you’re hit with the same issues as the OP. If you could force IPv4 connectivity to your Neon DB, that should also help to narrow down whether this is what you’re experiencing.

@PeterCxy The affected app name is mhh-api-6198.

This app shouldn’t be affected by the same issue as described in this thread. Do you know if the connectivity issue is the connection being slow, or not reaching the other side at all? Might be helpful to try forcing IPv4 anyway even though it shouldn’t be affected the same way :slight_smile:

@PeterCxy It certainly does seem like IPv6 is the issue. When tunneling in via fly console, the following script starts to fail pretty quickly.

while true; do
  printf "\n"; date
  time nc -vz -6 -w 30 ep-fragrant-violet-a53h6t07.us-east-2.aws.neon.tech 5432 \
    || echo "IPv6 connection FAILED"
  sleep 1
done

This works fine when running in IPv4 (by way of nc -vz -4).

Admittedly, things have been looking significantly better over the past couple hours. In contrast, yesterday saw something like 10-15% of all attempted Neon connections timing out.

Hi, just to provide an update on this: for the OP’s case where the connection is slow due to an upstream provider, we’ve applied a band-aid fix that should mitigate the problem somewhat. Do let us know if IPv6 still gets extremely slow!

For @djs-mhh’s case, which is unrelated, I’m still not sure what is going on. It seems like some of our IPv6 traffic is being dropped AWS-side, but I am not 100% sure and we’re still investigating. We aren’t seeing connectivity problems to AWS ourselves on the host(s) running the app. In the meantime, if it is possible to force the app to use IPv4 only when connecting to the database on AWS, that should be a temporary workaround.

Yes, I have been watching our app and have not seen any issues since the fix. I did add this to our Dockerfile as well:

RUN echo "precedence ::ffff:0:0/96 100" >> /etc/gai.conf

which may have been a bit overkill, but I had issues in our app just toggling IPv4 for only the database.
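For context on that gai.conf line: glibc's getaddrinfo sorts results using the RFC 6724 policy table, and raising the precedence of ::ffff:0:0/96 makes IPv4-mapped (i.e. IPv4) addresses sort first process-wide, so every client in the container prefers IPv4 without per-connection changes. A small sketch to check what a client would try first (the helper name is mine; actual ordering depends on the resolver and gai.conf):

```python
import socket

def first_family(host: str, port: int = 5432) -> int:
    """Address family of the first getaddrinfo result, i.e. what most
    clients will attempt first. On glibc the ordering follows the
    RFC 6724 rules, tunable via /etc/gai.conf."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return infos[0][0]

# first_family("some.dual-stack.host") == socket.AF_INET would indicate
# that IPv4 is now preferred.
```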

Gotcha, thanks for the update! It looks like the last time we saw issues on our side was somewhere between 5:00am and 10:00am EST. Otherwise, it looks like things have been fairly quiet today.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.