TLS alert 49 (access denied) when app connects to its own public hostname

I’m running a Rails app on Fly (lhr) that monitors websites for uptime and SSL certificate expiry. During early development I have it monitoring itself — I know that’s a bit circular and not a long-term plan, but it was working fine until recently so the change in behaviour caught my attention.

We had a certificate renewal issue: our registrar (Porkbun) had its own certificate feature enabled, which added competing DNS records that blocked Fly's ACME validation. We removed the old cert, disabled the Porkbun one, cleaned up DNS, and manually provisioned a new Fly-managed certificate.

The new cert works perfectly for all external traffic (though the old one kept being served until it expired), and the site is completely fine from the outside. But since the deleted certificate's expiry date passed, outbound TLS connections from within the app to our own public hostname fail with:

SSL_connect returned=1 errno=0 peeraddr=[2a09:8280:1::b9:80b7:0]:443 state=error: tlsv1 alert access denied (SSL alert number 49)

The connection reaches the proxy (it's a TLS alert, not a timeout), so the proxy seems to be actively rejecting the handshake rather than being unreachable.

What I’ve tried:

  • Forcing IPv4-only DNS resolution with SNI set — same error on both v4 and v6
  • Redeploying the app
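
For anyone wanting to reproduce the first bullet, the IPv4-plus-SNI probe can be sketched in Ruby roughly like this. This is a minimal sketch, not code from the app: `tls_probe` and its `ip:`/`port:`/`verify:` keyword overrides are hypothetical names I'm using so the target can be overridden (e.g. for local testing).

```ruby
require "socket"
require "openssl"
require "resolv"

# Sketch: resolve only A records (IPv4), then handshake with SNI set
# explicitly to the original hostname. All keyword args are hypothetical
# knobs for overriding the target; they are not part of the app.
def tls_probe(host, ip: nil, port: 443, verify: true)
  # Resolve A records only, so the connection cannot fall back to IPv6.
  ip ||= Resolv.getaddresses(host).find { |a| a =~ Resolv::IPv4::Regex }
  raise "no A record for #{host}" unless ip

  ctx = OpenSSL::SSL::SSLContext.new
  if verify
    ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER
    ctx.cert_store  = OpenSSL::X509::Store.new.tap(&:set_default_paths)
  else
    ctx.verify_mode = OpenSSL::SSL::VERIFY_NONE
  end

  ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(ip, port), ctx)
  ssl.hostname   = host  # SNI must carry the hostname, not the IP
  ssl.sync_close = true
  ssl.connect            # raises OpenSSL::SSL::SSLError on a TLS alert
  subject = ssl.peer_cert.subject.to_s
  ssl.close
  subject                # subject DN of whatever cert the proxy served
end
```

When the proxy rejects the handshake, `ssl.connect` raises the same `tlsv1 alert access denied` that the error above shows.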

I realise self-monitoring is a silly use case and I’ll stop doing it, so this isn’t urgent. But I’m wondering if it points to something deeper — like the proxy’s internal TLS state getting stuck during the provisioning trouble and not recovering. Should I be concerned, or is serving TLS to connections originating from within the same app just not something the proxy supports?

Hmm, connecting to itself from within an app is definitely supported. I tried to access your app using curl -6 on the exact physical host running your machines, and it seemed completely fine as well from my side. Do you happen to know what TLS versions and cipher suites your client supports?
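
For anyone else asked that question, the Ruby side can be introspected directly. This is a generic OpenSSL sketch, not taken from the app; note that `ctx.ciphers` lists the TLS ≤ 1.2 suites on a default context, while TLS 1.3 suites are configured separately:

```ruby
require "openssl"

# What a default Ruby SSL context would work with:
ctx = OpenSSL::SSL::SSLContext.new
puts OpenSSL::OPENSSL_VERSION             # the linked OpenSSL build
puts ctx.ciphers.map(&:first).join(", ")  # TLS <= 1.2 cipher names enabled
```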

Thanks for checking! I shelled into the solidq machine and tried curl directly — same result, so it’s not Ruby/OpenSSL-specific:

IPv6 (default):

root@2879440c311938:/rails# curl -v https://stayupfront.com 2>&1 | head -30
* Host stayupfront.com:443 was resolved.
* IPv6: 2a09:8280:1::b9:80b7:0
* IPv4: 66.241.124.115
*   Trying [2a09:8280:1::b9:80b7:0]:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS alert, access denied (561):
* TLS connect error: error:0A000419:SSL routines::tlsv1 alert access denied
* closing connection #0
curl: (35) TLS connect error: error:0A000419:SSL routines::tlsv1 alert access denied

IPv4:

root@2879440c311938:/rails# curl -4 -v https://stayupfront.com 2>&1 | head -20
* Host stayupfront.com:443 was resolved.
* IPv6: (none)
* IPv4: 66.241.124.115
*   Trying 66.241.124.115:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS alert, access denied (561):
* TLS connect error: error:0A000419:SSL routines::tlsv1 alert access denied
* closing connection #0
curl: (35) TLS connect error: error:0A000419:SSL routines::tlsv1 alert access denied

Same result on both.

Happy to run anything else from within the machine if it helps.

Hi! It looks like this is indeed a problem on the host running your machine: for whatever reason it did not update its in-memory cache of your certificate and was holding on to the expired one (which of course it can't serve). I've manually restarted it and it seems to be working fine now. Since your site works fine from the outside, this is probably an edge case our code didn't handle – I (or one of us) will look into it more deeply on Monday.

I’ve re-enabled my app’s monitors, and they are indeed all working fine now – thanks for looking into this!

For info, I actually deleted the cert that would later expire just before the new one was generated, so shortly before Fri, 06 Mar 2026 14:23:48 GMT. My guess is there’s something around that deleted cert not being removed from the cache, since it continued to be served right up until it expired.

Thanks again, have a great weekend, Rob

Hi, just to close the loop here, we’ve just deployed a fix and this issue should (hopefully) not happen again.

