Any recommendations for handling nginx DNS caching when used as a reverse proxy?

Hi everyone,

Just checking if anyone has had similar issues when nginx is used as a reverse proxy:

I basically have one app A running nginx and using proxy_pass to forward /api requests to another app B, with app-name.internal as the host.
What I noticed is that if I redeploy app B, so that its IPv6 address changes, nginx in app A seems to keep using the stale IPv6 address of the previous instance. I checked with dig inside the nginx instance that the IPv6 address is up to date, so it looks like nginx is caching the DNS lookup result.
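For reference, here's a minimal sketch of the kind of config involved (app name, port, and path are placeholders, not my actual setup):

```nginx
server {
  listen 8080;

  location /api/ {
    # With a literal hostname, nginx resolves app-name.internal once,
    # when the config is loaded, and reuses that address until the
    # next reload/restart.
    proxy_pass http://app-name.internal:3000;
  }
}
```

That startup-time resolution would explain why a redeploy of app B leaves app A pointing at the old address.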

Any recommendation on how to handle this? Should the deploy of app B just restart app A to clear the cache, or is there a better way to handle this?

I doubt DNS-server-side caching is the problem, as I see a TTL of just 5s on .internal queries:

```
/ # nslookup -debug <app>.internal
Server:		fdaa::3
Address:	fdaa::3#53

Non-authoritative answer:
	<app>.internal, type = AAAA, class = IN
    ->  <app>.internal
	has AAAA address fdaa:0:dead:aaaa:b3ef:1111:f2:2
	ttl = 5
    ->  <app>.internal
	has AAAA address fdaa:0:dead:aaaa:be3f:3333:f4:2
	ttl = 5
    ->  <app>.internal
	has AAAA address fdaa:0:dead:aaaa:b33f:5555:f8:2
	ttl = 5
    ->  <app>.internal
	has AAAA address fdaa:0:dead:aaaa:beef:7777:f16:2
	ttl = 5
```

Connection pooling could be the issue here… maybe ICMP messages aren’t being sent to nginx appropriately for it to know that the previously reachable addresses are now gone?

pinging @charsleysa, they may know what’s happening here.

btw, if you want private IP addresses to (mostly) remain the same after deploys, you could use the trick outlined here: Can an instance have a persistent network identity? - #7 by kurt

@ignoramous I don’t think the TTL matters if nginx itself is caching the DNS lookup result, right? That’s what I’m suspecting here, because I get the right result with dig, but the access logs show nginx using a different IP. So it must be something in nginx’s handling of DNS lookups; I suspect it caches them, just like JVMs etc.

Though now that I've looked at the nginx docs, I think the resolver directive in ngx_http_core_module might get around my problem: setting the resolver to fdaa::3 forces nginx to go through the Fly DNS resolver, honoring the same 5s TTL.
I’ll try that and reply back if it works.


Per this Server Fault answer, nginx v1.1.9+ shouldn’t cache DNS answers for longer than their TTL. Per this Stack Overflow answer, however, nginx caches DNS answers forever for fixed hostnames.

Update: seems to work, I just added

```nginx
server {
  resolver [fdaa::3] ipv4=off valid=5s;
  set $gtw "http://${GATEWAY_NAME}.internal:3000";
  # (rest of config ...)
}
```

Then $gtw is used in the proxy_pass directive, so nginx re-resolves the hostname instead of caching it forever.
The additional valid=5s override is probably not needed, as nginx should follow the DNS record's TTL by default.
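For completeness, a sketch of how that variable would then be referenced (the location path is a guess, not the actual config):

```nginx
location /api/ {
  # Using a variable in proxy_pass forces nginx to resolve the
  # hostname at request time via the configured resolver, instead
  # of once at config load.
  proxy_pass $gtw;
}
```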


@amine you can also add it to the http-level config so you don’t need to repeat it in every server block; that’s what we do, since we handle various subdomains with the same Fly app.

```nginx
http {
    # Use NGINX's non-blocking DNS resolution
    resolver [fdaa::3]:53;
    # ...
}
```