Nginx + Next.js Deployment Issues

I have a Next.js app behind an Nginx proxy in the same container, so that I can log requests. During deployments we are running into problems with 502s and 404s.

The 502s happen because Fly starts sending requests to Nginx before Next has started, and Nginx logs a “no live upstreams while connecting to upstream” error. I have a Fly HTTP check in place that fails until Next is up and responding. As a side note, I occasionally get a weird “connect() failed (111: Connection refused) while connecting to upstream” error that doesn’t seem to cause 502s.
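For reference, the Fly HTTP check described above might look something like this in fly.toml (the interval, grace period, and path values here are illustrative, not our actual settings):

```toml
# fly.toml — illustrative values only
[[services]]
  internal_port = 3002
  protocol = "tcp"

  [[services.http_checks]]
    interval = "5s"
    timeout = "2s"
    grace_period = "15s"   # give Next time to boot before failed checks count
    method = "get"
    path = "/"
```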

The 404s happen because, as far as I know, each Next.js build is unique and produces uniquely hashed static files. After a new deployment, the old static files no longer exist, and clients get 404s until they reload the page. This gets worse when, say, the home page is cached and refuses to reload for some time.

Does anyone have suggestions on fixing these two problems?

Possible Solutions

  • 502s:
    • Add other regions as fallbacks to the localhost Next server until the health checks succeed.
  • 404s:
    • Add the last release’s static files to the new release.
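The second idea could be sketched as a build step like the following (the /old-release path and build layout are assumptions, not our actual setup):

```shell
#!/bin/sh
# Sketch: carry the previous release's hashed static chunks into the new
# build so in-flight clients can still fetch them. /old-release is an
# assumed location where the prior build's output was preserved.
mkdir -p .next/static
if [ -d /old-release/.next/static ]; then
  # -n: never overwrite files produced by the new build
  cp -Rn /old-release/.next/static/. .next/static/
fi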

Nginx config:

worker_processes auto;

events {
    worker_connections 65536;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    client_max_body_size 50m;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    server_tokens off;
    gzip off;

    # The log_format body was truncated in the original post;
    # the fields below are representative.
    log_format json escape=json '{'
        '"time":"$time_iso8601",'
        '"method":"$request_method",'
        '"uri":"$request_uri",'
        '"status":$status,'
        '"request_time":$request_time'
    '}';
    access_log /dev/stdout json;
    error_log stderr error;

    server {
        listen 3002;
        listen [::]:3002;

        location / {
            proxy_pass http://localhost:8138;
        }
    }
}

In case anyone reading this is interested in our solution, here is what we did:

  • For the 502s, we first set output: 'standalone' in next.config.js, with the corresponding changes to our Dockerfile. This reduced our image from 1.15GB to 338MB. Next, we added custom logging to the server.js file produced by next build. Logging was the whole reason we were using Nginx in the first place, so the custom logging let us remove Nginx entirely. That, in turn, cut our startup time by 2s, and hopefully means no more 502s.
  • For the 404s, we moved the static files to a CDN and configured it to retain several versions of them.
