[error] 538#538: *6 upstream sent too big header while reading response header

We are on a hobby plan testing out a simple Laravel application using Octane (Swoole).

While using the Laravel Socialite library, we get a 502 Bad Gateway on the accounts.google.com callback with the error:

[error] 538#538: *6 upstream sent too big header while reading response header from upstream, client: 172.16.141.90, server: _, request: "GET /auth/google/callback?[...] HTTP/1.1", upstream: "http://127.0.0.1:8000/auth/google/callback?[...]", host: "dealstalker.io", referrer: "https://accounts.google.com/"

We have found this is an issue with nginx’s fastcgi configuration: php - how to fix upstream sent too big header while reading response header from upstream? - Stack Overflow
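The fix suggested there is along these lines (values are illustrative; we haven’t found where to set them on Fly):

# fastcgi buffer settings commonly suggested for this error (illustrative values)
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;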

Is there a way to manage these settings or is this a limitation of the hobby plan?

Regards.

Hello!

TL;DR: I believe I’ve fixed this for you, so all you should need to do is deploy again. That should pick up the changes made here: resolve "upstream sent too big header" error by fideloper · Pull Request #2 · fly-apps/laravel-docker · GitHub

(This error is NOT related to anything specific to Fly, and Fly doesn’t limit your account in this manner).


Some more notes on what’s going on here:

Luckily, that error is not related to any Fly.io limits; it’s a matter of configuring the Nginx settings built into the Docker setup that the fly launch command creates for you.

It is possible to fix that yourself, but I’ll bet others hit this error too - so I added a fix. The configuration is found in this repo: GitHub - fly-apps/laravel-docker: Base Docker images for use with Laravel on Fly.io

The Config

The fastcgi config works for non-Octane Laravel, but Octane doesn’t use PHP-FPM (and thus the fastcgi config isn’t relevant).

So instead we need something else there - the equivalent Nginx config for “regular” HTTP proxying is proxy_buffers and proxy_buffer_size.
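As a rough sketch, those directives live in the location block that proxies to Octane and look something like this (the exact values baked into the base image may differ):

# illustrative proxy buffer settings for the Octane reverse proxy
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_buffers 8 16k;      # number and size of buffers per connection
    proxy_buffer_size 16k;    # buffer used for the first part of the response (headers)
}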


After deploying, to confirm the configuration is there, you can do this:

# From your local project directory, SSH into the instance
cd /path/to/your/laravel/project
fly ssh console

# View the nginx config:
cat /etc/nginx/sites-enabled/*

You should see the buffer configuration present in that file.

Let me know if you don’t see it!

(I just saw it on a new application I launched after this change, but I want to make sure your deployment picks up the latest base Docker image that has the new configuration).


Thanks for the amazing answer! It looks like it pulls the image alright, but it gets stuck on the health check:

No machines in group app, launching one new machine
  [1/1] Waiting for 48edd71a34e748 [app] to become healthy: 0/1
WARN failed to release lease for machine 48edd71a34e748: lease not found
Error: timeout reached waiting for healthchecks to pass for machine 48edd71a34e748 failed to get VM 48edd71a34e748: Get "https://api.machines.dev/v1/apps/dealstalker/machines/48edd71a34e748": net/http: request canceled

Looking at the monitoring tab, it’s looping trying to start nginx:

2023-04-27T13:25:52.448 app[48edd71a34e748] mad [info] 2023-04-27 13:25:52,448 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

2023-04-27T13:25:52.449 app[48edd71a34e748] mad [info] 2023-04-27 13:25:52,448 INFO exited: nginx (exit status 1; not expected)

2023-04-27T13:25:53.451 app[48edd71a34e748] mad [info] 2023-04-27 13:25:53,450 INFO spawned: 'nginx' with pid 549

2023-04-27T13:25:53.460 app[48edd71a34e748] mad [info] nginx: [emerg] "proxy_busy_buffers_size" must be equal to or greater than the maximum of the value of "proxy_buffer_size" and one of the "proxy_buffers" in /etc/nginx/nginx.conf:67

2023-04-27T13:25:54.317 proxy[48edd71a34e748] mad [error] failed to connect to machine: gave up after 15 attempts (in 7.998565663s)

2023-04-27T13:25:54.462 app[48edd71a34e748] mad [info] 2023-04-27 13:25:54,462 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

2023-04-27T13:25:54.462 app[48edd71a34e748] mad [info] 2023-04-27 13:25:54,462 INFO exited: nginx (exit status 1; not expected)

Thank you! Looks like my Octane-specific settings there are no good.

I’ll adjust that now and let you know.

Yes, it looks like the proxy_busy_buffers_size has to be bumped up. At first I got juked by the [info] tag in the fly logs, but the [emerg] gave it away! :smiley:

Thanks for your help!


Actually I’m going to remove that setting instead of bumping it up - I read that Nginx calculates it automatically if you don’t set it explicitly - so let’s try that first!
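For context, the constraint Nginx enforces is that proxy_busy_buffers_size must be at least the larger of proxy_buffer_size and one buffer from proxy_buffers; when it isn’t set explicitly, Nginx derives a compatible value from those two directives. So the config ends up looking roughly like this (values illustrative, not necessarily what the base image uses):

# illustrative: let Nginx derive proxy_busy_buffers_size on its own
proxy_buffers 8 16k;
proxy_buffer_size 16k;
# proxy_busy_buffers_size intentionally not set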

When this GitHub Actions run finishes (5-10m), you should be good to try again:


(Let me know if that doesn’t work for you!)

Yes, I was testing some stuff; it works like a charm now. Thank you for the help! :smiley:


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.