Laravel Octane 502 Bad Gateway When Machine Starts

I am trying to run a Laravel app with Octane (RoadRunner, in this case). I am using the default config and Docker setup from fly launch. It all seems to work pretty well; however, the request that causes a machine to start up always returns a 502 Bad Gateway.

2024-02-17T14:38:13Z proxy[784e111b204428] iad [info]Starting machine
2024-02-17T14:38:13Z app[784e111b204428] iad [info][    0.072400] PCI: Fatal: No config space access function found
2024-02-17T14:38:14Z app[784e111b204428] iad [info] INFO Starting init (commit: 1a9b032)...
2024-02-17T14:38:14Z app[784e111b204428] iad [info] INFO Preparing to run: `/entrypoint` as root
2024-02-17T14:38:14Z app[784e111b204428] iad [info] INFO [fly api proxy] listening at /.fly/api
2024-02-17T14:38:14Z app[784e111b204428] iad [info]2024/02/17 14:38:14 listening on [fdaa:5:c47e:a7b:1db:8b9b:7702:2]:22 (DNS: [fdaa::3]:53)
2024-02-17T14:38:14Z runner[784e111b204428] iad [info]Machine started in 627ms
2024-02-17T14:38:15Z proxy[784e111b204428] iad [info]machine started in 1.633875101s
error.message="instance refused connection. is your app listening on 0.0.0.0:8080? make sure it is not only listening on 127.0.0.1 (hint: look at your startup logs, servers often print the address they are listening on)" 2024-02-17T14:38:16Z proxy[784e111b204428] iad [error]request.method="GET" request.id="01HPVR1V8G3WSBK8Q2KM3ZF8HP-iad"
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,920 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,920 INFO Included extra file "/etc/supervisor/conf.d/nginx.conf" during parsing
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,920 INFO Included extra file "/etc/supervisor/conf.d/octane-rr.conf" during parsing
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,923 INFO RPC interface 'supervisor' initialized
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,923 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2024-02-17T14:38:16Z app[784e111b204428] iad [info]2024-02-17 14:38:16,924 INFO supervisord started with pid 306
error.message="instance refused connection. is your app listening on 0.0.0.0:8080? make sure it is not only listening on 127.0.0.1 (hint: look at your startup logs, servers often print the address they are listening on)" 2024-02-17T14:38:17Z proxy[784e111b204428] iad [error]request.method="GET" request.id="01HPVR1V8G3WSBK8Q2KM3ZF8HP-iad"
2024-02-17T14:38:17Z app[784e111b204428] iad [info]2024-02-17 14:38:17,927 INFO spawned: 'octane' with pid 322
2024-02-17T14:38:17Z app[784e111b204428] iad [info]2024-02-17 14:38:17,928 INFO spawned: 'nginx' with pid 323
2024-02-17T14:38:18Z proxy[784e111b204428] iad [info]machine became reachable in 3.342377867s
2024-02-17T14:38:18Z app[784e111b204428] iad [info]   INFO  Server running…
2024-02-17T14:38:18Z app[784e111b204428] iad [info]  Local: http://0.0.0.0:8000
2024-02-17T14:38:18Z app[784e111b204428] iad [info]  Press Ctrl+C to stop the server
2024-02-17T14:38:18Z app[784e111b204428] iad [info]2024/02/17 14:38:18 [error] 324#324: *1 connect() failed (111: Unknown error) while connecting to upstream, client: 172.16.9.162, server: _, request: "POST /livewire/update HTTP/1.1", upstream: "http://127.0.0.1:8000/livewire/update", host: "laraoctanefly.fly.dev", referrer: "https://laraoctanefly.fly.dev/"
2024-02-17T14:38:18Z app[784e111b204428] iad [info]172.16.9.162 - - [17/Feb/2024:14:38:18 +0000] "POST /livewire/update HTTP/1.1" 502 552 "https://laraoctanefly.fly.dev/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "75.139.127.230"
2024-02-17T14:38:19Z app[784e111b204428] iad [info]2024-02-17 14:38:19,498 INFO success: octane entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-17T14:38:19Z app[784e111b204428] iad [info]2024-02-17 14:38:19,498 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-02-17T14:38:28Z app[784e111b204428] iad [info]172.16.9.162 - - [17/Feb/2024:14:38:28 +0000] "GET / HTTP/1.1" 200 7367 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" "75.139.127.230"
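Piecing the topology together from those logs (this is just my reading, not the exact vhost that fly launch generated): Fly's proxy dials the machine on port 8080, Nginx answers there and proxies to Octane/RoadRunner on 127.0.0.1:8000, and Octane itself reports it is listening on 0.0.0.0:8000. Roughly:

    # Rough reconstruction from the logs above, not the actual shipped config.
    server {
        listen 8080;                          # what Fly's proxy connects to (fly.toml internal_port)
        server_name _;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8000; # Octane (RoadRunner) inside the same machine
        }
    }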

I have the same issue, and I think it's due to Nginx not waiting long enough. It throws a 502 instead of waiting longer.
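If it were just a question of Nginx not waiting long enough, the usual knobs would be the standard proxy timeout directives (example values only):

    # Example values only; standard Nginx proxy timeout directives.
    proxy_connect_timeout 60s;
    proxy_send_timeout    60s;
    proxy_read_timeout    60s;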

@AsymetricalData good to know I'm not the only one here. It seems the default Fly Docker setup might still have a few edge cases. For now I've created a new, simplified Dockerfile specific to running dunglas/frankenphp, and everything seems to be running OK so far, apart from what I believe is a Livewire ↔ Octane issue. Startup is really fast and there's no Nginx to worry about.
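For anyone curious, here's a minimal sketch of the kind of Dockerfile I mean; the extension list, paths, and flags are placeholders rather than my exact file:

    # Minimal sketch only; adjust extensions, paths, and ports to your app.
    FROM dunglas/frankenphp

    # zip keeps composer happy; pcntl is used for Octane's signal handling.
    RUN install-php-extensions pcntl zip pdo_mysql

    COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

    WORKDIR /app
    COPY . .
    RUN composer install --no-dev --optimize-autoloader

    # Fly's proxy expects the app on 0.0.0.0:8080 (internal_port in fly.toml).
    EXPOSE 8080
    ENTRYPOINT ["php", "artisan", "octane:start", "--server=frankenphp", "--host=0.0.0.0", "--port=8080"]

(With the FrankenPHP driver, Octane runs inside the web server process itself, so there's no separate upstream to race against at boot.)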

Yes, because, correct me if I'm wrong, Nginx prefers to respond to the client right away instead of waiting.
Caddy, on the other hand, which ships with FrankenPHP, waits longer before sending any response.

Hey there!

This does in fact appear to be an issue of Nginx sending a request to Octane before Octane has started.

In fact, it’s sending the request before Octane is even able to listen for requests, so increasing timeouts in Nginx doesn’t help (Nginx just sees it as nothing being there at all, rather than something listening but not responding in time).
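(Concretely: supervisord spawns the two programs in the same instant — pids 322 and 323 in the logs above — so RoadRunner's socket simply isn't open yet when the first proxied request lands. You could brute-force that by making the nginx program wait for the port before starting, something like the hypothetical supervisord snippet below, assuming netcat is in the image — but that's more of a band-aid than a fix.)

    ; Hypothetical band-aid, not the fix described below: hold Nginx back until
    ; RoadRunner accepts connections on 127.0.0.1:8000 (assumes nc is installed).
    [program:nginx]
    command=/bin/sh -c 'while ! nc -z 127.0.0.1 8000; do sleep 0.2; done; exec nginx -g "daemon off;"'
    autorestart=true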

So, I’ve devised a fix that appears to work. It’s not my favorite fix, but I’m asking around to see if there’s something better (not just on X :stuck_out_tongue: )

Here’s what I did:

  1. Updated the Nginx config with The Fix™
  2. Overwrote the Nginx config loaded by the Dockerfile (sketched below)

I’ve included a gist because typing it all out here is very noisy, but you can try this out.
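I won't paste the whole thing here, but structurally step 2 boils down to a one-line override in the Dockerfile, along these lines (the paths here are hypothetical; the gist has the real ones):

    # Hypothetical paths: ship the patched vhost over the one the base image loads.
    COPY .fly/nginx-octane.conf /etc/nginx/sites-enabled/default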


(If that works fine for you, I’ll update flyctl so the fix gets in there too, unless I find some better way to do it.)

Correct, Caddy replaces Nginx in this case.

Thanks for looking into this. I can confirm, at least from my little test app, that the fix fixed it :metal:


Sorry to bring up an old thread… but do we know if this behavior is the same on Nginx Unit? I’ve been having issues with my Octane setup in this exact case, and I’m debating whether to stick with Nginx or switch to Nginx Unit.