First, it looks like you are using https://my-api.fly.dev for your API. Nginx is failing to get a response back from it, resulting in the 502. Can you access https://my-api.fly.dev directly, e.g. in a browser?
If not, that explains the issue: if you can't access it, nginx won't be able to either. In that case you'd need to debug why the API app isn't working, for example by running flyctl logs for that app.
If the API is running but nginx can't access it, perhaps https://my-api.fly.dev is set not to respond to the public. I have no idea how that app is set up in its fly.toml, but if it is not configured to respond to public requests on an exposed port 443 (e.g. via the TLS handler provided by Fly), you would need to proxy to it from your nginx app using the private .internal domain instead. For example:
proxy_pass http://my-api.internal:8080;
(or whatever its internal_port value is set as).
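Putting that together, a minimal nginx location block might look like the sketch below. The app name and port are placeholders; substitute your API app's actual name and its internal_port from fly.toml.

```nginx
location / {
    # .internal names resolve over Fly's private 6PN network;
    # fdaa::3 is Fly's internal DNS resolver.
    resolver [fdaa::3]:53;
    proxy_pass http://my-api.internal:8080;
}
```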
The other possible issue is the Host header, i.e. setting proxy_set_header Host $host;. I'm not sure that is needed; again, it would depend on your API and what it's expecting.
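If the API does route on the Host header, it can be set explicitly alongside the proxy_pass; a sketch (app name and port are placeholders):

```nginx
location / {
    proxy_pass http://my-api.internal:8080;
    # Forward the Host the client originally requested,
    # plus the client's IP for logging on the API side.
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```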
Then try connecting to each domain and check it works as expected.
So setting proxy_pass http://my-api.internal:3000; in nginx.conf works fine. But when all machines in my-api are suspended, mydomain.com times out with a 504.
I changed the fly.toml to min_machines_running = 1 and hope this solves it. Is there any other approach? Thanks in advance.
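For reference, that setting lives in the API app's fly.toml; a sketch assuming the standard http_service section (port value is a placeholder):

```toml
[http_service]
  internal_port = 3000
  auto_stop_machines = true
  auto_start_machines = true
  # Keep one machine alive so nginx always gets a timely response
  min_machines_running = 1
```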
You can either proxy using the public name.fly.dev or your private name.internal one. It depends how you want it to work, whether you want the API available to requests not using custom-domain.com etc. That’s all up to you.
But yes, you will get a 502 or 504 (I always forget which is which) from nginx if it proxies to the upstream (in this case the API) and doesn't get a response within its expected time. It won't wait forever; it gives up. I would assume its request is arriving while there are 0 machines running, and by the time one starts, nginx has already given up and returned an error code.
So yep, the solution would be either to keep at least one API machine running at all times, or to increase the nginx proxy timeout value to X seconds (to allow time for one to start).
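If you go the timeout route, the relevant nginx directives are proxy_connect_timeout and proxy_read_timeout; a sketch with illustrative values (tune them to how long your machines actually take to boot):

```nginx
location / {
    proxy_pass http://my-api.internal:3000;
    # Allow extra time for a suspended machine to start
    # before nginx gives up with a 504.
    proxy_connect_timeout 30s;
    proxy_read_timeout 60s;
}
```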