Running flyctl deploy --remote-only --config ./apps/prod/fly.toml:
==> Verifying app config
Error server returned a non-200 status code: 504
Same.
Yep… production is broken again. How frustrating.
Hello, we experienced an issue with our deployment cluster that caused a roughly 10-minute interruption. It should be stable again, so you can continue your deployments. Sorry for the inconvenience!
Appreciate the rapid response. Releases appear to be going through now, but they’re getting stuck on health checks, or at least taking much longer than usual. Can you confirm whether that’s expected?
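In case it helps anyone else watching a deploy hang at this stage, this is roughly how I’m keeping an eye on it (the app name is a placeholder, not our actual app):

```
# show the current instances and the state of their health checks
flyctl status --app my-app

# list recent releases and whether the latest rollout went through
flyctl releases --app my-app
```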
The status page says it’s resolved, but my deployments are still stuck at pending. @wjordan Do you have an update? This is blocking a critical path to hotfixing our application today.
This has just happened again. Thankfully, it managed to start one instance. A multi-region, multi-instance app being taken down without a re-deployment seems extremely sketchy.
I’m also having deployment failures in the MIA region, and the fly.io dashboard will not load.
I just got a successful deploy to LAX.
Things went back to normal here as well.
Hi folks, sorry for the inconvenience. The service became unstable again and caused a second interruption to deployments. It should once again be stable, and we’re still actively monitoring and investigating the root cause to try to prevent any further issues.
@bkspace you’re experiencing an unrelated application issue: your app is exiting with an out-of-memory error, which is why it keeps restarting (apps automatically restart after they crash, and re-deploy if they crash quickly). You can find the exact error in your app logs.
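If it helps, something like the following is a quick way to confirm it on your side; the app name and memory size are just examples, not values from your config:

```
# tail the app logs to see the out-of-memory error and the restart loop
flyctl logs --app my-app

# if it really is an OOM, one option is to give the VM more memory (value in MB)
flyctl scale memory 512 --app my-app
```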
@wjordan I have not been able to deploy all day.
This is what I see after it takes super long to build:
[+] Building 815.4s (1/1) FINISHED
=> ERROR [internal] load remote build context 815.4s
------
> [internal] load remote build context:
------
Error failed to fetch an image or build from source: error building: error during connect
I’ve also tried with --local-only
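In full, the local fallback I tried was roughly this (same config path as the remote attempt above):

```
# build with the local Docker daemon instead of Fly's remote builder
flyctl deploy --local-only --config ./apps/prod/fly.toml
```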
Thank you, we’re looking into that. Strange, as the app has been running smoothly for weeks afaik. Perhaps all the traffic going to one instance, due to the restarts, is causing the issue.
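In the meantime we’ll probably run an extra instance so a single VM isn’t absorbing all the traffic while the other restarts; the count and app name below are just an example, not our actual setup:

```
# run two instances so traffic isn't concentrated on one VM during restarts
flyctl scale count 2 --app my-app

# confirm the resulting VM count and size
flyctl scale show --app my-app
```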