I’m on an M1 Mac, so I have to use the remote builder. For the past several hours, I’ve received the message “Error unauthorized: not allowed” during the push stage. It was intermittent at first, but now it’s every time I try to deploy. If I try restarting my app, I get “Pulling image failed”. My app is currently down. Did I hit some type of rate limit? My region is DFW.
I can confirm that I’m encountering the same errors.
I have the same issue in my CI pipelines. The first time I do a remote deploy it fails, but the second time it passes.
EDIT: Reran the build and it still failed, so I think it’s random when it decides to work and when it doesn’t.
I can report the same problem.
Same for me. Previously reported via GitHub, too: flyctl command line fails to deploy application due to an authentication error with image registry · Issue #362 · superfly/flyctl · GitHub
Seeing the same issue from our GitHub Actions deploys.
Looking into this in a minute
This should now be fixed.
Very sorry about the issue. More details will come, but for now: We’ve had to rotate a password for our database cluster and didn’t realize our registry proxy required a change too.
Can confirm our deploys are working again. Thanks #hugOps
I’ve added a status page event as well as a postmortem: Fly.io Status - Image registry throwing 500 and displaying as "unauthorized"
While I can deploy my local image again, the freshly deployed instance does not start:
2021-02-04T12:34:37.109Z ... fra [info] Starting instance
2021-02-04T12:34:37.146Z ... fra [info] Configuring virtual machine
2021-02-04T12:34:37.148Z ... fra [info] Pulling container image
2021-02-04T12:34:45.447Z ... fra [info] Unpacking image
2021-02-04T12:35:57.427Z ... fra [info] Pull failed, retrying (attempt #0)
Would that be related?
Another manual deployment of the same version was successful.
This is a different problem. I noticed this issue and marked a specific server as “ineligible” with our scheduler.
Working for me as well now! Has anyone else noticed that the layers aren’t cached the first time the server boots up after a while? I thought there was a persistent volume behind the scenes, so the cache would always be available?
Sometimes we need to clean up the cache to free up some space. Each server has its own cache. We can optimize the cleanup eventually to only remove content that’s not useful anymore (like layers from old versions of an app).
They are cached, though. If your app instance gets scheduled on the same server twice within a range of a few days, it should be fine. Most servers have not needed their caches purged for weeks. That said, we have multiple servers in every region; if your VM gets scheduled on a different server, the cache will likely be cold.
I have deploys 1 day (24 hours) apart and they don’t seem to use the cache. Maybe they were too far apart, though; I’ll report back if it happens again.
I just realized: are you talking about the “pulling” and “unpacking” log messages? These appear even if the image is cached. We could change that behaviour with a bit of logic though (maybe).
I was talking about layer caching. If I have a curl command to install something and none of the previous steps have changed, it shouldn’t even bother to re-run that command and should instead just use the cache. However, with two builds 24 hours apart, I see that it didn’t use the cache.
@nahtnam Are you referring to the layer cache on a remote builder?
Yes. I just tested it out. Deployed once using a remote builder. Took 4 minutes (did not use cache). Then I waited an hour and pushed again and it still took 4 minutes (and did not use cache). I thought it would cache the layers so that the build would run faster. These are my first few lines:
FROM node:alpine
# If possible, run your container using `docker run --init`
# Otherwise, you can use `tini`:
# RUN apk add --no-cache tini
# ENTRYPOINT ["/sbin/tini", "--"]
WORKDIR /app
# If you have native dependencies, you'll need extra tools
# RUN apk add --no-cache make gcc g++ python
RUN apk add --no-cache build-base python curl
RUN npm config set python /usr/bin/python
RUN (curl -Ls https://cli.doppler.com/install.sh || wget -qO- https://cli.doppler.com/install.sh) | sh
These lines never change, so they should always hit the cache, right? Only the next line, where I copy my code over, should miss the cache when something has changed.
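To illustrate what I expect, the rest of a Dockerfile like this is usually ordered so that only the COPY of the source invalidates the cache. This is just a rough sketch, not my exact file; the file names and start command are placeholders:
# Copy only the dependency manifests first, so the install layer stays
# cached until package.json / package-lock.json actually change.
COPY package.json package-lock.json ./
RUN npm ci
# Copy the application source last; only this layer (and anything after
# it) should be rebuilt when the code changes.
COPY . .
CMD ["node", "index.js"]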