I deployed Strapi to Fly.io using the following guide:
Everything works, except that any image I upload in the CMS admin panel serves correctly at first but returns a 404 after a short period of time.
This applies to images like the application icon as well as images in the media library for my blog posts, etc.
Am I missing some configuration or some other specific step needed to keep images from going 404?
If you are using Docker (which I assume based on your Node version), then you need a persistent disk for the container. Most likely you don’t have one, and when you redeployed, the images were wiped because the ephemeral disk was deleted.
I’m not redeploying, but maybe this is related?
If this is the issue, how do I set up a persistent disk on Fly.io for the images in Strapi?
The deploy worked. The volume was created. But the images are still disappearing after a few minutes.
When I ssh into the console, I see the images I uploaded in the /public/uploads folder when I run ls inside of it. But, after a few minutes, when I run ls, it returns 0 files. The files themselves are being removed.
That’s very odd. We don’t do any modifications to the contents of volumes, so here are a few things to investigate:
Can you check whether you have more than one machine in that app? Volumes are not distributed, so if one machine contains the uploads but you SSH into the other, that would explain why they seem to disappear.
Can you verify whether images disappear after your machine stops and you later start it by opening your website? That would be evidence that the volume is not set up properly.
Is it possible there’s some Strapi config that deletes those?
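For reference, the first two checks can be run with flyctl (the app and machine names below are placeholders, adjust to yours):

```shell
# List machines in the app: more than one means your uploads may live on
# a different machine than the one you SSH into.
fly machines list -a <your-app>

# List volumes: a volume attaches to at most one machine.
fly volumes list -a <your-app>

# Stop and start a machine, then reload the site to see whether images survive.
fly machine stop <machine-id> -a <your-app>
fly machine start <machine-id> -a <your-app>
```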
This might be it. It happens whenever I leave it alone for a few minutes.
I’m pretty certain it’s not a Strapi config setting.
So if it’s the volume not being set up correctly, how might I go about triaging and fixing it? I’ve googled, and there is no information about how to set up Fly for Strapi beyond the one thread I linked above.
I checked our admin dashboard and I can see your apps have two machines, and the app that I think is the backend has only one destroyed volume. Here’s my suggestion:
1. fly scale count 1: make sure only one machine exists.
2. Add the volume config back to your fly.toml.
3. Run fly deploy and follow any instructions it gives so the volume gets created.

After that, do a file upload and verify it works. You can stop your machine with fly machine stop MACHINE_ID, then start it again with fly machine start MACHINE_ID, and verify that your file is still there.
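For anyone following along, the volume config in fly.toml is a [mounts] section. A minimal sketch (the volume name and paths here are assumptions, adjust to your app):

```toml
# fly.toml: mount a persistent volume over Strapi's uploads folder.
# Create the volume first, e.g.: fly volumes create uploads_data --size 3
[mounts]
  # Placeholder volume name; must match the volume you created.
  source = "uploads_data"
  # Must be the absolute path Strapi actually writes uploads to inside
  # the container, i.e. <WORKDIR>/public/uploads.
  destination = "/opt/app/public/uploads"
```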
Ok so I monitored the logs closely this time while I waited (did not shut down machine manually, it shut down on its own) and this is what I saw:
2024-03-11T13:19:24.369 proxy[56833022c07638] nrt [info] Downscaling app react-japan-cms from 1 machines to 0 machines, stopping machine 56833022c07638 (region=nrt, process group=app)
2024-03-11T13:19:24.371 app[56833022c07638] nrt [info] INFO Sending signal SIGINT to main child process w/ PID 314
2024-03-11T13:19:29.371 app[56833022c07638] nrt [info] INFO Sending signal SIGTERM to main child process w/ PID 314
2024-03-11T13:19:34.557 app[56833022c07638] nrt [warn] Virtual machine exited abruptly
2024-03-11T13:19:44.909 proxy[56833022c07638] nrt [info] Starting machine
2024-03-11T13:19:45.097 app[56833022c07638] nrt [info] [ 0.038854] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
2024-03-11T13:19:45.132 app[56833022c07638] nrt [info] [ 0.050879] PCI: Fatal: No config space access function found
2024-03-11T13:19:45.316 app[56833022c07638] nrt [info] INFO Starting init (commit: 913ad9c)...
2024-03-11T13:19:45.330 app[56833022c07638] nrt [info] INFO Mounting /dev/vdb at /public/uploads w/ uid: 0, gid: 0 and chmod 0755
2024-03-11T13:19:45.332 app[56833022c07638] nrt [info] INFO Resized /public/uploads to 3204448256 bytes
2024-03-11T13:19:45.333 app[56833022c07638] nrt [info] INFO Preparing to run: `docker-entrypoint.sh npm start` as root
2024-03-11T13:19:45.342 app[56833022c07638] nrt [info] INFO [fly api proxy] listening at /.fly/api
2024-03-11T13:19:45.348 app[56833022c07638] nrt [info] 2024/03/11 13:19:45 listening on [fdaa:6:7f89:a7b:22e:287:49:2]:22 (DNS: [fdaa::3]:53)
2024-03-11T13:19:45.360 runner[56833022c07638] nrt [info] Machine started in 437ms
As soon as I reopened the media library page, all the images were 404.
Based on what I’ve seen in the past, I believe that when Strapi starts up, it’s overwriting the public/uploads folder with whatever was initially in the Docker container.
I say this because when I add images while testing Strapi locally and then deploy, those images are always present in the public/uploads folder when the server spins up again (even though the production Strapi db doesn’t reference them at all).
I don’t have a lot of Docker experience, but perhaps it’s this in the Dockerfile.
# Copy the application files
WORKDIR /opt/app
COPY ./ .
Maybe that’s overwriting all the files, including public/uploads.
It’s obviously important for the first run, but on subsequent deploys, I think it should only copy the public/uploads folder if there isn’t one already. I’m not sure how to write the Dockerfile to do this.
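One way to sketch this (untested against this exact setup; it assumes the WORKDIR /opt/app shown above and a volume mounted over the uploads path at runtime) is to keep local uploads out of the image via .dockerignore and recreate an empty folder so Strapi still finds it on first boot:

```dockerfile
# In .dockerignore (build context), exclude local test uploads so they
# never enter the image:
#   public/uploads

# Dockerfile
WORKDIR /opt/app

# Copy the application files (public/uploads is excluded by .dockerignore)
COPY ./ .

# Recreate an empty uploads folder so Strapi doesn't fail on startup;
# at runtime the persistent volume is mounted over this path anyway.
RUN mkdir -p public/uploads
```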
I actually tried that before, and Strapi failed to start up, saying it was because the public/uploads folder was missing. I’ll try it again now that the folder exists.
I uploaded some images and then manually took a snapshot of the volume. I’ll see if the snapshot maintains the images next time the machine dies and starts again.
It did not work. Images still gone after machine dies and starts again.
What’s weird is that the images stayed when I manually stopped and restarted the machine, but when it is stopped automatically, the images get removed.
On the off chance that Docker was overwriting the entire folder structure with COPY ./ ., I changed my Dockerfile to copy each file and folder individually, except the public one.
The files are still gone when the machine dies on its own and restarts.
So, it’s for sure not the Dockerfile, and manually stopping and starting the machine maintains the files.
It looks like the issue is on the fly side. When the machine is killed on its own and starts back up again, it only starts with whatever files were originally deployed with fly deploy. It does not maintain any files that are added afterwards.