Scaling solution when you need to persist uploaded images

Fly recommends always deploying at least two Machines and at least two volumes per app. But we know new volumes won't be in sync when we scale, and the docs say the app itself has to manage replication.
Sometimes, though, we need to deploy a ready-made image, or build an app from source that wasn't written with Fly in mind.
For example, if my app needs to mount an uploads directory at /app/images/upload and after some usage I need to scale, how can I keep the volumes in sync? My understanding from the docs is that there is no solution.

Why not use S3 or Tigris (R2)? Those were designed for use cases like this.

I see, but sometimes we need to deploy something ready-made from the open-source community, with no room or time for further development; we need to plug and play fast. Most open-source projects rely on Docker Compose and persistent volumes, and that's the case here.
Fly is great for people who develop their own apps, but it's also good for people who deploy Dockerfiles and ready-made images.

You can always run fly volumes fork to clone a volume and attach the copy to a new Machine :person_shrugging:
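In case it helps, a minimal CLI sketch of that workflow. The app name and the volume/Machine IDs below are placeholders, and flag names may vary by flyctl version, so double-check with `fly volumes fork --help` and `fly machine clone --help`:

```shell
fly volumes list -a myapp                        # find the source volume ID
fly volumes fork vol_123abc -a myapp             # point-in-time copy in the same region
fly machine clone 148ed123 -a myapp \
  --attach-volume vol_456def:/app/images/upload  # new Machine using the forked copy
```

Note that, per the docs quoted below, the fork is a one-time copy: the two volumes do not continue to sync afterwards.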
Are you referring to docker images, or user uploaded images?

Kind of both: pulling a Docker image that needs a persistent volume for handling user-uploaded images.
From these docs references I understood it's a problem:

Warning: Always provision at least two volumes per app. Running an app with a single Machine and volume leaves you at risk for downtime and data loss. Volumes don’t have built-in replication between them, so your app or database needs to take care of replicating data between volumes.

  • Develop your app to handle replication: Volumes are independent of one another; Fly.io does not automatically replicate data among the volumes on an app, so if you need the volumes to sync up, then your app has to make that happen.
  • Create redundancy in your primary region: If your app needs a volume to function, and the NVMe drive hosting your volume fails, then that instance of your app goes down. There’s no way around that. You can run multiple Machines with volumes in your app’s primary region to mitigate hardware failures.

The fly scale count command creates new empty volumes, or attaches existing volumes, and does not copy or move any data between volumes.

Important: After you fork a volume, the new volume is independent of the source volume. The new volume and the source volume do not continue to sync.

(Recommended only if your app handles replication) Clone the first Machine to scale out to two Machines with volumes
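To make the "your app has to make that happen" part concrete, here is a minimal, hypothetical sketch of app-level sync logic: it compares a local uploads directory against an index of what a replica (or an object-storage bucket listing) already holds, and returns the files that still need copying. The names `files_to_sync` and `remote_index` are illustrative, not Fly or S3 APIs:

```python
import hashlib
import os


def files_to_sync(upload_dir, remote_index):
    """Return relative paths under upload_dir that are new or changed
    compared to remote_index, a dict of relative path -> md5 hex digest
    (e.g. built from another Machine's listing or a bucket's ETags)."""
    pending = []
    for root, _dirs, names in os.walk(upload_dir):
        for name in names:
            full = os.path.join(root, name)
            rel = os.path.relpath(full, upload_dir)
            with open(full, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            if remote_index.get(rel) != digest:
                pending.append(rel)
    return sorted(pending)
```

Your app would then ship each pending file to the other volume (or bucket) on a timer or after each upload; that transport step is up to you.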

If you need to take something relatively unmodified and throw it up quickly, you can do it without high availability just by specifying --ha=false at deploy time, or using fly scale count 1 -a [app] to reduce the machine count. You just have to accept the risk that datacenter hardware can fail, so you need your own backups/process in place for handling it.
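Concretely, that looks something like this ("myapp" and the backup bucket name are placeholders, and the tar-to-S3 line is just one possible backup approach, not a Fly feature):

```shell
fly deploy --ha=false          # deploy without the default second Machine
fly scale count 1 -a myapp     # or shrink an existing app back to one Machine

# Your own backup process, e.g. a periodic archive to an S3-compatible bucket:
tar czf - /app/images/upload | aws s3 cp - s3://my-backups/uploads-$(date +%F).tgz
```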

If you’re not able to hack on the software you’re deploying, but you are able to work on its runtime environment a bit, there are some FUSE mount solutions like AWS Mountpoint, JuiceFS or s3fs-fuse. These would allow you to connect to a remote object storage, but serve its files as if from a local directory. This isn’t as fast as a genuine integration with your object storage when you’re serving files to clients, but it shouldn’t require rewriting any part of the software. Maybe that will work for your case.
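For example, a hypothetical s3fs-fuse mount exposing a bucket at the path the app expects might look like this. The bucket name and the Tigris endpoint URL are assumptions; check your provider's docs for the correct endpoint and credential setup:

```shell
# Credentials file in the ACCESS_KEY:SECRET_KEY format s3fs expects
echo "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Mount the bucket where the unmodified app writes its uploads
s3fs mybucket /app/images/upload \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=https://fly.storage.tigris.dev \
  -o use_path_request_style
```

All Machines mounting the same bucket see the same files, which sidesteps the volume-sync problem entirely, at the cost of object-storage latency on reads and writes.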
