We’re releasing persistent disks soon, and you can try them out now if you’re curious. flyctl 0.0.145-beta-3 introduces a volumes command that you can use to provision persistent disks for your applications.
Volumes are persistent, maintain their data between deploys, and even stick around if your app is suspended.
Creating a volume is simple. This creates a 25GB volume in ewr: flyctl volumes create <name> --region ewr --size 25
To mount this volume at /data, add this to your app config:
[[mounts]]
source = "data"
destination = "/data"
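For context, here's what that mount looks like in a minimal fly.toml (a sketch; the app name is a placeholder, and only the [[mounts]] section comes from this announcement):

```toml
# Hypothetical minimal fly.toml; "my-app" is a placeholder name.
app = "my-app"

[[mounts]]
  # source must match the name you gave `flyctl volumes create`
  source = "data"
  # destination is the path where the volume appears inside the VM
  destination = "/data"
```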
You can create multiple volumes with the same name. If you want to run an app in 3 regions, run these commands (and keep [[mounts]] in your config the same):
flyctl volumes create data --region ewr --size 25
flyctl volumes create data --region cdg --size 25
flyctl volumes create data --region syd --size 25
Some limitations:
You can only mount one volume per VM (we’re curious whether you need more)
We don’t have snapshots/backups available yet
Your app can only scale to as many VMs as you have volumes. If you’ve created 3 volumes named “data”, your app can scale to 3 VMs.
Other things to know:
The price is $0.15 per GB per month
Performance will be better than general-purpose EBS in most cases
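At that rate, the three-region example above works out like this (a quick sketch; the only given figure is the announced $0.15 per GB per month):

```shell
# Sketch: monthly cost of the 3 x 25GB "data" volumes created above,
# at the announced rate of $0.15 per GB per month.
size_gb=25
volumes=3
awk -v gb="$size_gb" -v n="$volumes" 'BEGIN {
  per = gb * 0.15
  printf "per-volume: $%.2f/mo, total: $%.2f/mo\n", per, per * n
}'
# → per-volume: $3.75/mo, total: $11.25/mo
```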
I’m trying to wrap my head around this… so I’m very interested in hosting MongoDB through Fly, but with the one-volume-per-VM aspect, for a multi-region deployment I would either need to:
Set up multiple DB instances to comprise a single replica set (is that the right term?)
Create just a single separate DB instance in region-of-choice and point all app VMs from any region to it (slower but simpler)
Also, I tried to use ScaleGrid + DigitalOcean a while back and the SSL story was a pain. MongoDB Atlas fared much better. I wonder how it might work through Fly?
The architecture will be driven more by MongoDB than by Fly. MongoDB basically demands that a single production database be a replica set (three nodes), tightly bound and failing over for each other (it’s not advisable to spread a replica set out widely). Anyway, each node of the replica set would have its own volume, and that would make a functioning database. Above that, you could create similar configurations in different locations and then cluster them together. But you might want to just consider effective query caching in your application, rather than dealing with the complexity of MongoDB clusters.
Just to clarify … you can only mount one volume per VM instance. If you add 3 volumes with the same name, you can use flyctl scale set min=3 and get a cluster of VMs, each with its own volume.
Yeah, I would probably be well served by caching the data with Redis for fast reads. Writing would be less frequent and less demanding of performance. Thanks @Codepope for the tip!
Our initial volumes release is designed primarily for clusters. It’s generally worth running 2 VMs, so we’re trying to build things that work well with 2+ VMs.
How does a failover region work with this? For example, I have LAX and Seattle as failover regions for my Sunnyvale instance. If I only have a volume in LAX, will the failover automatically go to LAX?
Yep! Volumes constrain where VMs can run. Our scaling UX isn’t quite right for this kind of workload, so it’s a little confusing, but VMs will only launch where you’ve created volumes.