Preview: persistent disks for Fly apps

We’re releasing persistent disks soon, but you can try them out now if you’re curious. flyctl 0.0.145-beta-3 introduces a volumes command you can use to provision persistent disks for your applications.

Volumes are persistent, maintain their data between deploys, and even stick around if your app is suspended.

Creating a volume is simple. This creates a 25GB volume in ewr:

flyctl volumes create <name> --region ewr --size 25

To mount this volume at /data, add this to your app config:

[[mounts]]
  source = "data"
  destination = "/data"
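
To confirm the volume exists before you deploy, there’s a list subcommand (a quick sketch; the exact output may differ in this beta build):

flyctl volumes list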

You can create multiple volumes with the same name. If you want to run an app in 3 regions, run these commands (and keep [[mounts]] in your config the same):

flyctl volumes create data --region ewr --size 25
flyctl volumes create data --region cdg --size 25
flyctl volumes create data --region syd --size 25
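
To run a VM on each of those volumes, you’d also bump the app’s minimum scale. A sketch, using the scale syntax that comes up later in this thread:

flyctl scale set min=3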

Some limitations:

  • You can only mount one volume in a VM (we’re curious if you need more)
  • We don’t have snapshots/backups available yet
  • Your app can only scale to as many VMs as you have volumes. If you’ve created 3 volumes named “data”, your app can scale to 3 VMs.

Other things to know:

  • The price is $0.15 per GB per month (see the worked example below)
  • Performance will be better than general purpose EBS in most cases
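
To put that in numbers: each of the 25GB volumes above works out to 25 × $0.15 = $3.75 per month.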
7 Likes

So exciting!

If there are multiple VMs in a region due to auto scaling, do they access the same volume?

  • Your app can only scale to as many VMs as you have volumes. If you’ve created 3 volumes named “data”, your app can scale to 3 VMs.

This means “no”, right? You would need to vertically scale an app, right?

You can either scale vertically or add more volumes to scale horizontally.
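
A minimal sketch of the horizontal route, reusing the commands from the announcement (the region and size here are just examples):

flyctl volumes create data --region ewr --size 25
flyctl scale set min=2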

1 Like

I’m trying to wrap my head around this… I’m very interested in hosting MongoDB through Fly, but with the one-volume-per-VM aspect, for a multi-region deployment I would either need to:

  1. Set up multiple DB instances to comprise a single replica set (is that the right term?)
  2. Create just a single separate DB instance in region-of-choice and point all app VMs from any region to it (slower but simpler)

Also, I tried to use ScaleGrid + DigitalOcean a while back and the SSL story was a pain. MongoDB Atlas fared much better. I wonder how it might work through Fly?

The architecture will be driven more by MongoDB than by Fly. MongoDB basically demands that a single production database be a replica set (three nodes), tightly bound and failing over for each other (it’s not advisable to have replica sets spread out widely). Anyway, each node of the replica set would have its own volume, and that would make a functioning database. Above that, you could create similar configurations in different locations and then cluster them together. But you might want to just consider effective query caching in your application, rather than dealing with the complexity of MongoDB clusters.
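
Translating that into the volume model above, a rough sketch for a single three-node replica set in one region would be three same-named volumes plus a matching mount. This assumes same-named volumes can share a region; the “mongodata” name, the sizes, and the /data/db mount point are placeholders, not anything the preview prescribes:

flyctl volumes create mongodata --region ewr --size 10
flyctl volumes create mongodata --region ewr --size 10
flyctl volumes create mongodata --region ewr --size 10
flyctl scale set min=3

[[mounts]]
  source = "mongodata"
  destination = "/data/db"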

1 Like

Just to clarify … you can only mount one volume per VM instance. If you add 3 volumes with the same name, you can use flyctl scale set min=3 and get a cluster of VMs each with their own volume.

Yeah, I would probably be well served by caching the data with Redis for fast reads. Writing would be less frequent and less demanding of performance. Thanks @Codepope for the tip!

I’m going to try this out soon for a little disk-based cache I’ve been playing around with.

Are the deployments still zero downtime? How does that work?

Can I vertically scale with zero down time?

If you add one volume, deployments will cause downtime while the VM is rebuilt and rebooted.

If you add two volumes, and set flyctl scale set min=2, deployments won’t have downtime. We do rolling upgrades on those.

Right now, vertical scaling requires a full deploy. You also can’t (currently) resize volumes.
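
For reference, the vertical scale itself would look something like this (assuming the scale vm subcommand is available in this build; as noted, it triggers a full deploy):

flyctl scale vm cpu1mem1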

Right now, vertical scaling requires a full deploy. You also can’t (currently) resize volumes.

That’s a bummer, but I guess it’s about time I built a piece of backup infra that I can point my CNAME to during times like those.

Do you have an idea of how long it takes to upgrade from a micro-2x to a cpu1mem1? Or should I just test it myself?

It should be quick! Like less than 2s.

Our initial volumes release is designed for clusters, primarily. It’s worth it to run 2 VMs in general so we’re trying to build things that work well with 2+ VMs. :smiley:

1 Like

You mean having multiple VMs in the same data center? Or just multiple VMs, period?

Multiple VMs, period. Most people run one per region, and just let traffic move to another region during deploys (or if a VM dies).

1 Like

Is there any way to get a look into how much file storage is used through Fly?

We don’t have that exposed via metrics yet, although we should by the time they’re launched for real!
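
Until then, a workaround sketch: from inside the VM (however you get a shell or run commands there), standard Linux tooling reports usage for the mount point from the example above:

df -h /data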

1 Like

How do failover regions work with this? For example, I have failover regions for my Sunnyvale instance: LAX and Seattle. If I only have a volume in LAX, will failover automatically go to LAX?

Also, quick bug when running:

flyctl volumes show 5bf-----

Error Field 'Volume' doesn't exist on type 'Queries'

Yep! Volumes constrain where VMs can run. Our scaling UX isn’t quite right for this kind of workload, so it’s a little confusing, but VMs will only launch where you’ve created volumes.
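
For the scenario above, the backup-region setup itself would look roughly like this (assuming the regions backup subcommand is available in this build; the region codes come from the question). Because of the volume constraint, only the backup region that actually has a volume will receive a VM:

flyctl regions backup lax sea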

1 Like

Why is there a minimum of 10 GB for the size of a volume? What if I just want a 1 GB volume for small applications?

1 Like

Just curious if sharing a volume between apps in the same region would be possible.
(I checked and it’s not currently.)