Autoscaling with volume forking?

Is it possible to configure Fly.io to fork an existing volume when performing scale-up operations, instead of starting each new machine with an empty volume?

I know I could fork superfly/fly-autoscaler (a metrics-based autoscaler for Fly.io) and add it myself, but I figured I’d ask first.

When you have multiple machines, traffic will be round-robined between them. How will you handle synchronization of your volume data?

This is for CouchDB; I’m already handling application-level replication. And CouchDB is “crash-only”, so it’s safe to fork the volume at any time, as long as the snapshot is atomic.

I’m looking to integrate volume forking into the deploy process so that new instances are warm instead of empty.
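
In case it’s useful to anyone reading along, here’s a rough dry-run sketch of what that step could look like with flyctl. Everything below (volume ID, app name, region, mount path) is a placeholder, and you should check `fly volumes fork --help` on your flyctl version for the exact flags:

```shell
# Dry-run sketch: compose the commands that would fork the nearest volume
# and boot a warm machine from the copy. IDs and names are placeholders.
SRC_VOLUME="vol_source123"   # volume attached to the closest healthy machine
APP="my-couchdb-app"
REGION="ams"

FORK_CMD="fly volumes fork $SRC_VOLUME --app $APP --region $REGION"
echo "$FORK_CMD"

# `fly volumes fork` reports the new volume's ID; attach it to a fresh machine:
echo "fly machine run couchdb:3 --app $APP --region $REGION --volume <new-volume-id>:/opt/couchdb/data"
```

The echo lines just print the commands instead of running them, so you can eyeball the invocation before wiring it into a deploy script.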

Ah, interesting… What happens when you have two machines whose data isn’t quite in parity? How would Fly know which of the two volumes to fork? Your application would be out of sync too.

I have something similar with Turso. Instead of forking the volume (which introduces a lot of complexity and headache), my app connects to the remote instance and syncs in the background. Once the sync is complete, I swap the connection over to the local replica.
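
For anyone curious, the sync-then-swap pattern boils down to something like this. This is a runnable sketch with stand-in clients, not real Turso code: the actual app would use libSQL’s embedded-replica client and swap only after a genuine background sync reports it has caught up.

```typescript
// Sync-then-swap sketch. `remote` and `local` are stand-ins for a remote
// database connection and a local embedded replica; real code would use
// an actual client library and await a real background sync.
interface Db {
  query(sql: string): string;
}

class SwappableConnection {
  constructor(private active: Db) {}
  query(sql: string): string {
    return this.active.query(sql); // always delegate to the active backend
  }
  swapTo(next: Db): void {
    this.active = next; // single-reference swap, atomic from the caller's view
  }
}

// Stand-ins so the sketch runs without any database:
const remote: Db = { query: (sql) => `remote:${sql}` };
const local: Db = { query: (sql) => `local:${sql}` };

const conn = new SwappableConnection(remote);
console.log(conn.query("SELECT 1")); // served remotely while the replica syncs

// In the real app this runs when the background sync reports "caught up":
conn.swapTo(local);
console.log(conn.query("SELECT 1")); // now served from the local replica
```

The nice property is that callers only ever hold the `SwappableConnection`, so the cutover needs no coordination on their side.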

CouchDB is multi-writer with MVCC and rich replication options (including intermittent and asynchronous).

So it’s ok to just pick the closest volume to fork, since the application-level replication will perform any catch up needed.
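
Concretely, the catch-up step after a forked node boots could be a single call to CouchDB’s `/_replicate` endpoint, pulling from any peer. Hostnames and the database name below are placeholders, and this is a dry-run sketch that prints the request rather than sending it:

```shell
# Dry-run sketch: continuous pull replication from a peer into the freshly
# forked node, via CouchDB's /_replicate endpoint. Hosts/db are placeholders.
PAYLOAD='{"source":"http://peer.internal:5984/mydb","target":"http://localhost:5984/mydb","continuous":true}'
echo "curl -X POST http://localhost:5984/_replicate -H 'Content-Type: application/json' -d '$PAYLOAD'"
```

With `"continuous": true` the new node keeps tailing the peer’s changes feed, so any writes that landed after the snapshot get picked up automatically.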

Sounds cool, just don’t let JD Vance near it.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.