This is for CouchDB; I’m already handling application-level replication. And CouchDB is “crash-only”, so it’s safe to fork the volume at any time, as long as the snapshot is atomic.
I’m looking to integrate volume forking into the deploy process so that new instances are warm instead of empty.
Ah, interesting… What happens when you have two machines whose data isn’t quite in parity? How would Fly know which of the two volumes to fork? Your application would be out of sync too.
I have something similar with Turso. Instead of forking the volume (which introduces a lot of complexity and headache), my app connects to the remote instance and syncs in the background. Once the sync is complete, I swap the connection to the local replica.
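Roughly, it looks like the sketch below, using the `@libsql/client` embedded replica API. The URLs, file name, table, and `getUsers` helper are placeholders for illustration, not my actual code:

```ts
import { createClient, type Client } from "@libsql/client";

const REMOTE_URL = "libsql://my-db.turso.io";        // placeholder database URL
const AUTH_TOKEN = process.env.TURSO_AUTH_TOKEN!;

// Start on a purely remote connection so the app can serve queries immediately.
let db: Client = createClient({ url: REMOTE_URL, authToken: AUTH_TOKEN });

// Meanwhile, build a local embedded replica and sync it in the background.
const replica = createClient({
  url: "file:local-replica.db",                      // local SQLite file backing the replica
  syncUrl: REMOTE_URL,
  authToken: AUTH_TOKEN,
});

replica
  .sync()
  .then(() => {
    // Once the initial sync completes, swap queries over to the local replica.
    db = replica;
    console.log("Swapped to local replica");
  })
  .catch((err) => console.error("Replica sync failed, staying on remote", err));

// Queries go through whichever client is currently active.
export async function getUsers() {
  return db.execute("SELECT * FROM users");
}
```

The nice part is there’s no volume management at all: a fresh machine starts cold but still serves traffic over the remote connection, and it warms itself up as the replica catches up.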