Hello, I’d like to be able to scale my app with multiple volumes, but am trying to figure out how to keep them in sync.
Right now I have a single volume in the AMS region, with a kind of syncable data store on it.
I’m able to synchronise local data with this store over HTTP, e.g. to upload locally drafted blog posts from my computer.
I’d like to be able to scale this app in the future with multiple volumes. The technology I’m using treats all stores as equal peers, so I’d like to keep them all in sync.
Ideally, I’d love to be able to set a fly-replay header for every instance other than the one the request was sent to. That way a POST request to one instance would be replicated across all of them, keeping each store in sync.
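Very roughly, something like this is what I’m imagining, as a sketch only (the Machine ID and port are made up, and as far as I can tell fly-replay only points at a single destination per response, which is exactly my problem):

// Sketch only: replying with a fly-replay header asks Fly's proxy to replay the
// request on another Machine. "148e306c77e089" is a made-up Machine ID.
import http from "node:http";

http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/sync") {
    // ...apply the write to the local store here...
    // What I'd want is this, but fanned out to every *other* instance,
    // not just one hard-coded target.
    res.writeHead(200, { "fly-replay": "instance=148e306c77e089" });
    res.end();
    return;
  }
  res.writeHead(200);
  res.end("ok");
}).listen(8080);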
Or is this something better done in userland?
I know this is an unusual setup, but I can’t be persuaded otherwise.
This is super interesting. The hard part, I think, is giving you any useful feedback about what happens with multiple requests.
If it were me, I’d actually relay the HTTP request to all other app instances myself. This will let you catch errors and know when certain members don’t successfully write the POST.
You can do a quick DNS lookup to get the private IPv6 addresses for all running processes, then just send requests to each. I think it’s probably simpler than doing it in our proxy: Private Networking · Fly
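Here’s a rough, untested sketch of what I mean in Node/TypeScript — the app name "my-app", port 8080, and the /sync-style path are placeholders, and it assumes Node 18+ for the global fetch:

import { promises as dns } from "node:dns";

// Fan a write out to every other instance of the app over the private network.
// selfAddr is this instance's own private IPv6, however you choose to obtain it.
async function relayToPeers(selfAddr: string, path: string, body: string) {
  // An AAAA lookup on <app>.internal returns the private IPv6 address of each
  // running instance of the app.
  const addrs = await dns.resolve6("my-app.internal");

  // Skip our own address so we don't re-apply the write to ourselves.
  const peers = addrs.filter((addr) => addr !== selfAddr);

  const results = await Promise.allSettled(
    peers.map((addr) =>
      fetch(`http://[${addr}]:8080${path}`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body,
      })
    )
  );

  // Now you know exactly which members didn't take the write.
  results.forEach((r, i) => {
    if (r.status === "rejected") {
      console.error(`relay to ${peers[i]} failed:`, r.reason);
    }
  });
}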
@jsierles Hey! (shameless plug incoming) It’s a key/value store for personal uses/small groups where you can selectively sync at the document level. It’s called Earthstar, and I’m finding it a great fit with Fly.
@jsierles Because it won’t synchronise on its own: you choose when and how to sync. In this case, I’d like to sync over HTTP whenever this store has anything pushed to it.
Another solution for this problem could be to use https://goreplay.org/. GoReplay is mainly used for shadowing and traffic mirroring in real-world testing scenarios, but it is very flexible.
You can, for example, use it to filter traffic and forward it to different hosts. Example:
# only forward requests being sent to the /api endpoint to a.com and b.com
gor --input-raw :8080 --http-allow-url /api --output-http "http://a.com" --output-http "http://b.com"
GoReplay also supports a distributed setup by running it as an aggregator. You could, for example, set up a single app instance that aggregates and forwards the requests to all instances.
If your request is idempotent, letting the aggregator send it to all instances, including the one that received it, could make things even easier. That way you do not need to make any decisions: all you do is forward.
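A rough sketch of what that distributed setup could look like — the hostnames and ports here are just placeholders:

# on each app instance: capture incoming /api traffic and ship it raw to the aggregator
gor --input-raw :8080 --http-allow-url /api --output-tcp "aggregator.internal:28020"

# on the aggregator: receive the captured traffic and replay it to every app instance
gor --input-tcp :28020 --output-http "http://instance-a.internal:8080" --output-http "http://instance-b.internal:8080"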
@kurt Totally enjoyed this blog post, btw: Globally Distributed Postgres · Fly. Can we have a fly.io shirt with “When Your Only Tool Is A Front-End Proxy, Every Problem Looks Like An HTTP Header”, please?