fly-replay for every other instance?

Hello, I’d like to be able to scale my app with multiple volumes, but am trying to figure out how to keep them in sync.

Right now I have a single volume in the AMS region, with a kind of syncable data store on it.

I’m able to synchronise local data with this store over HTTP, e.g. to upload locally drafted blog posts from my computer.

I’d like to be able to scale this app in the future with multiple volumes. With the technology I’m using, where all stores are equal, I’d like to keep them all in sync.

Ideally, I’d love to be able to set a fly-replay header for every other instance than the one it was sent to. This way a POST request to one instance would be replicated across all of them, keeping each store in sync.
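For context, my understanding is that fly-replay today targets a single region or instance per response. A rough Express sketch of that single-target usage (the route, port, and status code are just placeholders from my app):

import express from "express";

const app = express();

app.post("/documents", (req, res) => {
  // Today I can ask the proxy to replay this request somewhere else, e.g. to
  // the instance in ams, by setting a single fly-replay target on the response.
  if (process.env.FLY_REGION !== "ams") {
    res.set("fly-replay", "region=ams");
    res.sendStatus(409);
    return;
  }

  // ...write to the local store...
  // What I'd love is a way to say "also replay this to every *other* instance".
  res.sendStatus(200);
});

app.listen(8080);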

Or is this something better done in userland?

I know this is an unusual setup, but I can’t be persuaded otherwise :slight_smile:

This is super interesting. The hard part, I think, is giving you any useful feedback about what happens with multiple requests.

If it were me, I’d actually relay the HTTP request to all other app instances myself. This will let you catch errors and know when certain members don’t successfully write the POST.

You can do a quick DNS lookup to get the private IPv6 addresses for all running processes, then just send requests to each. I think it’s probably simpler than doing it in our proxy: Private Networking · Fly
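Something along these lines, as a rough sketch (the port, path handling, and error handling are placeholders for whatever your app actually does; it assumes a fetch implementation is available, e.g. node-fetch or Node 18+):

import { promises as dns } from "dns";

// Fan a write out to every instance of the app over the private network.
async function relayToPeers(path: string, body: string): Promise<void> {
  // <app-name>.internal resolves to the private IPv6 of every running instance.
  const ips = await dns.resolve6(`${process.env.FLY_APP_NAME}.internal`);

  // Note: this list includes the current instance; you may want to skip it.
  const results = await Promise.allSettled(
    ips.map((ip) =>
      fetch(`http://[${ip}]:8080${path}`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body,
      })
    )
  );

  // This is where you can catch errors and see which peers rejected the write.
  for (const result of results) {
    if (result.status === "rejected") {
      console.error("relay failed:", result.reason);
    }
  }
}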


Oh, that makes sense! Thank you for pointing me in the right direction, I’ll see how far I can get.

Out of curiosity, what does your storage layer look like? Is it possible to synchronize content at that level?

@jsierles Hey! (shameless plug incoming) It’s a key/value store for personal use and small groups where you can selectively sync at the document level. It’s called Earthstar, and I’m finding it a great fit with Fly.


Cool! So I’m just wondering then - why would you need to send multiple HTTP requests if this database will synchronize on its own across peers?

@jsierles Because it won’t synchronise on its own: you choose when and how to sync. In this case, I’d like to sync over HTTP whenever this store has anything pushed to it.

Running into a little trouble.

My function for getting the other instances looks like this:

import { promises } from "dns";

export async function getInstanceURLs(): Promise<string[]> {
  // Only resolve peers when running on Fly; there is nothing to sync with locally.
  if (process.env.NODE_ENV !== "production") {
    return [];
  }

  const resolver = new promises.Resolver();

  // <app-name>.internal resolves to the private IPv6 of every running instance.
  const ipv6s = await resolver.resolve6(`${process.env.FLY_APP_NAME}.internal`);

  return ipv6s.map((ip) => `http://[${ip}]`);
}

And this seems to work all right. The problem is that when I pass it on to my own API for fetching, I get errors like these:

FetchError: request to http://[fdaa:0:2d58:a7b:aa3:0:2695:2]/earthstar-api/v1/+xxx.xxx/documents failed, reason: connect ECONNREFUSED fdaa:0:2d58:a7b:aa3:0:2695:2:80

I’ve also tried with https instead. Maybe it’s something really simple I’m overlooking.


Just had to add :8080 to the address :slight_smile:
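For anyone who hits the same thing: the service isn’t listening on port 80 internally, so the URLs need the app’s internal port (8080 in my case):

  return ipv6s.map((ip) => `http://[${ip}]:8080`);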

Another solution for this problem could be to use https://goreplay.org/. Goreplay is mainly used for shadowing and traffic mirroring in real-world testing scenarios, but it is very flexible.

You can for example use it to filter traffic and forward it to different hosts. Example:

# only forward requests being sent to the /api endpoint to a.com and b.com
gor --input-raw :8080 --http-allow-url /api --output-http "http://a.com"  --output-http "http://b.com"

Goreplay also supports a distributed setup by running it with an aggregator. You could, for example, set up a single app instance that aggregates and forwards the requests to all instances.

If your requests are idempotent, letting the aggregator send them to all instances, including the one that received the original request, makes things even easier. That way you do not need to make any decisions; all you do is forward.

See: Distributed configuration · buger/goreplay Wiki · GitHub
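Roughly, that distributed setup could look like this (host names and ports are placeholders, following the pattern from the wiki page above):

# on each app instance: mirror incoming traffic to the aggregator over TCP
gor --input-raw :8080 --output-tcp "aggregator.internal:28020"

# on the aggregator: receive the mirrored traffic and replay it to every instance
gor --input-tcp :28020 --output-http "http://instance-a.internal:8080" --output-http "http://instance-b.internal:8080"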

@kurt Totally enjoyed this blog post btw: Globally Distributed Postgres · Fly. Can we have a fly.io shirt with "When Your Only Tool Is A Front-End Proxy, Every Problem Looks Like An HTTP Header"? Please?


Goreplay is a great suggestion.

I am a fan of snarky shirts. Next time we do shirts we’ll play with it. :slight_smile:

Great! This will be my number one feature request; I’d wear it with pride!