Hi,
See the recently announced managed Redis by Upstash (though it has its trade-offs) or this DIY geo-replicated Redis.
One might choose to drop the nginx reverse proxy altogether, as Fly’s proxy primitives (though limited) are good enough in most cases. Personally, I go a long way just to fit my app within Fly’s primitives, to get as close to zero-devops as possible.
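As an illustration, here’s a minimal fly.toml services section (a sketch with assumed port numbers, not your actual config) that lets Fly’s proxy terminate TLS and speak HTTP straight to the app, with no nginx in between:

```toml
# fly.toml (sketch, assumed values): Fly's proxy terminates TLS
# and forwards traffic to the app listening on internal port 8080.
[[services]]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    port = 80
    handlers = ["http"]

  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]
```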
I don’t use DBs on Fly, but I hear more db services are coming soon-ish. Otherwise, PlanetScale (MySQL) seems neat and cheap, too. There’s SQLite on LiteFS, if you’re adventurous.
If all of these need to run on the same IP (but on different ports), one may choose to run them as a single multi-process Fly app. A multi-process app also works if they must share the same IP and ports, provided one handles request routing in-app themselves. If not, one can run each in its own Fly app, each behind a different IP/port.
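A sketch of the multi-process setup (hypothetical process names and ports): one Fly app running two processes, each exposed on a different external port of the same IP.

```toml
# fly.toml (sketch): two processes in one app, one IP, two ports.
[processes]
  web = "./server"
  worker = "./worker --listen 9000"

[[services]]
  processes = ["web"]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]

[[services]]
  processes = ["worker"]
  internal_port = 9000
  protocol = "tcp"

  [[services.ports]]
    port = 9443
    handlers = ["tls"]
```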
You’ll have to model this one yourself, I’m afraid. Fly Machine apps are the cheapest option, but they do have rough edges, though admittedly they are getting better all the time.
Fly won’t solve all latency issues, but it will solve a bunch of other things that let you spend your time on your business problems, I am sure.
Are there any other recommendations to ensure this is a performant, yet cost-effective setup?
Fly does have a lower TCO (ex) than most cloud providers (incl Scaleway, Lightsail, DigitalOcean, Vultr, etc), but at the end of the day, it really depends on how you set your Fly apps up.
I don’t run DBs on Fly (I prefer fully managed providers), and so tagging @greg / @charsleysa / @tj1 for their inputs.
From what I recall, env vars one sets in the Dockerfile take precedence over the env vars set via the flyctl deploy -e switch and/or those set via fly.toml. The env vars set by Fly (runtime) secrets take precedence over all other env vars, no matter how they’re set.
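A sketch of that layering (hypothetical variable name; the precedence is as I recall it, so do verify it against your own setup):

```toml
# fly.toml (sketch): one of the places an env var can be set.
[env]
  LOG_LEVEL = "info"   # may be shadowed by a Dockerfile ENV of the same name

# Dockerfile equivalent (shown as a comment):
#   ENV LOG_LEVEL=debug
#
# Same var via the deploy switch:
#   flyctl deploy -e LOG_LEVEL=debug
#
# Runtime secrets win over all of the above:
#   flyctl secrets set LOG_LEVEL=warn
```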