Want to run your own Redis on Fly? Our latest example shows how to do just that using Fly and our in-preview disk volumes: your own persistent Redis. Check out the repository and leave your feedback on the article, along with what you’d like to see next!
Can I run Redis instances in multiple regions with volumes? How would I go about configuring that? How do I get the IP of Redis in one region so that I can configure it for apps in a particular region? Can I also run a Redis cluster? How would the Redis instances discover their peers? Ideally, I’d like a simple way of deploying Redis instances in many regions and configuring apps to always use the closest Redis.
Ok, let’s see
Can I run Redis instances in multiple regions with volumes?
Yes. Create a volume with the same name in each region you want to deploy in, then add those regions to the region pool.
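Roughly, with `my-redis` and `redis_data` as placeholder app and volume names (and the regions just examples), it looks something like this:

```sh
# Create a same-named volume in each region you want an instance in
# (app name, volume name, regions and sizes here are all placeholders).
fly volumes create redis_data --region syd --size 10 -a my-redis
fly volumes create redis_data --region lhr --size 10 -a my-redis
fly volumes create redis_data --region ewr --size 10 -a my-redis

# Add those regions to the app's region pool, then deploy.
fly regions add syd lhr ewr -a my-redis
fly deploy
```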
And jumping to the end… “and configuring apps to always use the closest Redis” - just use the IP address of the app (it only has the one) and incoming connections will always be sent to the nearest available Redis. Or are you wanting particular apps to communicate with each other over Redis?
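For instance, assuming the Redis app is called `my-redis` and exposes its port through a service in fly.toml (both assumptions on my part), a client anywhere can just do:

```sh
# Connect through the app's single Anycast address; Fly's proxy routes the
# connection to the nearest healthy instance. Hostname and port are placeholders.
redis-cli -h my-redis.fly.dev -p 6379 -a "$REDIS_PASSWORD" ping
```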
We’ll have an example of this up shortly. The best configuration we’ve found is Redis replicas in multiple regions, all configured to point at a Redis master server. This lets you write to the replicas for caching and write to the master for data that needs to propagate to all regions.
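The rough idea on the replica side, with made-up hostnames (the example repo wires this up its own way):

```sh
# Point a regional instance at the master and allow local writes so it can
# also act as a cache. Hostnames are placeholders, not from the example.
redis-cli -h syd.my-redis.internal replicaof my-redis-master.internal 6379
redis-cli -h syd.my-redis.internal config set replica-read-only no
```

Your app then writes cache entries to the nearby replica, and writes anything that has to reach every region to the master.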
Would love to see this example. Sounds like what I’m looking for.
Here’s a work in progress: https://github.com/fly-examples/redis-geo-cache/
It’s meant to replicate from an existing master Redis, which can either be on Fly or running elsewhere.
Looks very promising. Is there a way to make this setup HA? E.g. multiple Redis instances per region with a sentinel? Or maybe a master and multiple read replicas per region? And then maybe a centralised master to propagate changes across all regions? I guess my setup would be a region-local cache, mostly for reads and some percentage of writes, plus a centralised master to propagate deletes to all regions.
The answer is “almost” and “yes”.
We can’t really do a Sentinel setup yet; stay tuned until next week.
A single master and multiple replicas will work great, though. The regional example I posted is actually HA, even with only one instance per region. If an instance fails, our load balancer will just connect you to the next nearest one. Latency will be worse, but things will keep working.
For a caching cluster, one instance per region is best for the cache hit ratio. For read replicas, multiple instances per region will work fine.
In the example I posted, region placement and redundancy are controlled by volumes. You can create two volumes per region for that kind of redundancy.
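For example (same placeholder names as above):

```sh
# Two volumes in a region means two instances can be placed there; each
# instance claims its own volume.
fly volumes create redis_data --region syd --size 10 -a my-redis
fly volumes create redis_data --region syd --size 10 -a my-redis
```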
This is exactly what the example is meant for. It’s resilient to failures: if a regional cache fails, we redirect connections to the next closest one. If the master fails, the replicas will continue to function until it comes back online.
Hi,
I’m trying to use the example repo, but I get the following error:
2021-07-16T15:45:40.835489380Z app[1c5c8432] syd [info] Mounting /dev/vdc at /data
2021-07-16T15:45:40.836643106Z app[1c5c8432] syd [info] directory /data already exists
At first I thought it was because I launched the app instead of running init, so I destroyed it all and started again. This time I created the volumes before I ran deploy, and I still see the same issue.
Hey @kurt, has anything changed in the Redis world and Sentinel support on Fly? I was just doing some digging to better handle datacenter/regional outages during migrations for our Redis apps that are only deployed in a single region.
Thanks!