Host Redis on Fly (or some other distributed caching mechanism?)

Right now I’m using the file system for a cache, and that works OK. But it’s annoying that every deploy blows that away. I think Redis is the natural tool for caching the kind of data I want to cache. However, I’m planning on deploying to multiple regions to get the performance benefits of running close to my users all over the world. If I have a Redis cache in a single region, then I lose some of those benefits (right?).

I already have my Postgres DB hosted on Fly and I plan on deploying that to multiple regions as well. Is it possible to host Redis on Fly so I can get multi-region deployments of that cache? Am I thinking about this problem the right way? Is there a better solution? Please make no assumptions that I know what I’m talking about. This is definitely not my area of expertise :sweat_smile: Thanks!


Yes it is! I’ll work up a multi-region Redis cluster example. The neat thing about Redis in this case is that you can “purge” (or even push cache changes) globally just by writing to the leader instance.
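Roughly, the shape of it looks like this (the hostnames here are just placeholders, and ioredis is one common Node client):

```ts
// Sketch: reads from the nearest replica, writes/purges through the leader.
// Hostnames are placeholders, not real Fly internal addresses.
import Redis from "ioredis";

const replica = new Redis({ host: "redis-local.internal", port: 6379 });
const leader = new Redis({ host: "redis-leader.internal", port: 6379 });

export async function getCached(key: string): Promise<string | null> {
  return replica.get(key); // served from the region-local replica
}

export async function setCached(key: string, value: string): Promise<void> {
  await leader.set(key, value); // the write replicates to every region
}

export async function purge(key: string): Promise<void> {
  await leader.del(key); // the delete replicates too, purging globally
}
```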

You might actually be better off attaching disks to your Node app, though, and then using those for your cache. You wouldn’t lose the files between deploys anymore. I’ve been surprised by how nice it is to do all caching within the app process. The downside here is that you have to do extra development to purge cache entries across all your instances.
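A minimal sketch of that disk-backed cache, assuming a volume mounted at `/data`:

```ts
// Sketch of a disk-backed cache on an attached volume.
// "/data/cache" assumes a volume mounted at /data (a hypothetical mount point).
import * as fs from "fs/promises";
import * as path from "path";
import { createHash } from "crypto";

const CACHE_DIR = "/data/cache";

// Hash the key so any string becomes a safe filename.
function cachePath(key: string): string {
  const hash = createHash("sha256").update(key).digest("hex");
  return path.join(CACHE_DIR, hash);
}

export async function get(key: string): Promise<string | null> {
  try {
    return await fs.readFile(cachePath(key), "utf8");
  } catch {
    return null; // treat any read failure as a cache miss
  }
}

export async function set(key: string, value: string): Promise<void> {
  await fs.mkdir(CACHE_DIR, { recursive: true });
  await fs.writeFile(cachePath(key), value, "utf8");
}
```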

Right now to purge the cache I just add ?bust-cache=true to the URL for the resource I want to purge from my FS-based cache :see_no_evil: So I’d be happier to have an easier way to do that. The global purge sounds especially nice.
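For context, the hack looks roughly like this (simplified; a Map stands in for my real FS cache):

```ts
// Simplified version of the ?bust-cache=true hack.
import express from "express";

const cache = new Map<string, string>(); // stand-in for the FS cache
const app = express();

app.get("/:slug", async (req, res) => {
  const key = `page:${req.params.slug}`;
  // The hack: ?bust-cache=true drops the entry before the normal render.
  if (req.query["bust-cache"] === "true") cache.delete(key);

  let html = cache.get(key);
  if (!html) {
    html = `<h1>${req.params.slug}</h1>`; // placeholder for the real render
    cache.set(key, html);
  }
  res.send(html);
});

app.listen(3000);
```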

To be more specific, the thing I’m caching is the compiled version of some of my dynamic pages, which are loaded from my “CMS” (GitHub :sweat_smile:) and compiled at runtime. That compilation is not fast, so I need to cache it. Eventually I plan to fix up my GitHub Action to not bother re-deploying the whole app when I change content and instead just purge the cache for that page.
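Roughly, the flow is (the helpers and the GitHub URL here are simplified stand-ins for the real implementation):

```ts
// Sketch of the compile-through flow; helpers are hypothetical stand-ins.
const store = new Map<string, string>();
const cacheGet = async (key: string) => store.get(key) ?? null;
const cacheSet = async (key: string, value: string) => void store.set(key, value);
const cacheDelete = async (key: string) => void store.delete(key);

async function fetchFromGitHub(slug: string): Promise<string> {
  // Hypothetical repo path: raw content fetched straight from GitHub.
  const url = `https://raw.githubusercontent.com/owner/repo/main/content/${slug}.mdx`;
  return (await fetch(url)).text();
}

async function compileMdx(source: string): Promise<string> {
  return `<article>${source}</article>`; // stand-in for the slow compile step
}

export async function getPage(slug: string): Promise<string> {
  const key = `compiled:${slug}`;
  const cached = await cacheGet(key);
  if (cached) return cached;

  const source = await fetchFromGitHub(slug);
  const html = await compileMdx(source); // the expensive part worth caching
  await cacheSet(key, html);
  return html;
}

// Purging just the changed page (e.g. triggered by a GitHub Action)
// is then a single delete:
export async function purgePage(slug: string): Promise<void> {
  await cacheDelete(`compiled:${slug}`);
}
```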

So whatever makes it easier for me to purge the cache for a particular page is :+1: for me :slight_smile:

For now, I’m just deploying on every change (content or code) and priming my cache before deploying (see my fly.toml and package.json), so no user experiences a cache miss.

Interestingly, that didn’t appear to run :face_with_monocle:

In any case, this isn’t the solution I want long term anyway.

OK, I got a little behind; Redis example coming tomorrow!


I’ll go ahead and deploy something following this example: Redis - standalone Redis Server · Fly

I haven’t yet gotten multi-region deployment set up for my site and database. I’m hoping to get that done in a few weeks, though. But I think the example should get me unblocked for now :slight_smile: Thanks!

I got that Redis example deployed. I noticed that it’s an insecure connection and there’s a PR showing how to make it secure. I’ll probably just wait for your Redis cluster example and go through that anyway. For now, it’s not a big deal for me because I’m not caching user data (yet), but I will eventually, so that’ll be useful.

Anyway, thanks a bunch for the help with this so far. Having this cache in place and deployed to Fly is making my site fly!

If you’re interested, I can help you set up a Varnish-based HTTP cache in front of your app. This repo builds a prefab Docker image you can just deploy. You set a few env vars and point DNS at it.

It requires Redis to track purge requests, but not a globally scaled one. I think global Redis might be overkill if your main goal is to cache HTML.

Thanks for the offer! Would that cache the entire page HTML? I can’t do that very effectively because users can be logged in. Am I misunderstanding what it is?

Yes, this would be a standard, full-page HTTP cache, generally for public content like blog posts.

For caching in Redis: if your cached data is specific to each user, it might make sense to keep independent cache nodes (without replication) in each region, since users are unlikely to change regions often. This is also simpler to set up than a replicating cluster! Even without user-specific cache, this setup would work; you’d just get a few extra cache misses as each region fills its cache for a particular key.
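As a sketch, each app instance would just point at its own region’s standalone Redis (the hostname pattern here is hypothetical; `FLY_REGION` is an env var Fly sets at runtime):

```ts
// Sketch: each instance talks to its own region's standalone Redis.
// The hostname pattern is made up; FLY_REGION is set by Fly at runtime.
import Redis from "ioredis";

const region = process.env.FLY_REGION ?? "local";
const cache = new Redis({ host: `redis-${region}.internal`, port: 6379 });

// User-specific entries never need to replicate, since the same user
// almost always lands in the same region.
export async function getUserCache(userId: string, field: string) {
  return cache.get(`user:${userId}:${field}`);
}

export async function setUserCache(userId: string, field: string, value: string) {
  await cache.set(`user:${userId}:${field}`, value, "EX", 3600); // 1h TTL
}
```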

That said, it’s worth mentioning alternative approaches for caching HTML externally while maintaining dynamic page loads. Hotwire allows you to split up your page into chunks and cache them independently as you see fit. Hotwire is framework-agnostic, but there are many adapters out there, like this one for Express. This would allow you to move your cache to Varnish or something similar.
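As an illustrative sketch (routes and markup are made up), the page shell stays dynamic while the post body becomes a public, cacheable fragment:

```ts
// Illustrative sketch: dynamic shell, publicly cacheable fragment.
import express from "express";

const app = express();

// Dynamic shell: personalized, so never cached at the HTTP layer.
app.get("/blog/:slug", (req, res) => {
  res.send(`<html><body>
    <header>Hi, logged-in user!</header>
    <turbo-frame id="post" src="/fragments/post/${req.params.slug}"></turbo-frame>
  </body></html>`);
});

// Public fragment: safe for Varnish or any HTTP cache to hold.
app.get("/fragments/post/:slug", (req, res) => {
  res.set("Cache-Control", "public, max-age=300");
  res.send(`<turbo-frame id="post"><article>…post HTML…</article></turbo-frame>`);
});

app.listen(3000);
```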

If you still want a central cache with replicated nodes, you could look at how we’ve implemented distributed purging for Varnish. It uses Redis streams to keep the caches from getting out of sync.
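The core idea, as a rough sketch (names are illustrative): any node appends purge requests to a stream, and every cache node tails that stream so no purge gets skipped, even when nodes process at different speeds:

```ts
// Rough sketch of purge fan-out over a Redis stream.
import Redis from "ioredis";

const publisher = new Redis({ host: "redis-leader.internal" });
// Blocking reads get their own connection so they don't stall other commands.
const subscriber = new Redis({ host: "redis-leader.internal" });

// Any node can request a purge by appending to the stream...
export async function requestPurge(key: string): Promise<void> {
  await publisher.xadd("purges", "*", "key", key);
}

// ...and every cache node tails the stream from its last-seen entry ID.
export async function followPurges(onPurge: (key: string) => void): Promise<void> {
  let lastId = "$"; // start from new entries only
  for (;;) {
    const result = await subscriber.xread("BLOCK", 0, "STREAMS", "purges", lastId);
    if (!result) continue;
    for (const [, entries] of result) {
      for (const [id, fields] of entries) {
        lastId = id; // remember where we are in the stream
        const key = fields[fields.indexOf("key") + 1];
        onPurge(key);
      }
    }
  }
}
```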

Ah! That’s a great point. I really have two kinds of data that I need to cache: some of it is user-specific and some of it is general/public. Because of the public data, I think it would be easiest to have a single solution: a distributed Redis cluster that can be purged easily by writing to the leader, as @kurt suggested. I could also have individual Redis caches in each region for user-specific data; maybe in the future I’ll add that and use two different caches depending on the situation. But for now I’ll go with the simpler approach. Thanks!