There isn’t an all-in-one package for this yet. I think it would need to be combined with a caching nginx proxy to be good, and it should also include some basic rules for rewriting URLs and applying defaults.
That would make a great example project, actually.
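As a rough sketch of what that example could look like: an nginx layer in front of imgproxy that caches responses and rewrites friendly URLs into imgproxy’s processing URLs. Everything here (ports, hostnames, the `/thumb` prefix, the default resize) is a placeholder, and it assumes imgproxy is running with unsigned (`/insecure`) URLs on localhost:8080:

```nginx
# Sketch: cache + URL-rewrite layer in front of imgproxy.
# Assumes imgproxy runs with URL signing disabled on 127.0.0.1:8080.
proxy_cache_path /var/cache/nginx/img keys_zone=img:10m max_size=5g inactive=30d;

server {
    listen 80;

    # Map a friendly URL like /thumb/foo.jpg onto imgproxy's
    # processing URL, applying a default 300x300 fill resize.
    location ~ ^/thumb/(.+)$ {
        rewrite ^/thumb/(.+)$ /insecure/rs:fill:300:300/plain/http://origin.example.com/$1 break;
        proxy_pass http://127.0.0.1:8080;
        proxy_cache img;
        proxy_cache_valid 200 30d;   # cache successful renders for a month
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```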
Wanted to bump this to see if there is a “best” practice for getting something like this launched, since it has been a while since this was last discussed here.
I also noticed there is no longer a link on the docs site to an example that was somewhat related to this topic (going from memory that one existed at some point).
I’d like dynamic image compression/resizing and a caching layer.
We took out the docs link because a lot of these examples are outdated. We want to revive them, one by one, as official examples with an accompanying flyctl launcher. You would still want something like nginx or varnish for the cache layer. That makes it a bit harder to get going, but not much!
If you’re willing to poke around a bit, I can get a quick repo set up for testing.
Add all regions to the region list and switch the scaling strategy to balanced. This should automatically move the instances to wherever they’re required (see Scaling and Autoscaling in the docs).
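Something like this, assuming the legacy (Nomad-era) flyctl region and autoscale commands; the region codes here are just examples, and newer flyctl versions have since replaced these commands:

```
flyctl regions add ams cdg syd sin
flyctl autoscale balanced min=2 max=10
```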
Put a ‘real’ CDN like Bunny.net or CloudFront in front. Fly has 20 execution regions, while Bunny has 70+ and CloudFront 120+ caching locations, so if you’re looking at caching performance a real CDN can’t be beat.
If your URLs are immutable (you don’t change the image behind a given filename), you can set your Cache-Control headers to one year and immutable as well.
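That’s the standard header; one year is 31536000 seconds:

```
Cache-Control: public, max-age=31536000, immutable
```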
Remember to turn on query-parameter-level caching in the CDN options if you’re using query parameters with imgproxy.
The CDN settings will also allow Brotli compression; you can turn that on too.
Bunny.net also has built-in image resizing, so you can look at that as well if you don’t need much control.
Are you using anything like nginx / varnish to locally cache?
I don’t see the point of that. The CDNs all have some kind of thundering-herd protection, usually called an origin shield or something similar, so even if 1000 people request the same image simultaneously, the CDN will only send one request to the origin and serve the same image to all 1000. I don’t see any need to have both a CDN and a local cache.
But a cache might be necessary on the other side of things. If you often get requests for a large number of variations of the same image, you don’t want to load the large, high-quality originals N times, so you might want to put nginx/varnish in front of the original image source if that’s a problem you have. I sometimes put a CDN over the original image source as well, like a raw.cdn.something.com for the originals. That way, even if I need many variations of the same image quickly, imgproxy loads the original off the closest edge cache. This bill tends to be many orders of magnitude lower than the main outgoing CDN that serves users, so it’s usually negligible.
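If that’s useful, here’s a minimal nginx sketch of that origin-side cache; imgproxy would be pointed at this instead of at the bucket directly. Hostnames, ports, paths, and sizes are all placeholders:

```nginx
# Sketch: local cache in front of the original image source,
# so many variations of one image only fetch the original once.
proxy_cache_path /var/cache/nginx/originals levels=1:2
                 keys_zone=originals:10m max_size=10g inactive=7d;

server {
    listen 8081;

    location / {
        proxy_pass https://my-bucket.example.com;   # placeholder origin
        proxy_ssl_server_name on;                   # send SNI for the upstream host
        proxy_cache originals;
        proxy_cache_valid 200 7d;                   # keep good originals for a week
        proxy_cache_lock on;                        # collapse concurrent misses into one fetch
        proxy_cache_use_stale error timeout updating;
    }
}
```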