Running an Image Transformation Service (like imaginary or imgproxy) on Fly

There are multiple image resizing services available now, both as ready-to-deploy packages like imgproxy and h2non/imaginary (a fast, simple, scalable, Docker-ready HTTP microservice for high-level image processing), as well as libraries you can use to build your own system.

In this guide we’ll look at how to effectively deploy any of these image resizing services on Fly. We’ll also look at adding an NGINX proxy and caching layer: transforming images is usually compute-intensive work, and we want to cache the results wherever possible. The URLs also tend to be immutable and good candidates for caching, because most services work by encoding the properties of the transformed image into the URL itself, so a different transformation or a different image results in a new URL.
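As an aside on why these URLs are safe to cache: imgproxy, for example, signs each URL over the full processing path using HMAC-SHA256. Here's a minimal sketch of that documented signing scheme; the hex key and salt values below are made-up placeholders, and the bucket/path is purely illustrative:

```python
import base64
import hashlib
import hmac

# Placeholder values -- in a real deployment these come from the
# IMGPROXY_KEY and IMGPROXY_SALT settings (hex-encoded random bytes).
KEY = bytes.fromhex("943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881")
SALT = bytes.fromhex("520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5")

def sign_path(path: str) -> str:
    """Sign an imgproxy processing path: HMAC-SHA256(key, salt + path),
    base64url-encoded with the padding stripped."""
    digest = hmac.new(KEY, SALT + path.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The transformation is baked into the path, so a different resize
# (or a different source image) yields a different signature and URL.
path = "/rs:fill:300:400/plain/s3://my-bucket/photo.jpg"
url = f"/{sign_path(path)}{path}"
print(url)
```

Because the whole transformation is part of the path, a given URL always means exactly one output image, which is what makes caching it indefinitely safe.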

Deploying the transformation service itself is easy using the fly launch command:

fly launch --image darthsim/imgproxy:latest --no-deploy

This creates an app on Fly (let’s call it imgproxy for now), along with a fly.toml that sets up a public service on port 8080 (which is what imgproxy uses by default). If we wanted to run this service directly, we could leave this as it is and run it on the public URL, or set up a custom domain. If we want to protect our image service behind a cache instead, we can go ahead and delete the entire [[services]] section. We’ll refer to it from the NGINX layer using the app.internal internal URL.
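For reference, the generated fly.toml will contain a section along these lines (the exact contents vary by flyctl version; the internal_port is the important part). This is the block to delete if we want the app to be internal-only:

```toml
app = "imgproxy"

[build]
  image = "darthsim/imgproxy:latest"

# Delete this whole [[services]] section to run imgproxy as a
# purely internal service, reachable only at imgproxy.internal.
[[services]]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443
```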

The image service will require configuration to tell it where to find the raw original images, and we can set those environment variables up with fly secrets. One trick when dealing with a long list of environment variables is to put them into an .env file (remember to .gitignore it) and run

cat .env | fly secrets import

to import them all in one go. I’m setting up the proxy against S3 and doing it the hard way, so I’ll run:

fly secrets set IMGPROXY_USE_S3=true  
fly secrets set AWS_REGION=eu-west-1
fly secrets set IMGPROXY_TTL=31536000
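Alternatively, the same settings can live in the .env file mentioned above and be piped through fly secrets import. A sketch of what that file might look like for an S3-backed imgproxy; the credential values are placeholders, and AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the standard AWS credential variables:

```shell
# .env -- remember to keep this out of version control!
IMGPROXY_USE_S3=true
AWS_REGION=eu-west-1
AWS_ACCESS_KEY_ID=AKIA-placeholder
AWS_SECRET_ACCESS_KEY=placeholder-secret
IMGPROXY_TTL=31536000
```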

IMGPROXY_TTL (or the equivalent for any other service) is important: it’s much easier to run an NGINX cluster or external CDN if the service sets cache headers correctly. We prefer systems where we never overwrite images, so an image URL is always an immutable function of the image itself and can be cached indefinitely.

We can then deploy the imaging service with fly deploy. If we want to use the service with an external CDN like CloudFront or Bunny, and have retained the [[services]] block, that’s all there is to it. If we want to run our own cache, let’s move on to setting up NGINX.

Our plan here is to set up an NGINX cluster very similar to the default, except that we’ll change the source to access imgproxy.internal directly over the internal network. This allows us to remove the [[services]] block completely from the imgproxy app, running it as a purely internal service. We’ll use the process covered in the global cache cluster guide. Once that’s set up, the only thing left is to update the source URL.

We’ll add a resolver to our nginx.conf with

resolver [fdaa::3]:53 valid=5s;

and update the proxy_pass line to

set $image_servers imgproxy.internal;
proxy_pass http://$image_servers:8080;

This tells NGINX that the list of servers needs to be fetched from imgproxy.internal: this is a DNS query that returns the IPv6 addresses of all the instances of the app running on Fly. Putting the domain name inside the $image_servers variable forces NGINX to re-resolve it each time, which gives us a fresh list after any scaling up or down on the imgproxy app. Setting the resolver explicitly is necessary for NGINX to do this, and we can set the valid time to whatever we want the update frequency to be. See DNS for Service Discovery with NGINX and NGINX Plus for more information about configuring NGINX with the resolver and server variable names.
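Putting the pieces together, a minimal server block for this setup might look like the following sketch. The cache zone name, sizes, paths, and TTLs here are illustrative choices, not taken from the cache cluster guide:

```nginx
# Illustrative sketch -- zone name, sizes and paths are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=images:10m
                 max_size=1g inactive=1y use_temp_path=off;

server {
    listen 8080;

    # Fly's internal DNS server; re-check the name every 5 seconds.
    resolver [fdaa::3]:53 valid=5s;

    location / {
        # Using a variable forces NGINX to re-resolve the name,
        # picking up imgproxy instances as the app scales.
        set $image_servers imgproxy.internal;
        proxy_pass http://$image_servers:8080;

        # Cache transformed images locally; upstream Cache-Control
        # headers (driven by IMGPROXY_TTL) take precedence over
        # the proxy_cache_valid fallback.
        proxy_cache images;
        proxy_cache_valid 200 1y;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```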