I have an Astro website (SSR, Node.js adapter) running on a private IPv6 network with a Varnish proxy in front of it. Everything works fine except handling clean/friendly URLs for static pages (/about.html → /about), serving 404 pages, etc.
Do I need Nginx or can Varnish be used for the same purpose?
If so, should I run a total of 3 apps (Node.js, Varnish, Nginx), or create a Dockerfile for both Varnish and Nginx on different ports? What is the best way to scale it (multiple edge regions)?
If we don’t use the Varnish proxy, Fly itself acts as a web server serving the content as intended.
I don’t know anything about Varnish, so I might be missing something; let me know.
My suggestion would be to ship nginx or Caddy for pretty URLs inside your main app, so it’s easier to scale. If possible, I’d even ship Varnish together in the same app.
Why? Fewer running VMs means you pay less. Plus, our proxy is very good at stopping machines when they’re not in use, and adding more proxy layers might make things trickier, especially with multiple regions.
You’d need to tweak your Dockerfile.
Hopefully other folks can contribute with more ideas!
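For the pretty-URL part specifically, a minimal nginx server block along these lines could do it. This is only a sketch: the port, document root, and backend address are assumptions about a typical Astro SSR build, not taken from your setup.

```nginx
# Sketch: serve pretty URLs for Astro's static output,
# proxying everything else to the Node SSR server.
# Port, root, and backend address are assumptions.
server {
    listen 8080;
    root /app/dist/client;

    # /about -> /about.html, then /about/index.html,
    # then fall back to the Node server for SSR routes.
    location / {
        try_files $uri $uri.html $uri/index.html @node;
    }

    location @node {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }

    # Serve the prebuilt 404 page for missing static files.
    error_page 404 /404.html;
}
```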
Yes, it can be a great solution. Locally I have a docker-compose setup that handles each service separately, but I’m not sure how to combine them into a single image. We can create a custom image based on our needs, but is that an ideal solution? Perhaps using a supervisor?
In fact, I was referring to Fly.io’s built-in web server capabilities ([statics]), which act as a proxy to serve static resources, so I wouldn’t need Nginx in my case. Unfortunately, as I understand it, that only works if the app is exposed to the Internet.
But since it’s running on a private network that we reach through the Varnish proxy, that no longer works. So I need to run Nginx on top of the Node server on the private network, with Varnish for public access.
In that case I’d make a custom Docker image and add the other things needed. You can change the entrypoint to a custom shell script. You can use a supervisor if you want, or something as simple as
`nginx something & node your-app.js`, for example.
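To make that concrete, a combined image might look roughly like the sketch below. The base image, package names, build commands, and file paths are all assumptions, not something from your repo; adjust them to your actual project.

```dockerfile
# Sketch of a single combined image: Node SSR app + nginx.
# Base image, paths, and commands are assumptions.
FROM node:20-slim

# Install nginx alongside Node.
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Hypothetical config and entrypoint script that starts
# both processes (e.g. nginx plus `node ./dist/server/entry.mjs`).
COPY nginx.conf /etc/nginx/nginx.conf
COPY start.sh /start.sh
RUN chmod +x /start.sh

CMD ["/start.sh"]
```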
dockerfile-rails will configure nginx if you pass it a `--nginx` flag. You can see an example of the output here: https://github.com/fly-apps/dockerfile-rails/blob/main/test/results/nginx/Dockerfile
For Rails, ruby-foreman is used, Action Cable is set up, and we can infer where the document root is. The access and error logs are also redirected to stdout.
We could do most of this for node. Perhaps default to node-foreman, and require the nginx root to be passed as an option. This will make configuring nginx for node applications as easy as:
npx dockerfile --nginx-root=build
Thanks! I think this is what I need.
I’ll start looking into that tomorrow. It shouldn’t be difficult given that I already have working code for Rails.
Maybe I’ll just use a Debian/Ubuntu image with a supervisor.
If you want to go that way, this page might help: Running Multiple Processes Inside A Fly.io App · Fly Docs
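If you go the supervisor route, a Procfile run by foreman/node-foreman could tie the processes together. A sketch, where the process names, entry file, and ports are assumptions:

```procfile
# Hypothetical Procfile: one entry per service, logs to stdout.
web: node ./dist/server/entry.mjs
nginx: nginx -g 'daemon off;'
varnish: varnishd -F -a :8080 -f /etc/varnish/default.vcl
```

Running them all in the foreground (`daemon off;` for nginx, `-F` for varnishd) lets the supervisor own the process lifecycle and capture logs.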
Meanwhile, I’ve added a `--nginxRoot` flag to dockerfile-node, and am curious whether this is enough for your needs or if more is needed.
Would it be helpful to add support for you to provide a vcl file?
That’s great! Do you mean adding Varnish support to your image? There is a tutorial for Debian; I think I can configure it myself using this image. Thank you very much.
I’m willing to add Varnish support, as it may help others. My preference is that a developer’s first experience with Fly.io doesn’t require researching how to configure a number of components, since many aren’t familiar with Dockerfiles and the Fly.io runtime.
But if you have what you need, I’ll leave that for another day. Just be aware that there isn’t systemd by default on Fly.io Machines, so launch Varnish directly from your Procfile. If my reading is correct, Varnish writes its logs to stdout, so you’re set there.
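For anyone following along, the VCL side can start very small. A minimal `default.vcl` that points Varnish at a local backend might look like this; the host and port are assumptions about where nginx (or the Node server) listens in this setup:

```vcl
# Minimal sketch of a Varnish config fronting a local backend.
# Backend host/port are assumptions.
vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
```

Caching policy (TTLs, cookie handling, grace) would go in `vcl_recv`/`vcl_backend_response` subroutines on top of this.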
That would be perfect. Thanks again for your help.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.