I know there have been conversations in the past about possibly avoiding running workers on Fly, but I'm curious how running a BullMQ (GitHub - taskforcesh/bullmq: BullMQ - Premium Message Queue for NodeJS based on Redis) worker process on Fly via throng (GitHub - hunterloftis/throng: A simple worker-manager for clustered Node.js apps) would be any different from running a Node web server?
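For concreteness, the kind of process I have in mind is roughly this sketch (the queue name, Redis host, and job handler are all placeholders):

```typescript
import throng from 'throng';
import { Worker } from 'bullmq';

// Placeholder Redis location -- on Fly this would point at your Redis app.
const connection = { host: process.env.REDIS_HOST ?? '127.0.0.1', port: 6379 };

function start(id: number) {
  // Each clustered process runs its own BullMQ worker, pulling from
  // an assumed "jobs" queue.
  const worker = new Worker(
    'jobs',
    async (job) => {
      console.log(`[worker ${id}] processing job ${job.id}`, job.data);
    },
    { connection }
  );

  worker.on('failed', (job, err) =>
    console.error(`[worker ${id}] job ${job?.id} failed:`, err.message)
  );
}

// throng forks the given number of processes and restarts any that die.
// Note there's no HTTP listener at all; the processes just consume jobs.
throng({ worker: start, count: 4 });
```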
That would work just fine! We have basic support for apps with no services. If you create a new app and remove the `[[services]]` block from `fly.toml`, you'll basically have a worker app.
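A minimal sketch of that shape of `fly.toml`, with a made-up app name (the worker command itself comes from your Dockerfile's CMD):

```toml
# fly.toml for a worker-only app: no [[services]] block,
# so Fly allocates no public ports for it.
app = "my-worker" # hypothetical app name

[env]
  NODE_ENV = "production"
```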
What we’re missing is:
- A way to run workers in conjunction with a web app. You can make worker-only apps and web-only apps, but not both in a single app. That irritates me.
- Autoscaling. You have to choose how many you run. Works fine but it’s not my favorite UX. Workers should be “easy” to scale to 0 but they’re not yet.
Are solutions to those limitations on the roadmap?
@kurt Can you clarify whether it's a good idea to use Fly Redis for a queue, since it's documented as strictly for cache use cases only?
We run a few Redis servers in our stack that are deployed on Fly. You will not want to use the Fly global Redis for this; you'll want to deploy your own Redis app and attach it to its own volume.
Here is a thread that helped get us deployed:
You can now run workers alongside your main app: Preview: multi process apps (get your workers here!)
There’s no autoscaling of workers, however!
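For anyone landing here later, the multi-process setup boils down to a `[processes]` table in `fly.toml` mapping process names to commands, something like this sketch (entrypoints are hypothetical):

```toml
# One app, two process groups: only the web process gets public routing,
# the worker just runs.
[processes]
  web = "node server.js"     # assumed entrypoint
  worker = "node worker.js"  # assumed entrypoint

[[services]]
  processes = ["web"] # only the web process receives public traffic
  internal_port = 8080
  protocol = "tcp"
```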
We have opted to run workers as their own Fly app so we can scale them separately and run them where it makes the most sense.
Is hosting Redis on Fly a good idea, though? Doing that seems far from zero-ops.
You always have a choice of where you host anything, but it actually makes a ton of sense to host Redis on Fly if the app servers that need to connect to it are also hosted on Fly.
None of our Redis servers are accessible to the outside world; they can only be reached by the app servers over the Fly internal network, which is huge from a performance and security standpoint.
It's also super cool if you want to host a local Redis server in each region where your app servers run, so there's never a far hop to query Redis. You can also set up the Redis servers as read replicas if you want a kind of global cache for super-performant reads.
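As a sketch of the region-local idea: Fly sets a `FLY_REGION` env var on each instance, and (assuming your Redis app is named `my-redis`) Fly's internal DNS exposes per-region hostnames like `<region>.my-redis.internal`, so an app server can prefer the replica in its own region:

```typescript
import Redis from 'ioredis';

// FLY_REGION is set by the Fly runtime; "my-redis" is a hypothetical
// app name for the Redis deployment described above.
const region = process.env.FLY_REGION;
const host = region ? `${region}.my-redis.internal` : 'my-redis.internal';

// Nothing here is exposed publicly -- .internal names only resolve
// on Fly's private network.
const redis = new Redis({ host, port: 6379 });

async function main() {
  console.log(await redis.ping()); // "PONG" from the same-region replica
}

main().catch(console.error);
```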
Redis at small scale is effectively zero ops. I'd like to build first-class Redis support into our CLI (like we have with Postgres), but this is simpler to run than most apps: GitHub - fly-apps/redis: Launch a Redis server on Fly
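To tie it back to the volume point above, the config for an app like that ends up being roughly this shape (app and volume names are illustrative; check the fly-apps/redis repo for the real thing):

```toml
app = "my-redis" # hypothetical app name

# Persist Redis data across deploys by mounting a Fly volume at /data.
[mounts]
  source = "redis_data" # created beforehand with `fly volumes create redis_data`
  destination = "/data"

# No [[services]] block: Redis stays reachable only over the internal network.
```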
Thanks for the feedback, useful insight!