Path-based HTTP routing to instance groups of a single deployment

Greetings! Just got started with Fly.io and really love the platform flexibility so far.

One thing missing from many PaaS implementations is the ability to route HTTP requests by path to a set of instances under a single deployment.

For example, in a Rails application you might want to separate your websocket traffic from standard web traffic, or to isolate slow requests from fast ones, without the overhead of keeping two deployments in sync. Might this be possible with Fly’s router at some stage?

If not, it would be interesting to try implementing this with an intermediate tool like HAProxy. To route traffic directly to instances, though, you would need some concept of grouping instances within a deployment, and a way to subscribe to VM scaling events.

Another HAProxy approach could be to deploy the same prebuilt image to two applications, then route traffic to the internal application endpoints exposed by DNS service discovery. This seems like it would reduce, but not remove, the risk of these two applications getting out of sync.
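As a rough sketch of that second approach, HAProxy could split traffic by path across the two apps over Fly’s internal DNS. The app names, the listen port, and the resolver address here are assumptions on my part, not a tested config:

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Fly's internal DNS resolver (address is my assumption)
resolvers flydns
    nameserver fly [fdaa::3]:53

frontend http_in
    bind *:8080
    # send Action Cable traffic to its own backend
    acl is_cable path_beg /cable
    use_backend cable if is_cable
    default_backend web

# each backend discovers its instances via <app>.internal DNS records
backend web
    server-template web 5 myapp-web.internal:8080 resolvers flydns init-addr none check

backend cable
    server-template cable 5 myapp-cable.internal:8080 resolvers flydns init-addr none check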

Cheers in advance.

Joshua

This is a super interesting question. We do have plans for some routing logic that lets you control where requests go (starting with static assets we can cache at the edge). But I hadn’t thought of using rules to pick “sets” of application VMs like that.

It makes a lot of sense, though, especially for Rails / Django. Have you seen a good UX for defining these groups and rules?

The only platform I know of that supports this today is DigitalOcean’s App Platform. Here’s an example from their app spec (with some config redacted for clarity):

services:
- http_port: 8080
  instance_count: 2
  instance_size_slug: professional-s
  name: rails-actioncable
  routes:
  - path: /cable

A service is similar to a Fly application, only one level deeper, much like a service in Docker Compose. An app may run any number of independent services, and each service can reserve specific paths and has its own instance count and size.

They don’t have any visual representation of this in their UI, though they have something similar in their Kubernetes product, where node groups are represented as independent instance groups with their own scaling rules and settings: two node groups in a single cluster can each have a different scaling configuration.

Oh that’s helpful. We are planning to let you create multiple process types in a single app. We primarily need those so people can run workers/web processes, but they’d work equally well for splitting by request type.
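As a purely hypothetical sketch, the fly.toml side might end up looking something like this (the section names, commands, and the processes filter are all made up at this point):

[processes]
  web = "bundle exec puma -C config/puma.rb"
  worker = "bundle exec sidekiq"

[[services]]
  # only the web process group would receive routed traffic
  processes = ["web"]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80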

When we get there I’ll take another look at this, I think it’s a pretty useful power.

Great! So right now, workers must run as a separately deployed application?

Yep, it’s a little gross to set up but seems to work pretty well.

Sure, I think it’s fine for many cases. The main risk is the two deployments’ app code getting out of sync.

For workers I am planning to use Foreman as the entry point in my Docker image and run the worker in the same VM as my Rails app.
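Roughly like this; Puma and Sidekiq here are just placeholders for whatever the actual web and worker commands are:

# Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq

with the Dockerfile ending in something like CMD ["bundle", "exec", "foreman", "start"] so both processes run under Foreman in the one VM.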

That should work fine.

We do want to offer more features for easily running multiple processes / instances as a single logical unit.

Relevant to the discussion around running multiple processes in “docker”

s6 is pretty nice. I’ve been (between work projects) using it to set up Ghost + Litestream: https://ghost-blog.fly.dev
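The supervision side is just s6-overlay service directories, one run script per process. This is a simplified sketch rather than my exact setup, and the Litestream config path is assumed:

#!/bin/sh
# /etc/services.d/litestream/run: keep replicating the SQLite database
exec litestream replicate -config /etc/litestream.yml

Ghost gets its own service directory alongside it, and s6 supervises both.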