Greetings! Just got started with Fly.io and I'm really enjoying the platform's flexibility so far.
One thing missing from many PaaS implementations is the ability to route HTTP requests by path to a set of instances under a single deployment.
For example, in a Rails application you might want to separate your websocket traffic from standard web traffic, or to isolate slow requests from fast ones, without the overhead of keeping two deployments in sync. Might this be possible with Fly’s router at some stage?
If not, it would be interesting to try implementing this with an intermediate tool like haproxy. To route traffic directly to instances, though, you would need some concept of grouping instances within a deployment, and a way to subscribe to VM scaling events.
Another haproxy approach could be to deploy the same prebuilt image to two applications, then route traffic to the internal application endpoints exposed by DNS service discovery. This seems like it would reduce, but not remove, the risk of these two applications getting out of sync.
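Roughly, a minimal haproxy sketch of that idea might look like the following, assuming haproxy runs in its own Fly app in front of two apps named myapp-web and myapp-ws (the app names, port, and path rule are all placeholders):

# Hypothetical haproxy.cfg: split traffic by path across two Fly apps
# discovered via Fly's .internal DNS. Names and ports are assumptions.
resolvers flydns
    parse-resolv-conf             # use the VM's /etc/resolv.conf (Fly's internal DNS)
    accepted_payload_size 8192

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind :8080
    acl is_ws path_beg /cable     # send websocket traffic to its own backend
    use_backend ws if is_ws
    default_backend web

backend web
    server-template web 5 myapp-web.internal:8080 check resolvers flydns resolve-prefer ipv6 init-addr none

backend ws
    server-template ws 5 myapp-ws.internal:8080 check resolvers flydns resolve-prefer ipv6 init-addr none

The server-template/resolvers combination re-resolves the .internal names as instances come and go, which sidesteps the need to subscribe to scaling events directly.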
This is a super interesting question. We do have plans for some routing logic that lets you control where requests go (starting with static assets we can cache at the edge). But I hadn’t thought of using rules to pick “sets” of application VMs like that.
It makes a lot of sense, though, especially for Rails / Django. Have you seen a good UX for defining these groups and rules?
The only platform I know of that supports this today is Digital Ocean’s App Platform. Here’s a simplified example of their app spec (illustrative; names and values are placeholders):
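# Illustrative DigitalOcean app spec sketch; the service names, paths,
# counts, and sizes below are placeholders.
name: sample-rails
services:
  - name: web
    routes:
      - path: /
    instance_count: 3
    instance_size_slug: basic-xs
  - name: websockets
    routes:
      - path: /cable
    instance_count: 1
    instance_size_slug: basic-s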
A service is similar to the concept of a Fly application, just one level deeper, much like a service in Docker Compose. An app may run any number of independent services, and each service may reserve specific paths and has its own instance count and size.
They don’t have any visual representation of this in their UI, though they do have something similar in their Kubernetes product, where node groups are presented as independent instance groups with their own scaling rules and settings: two node groups can live in a single cluster, each with a different scaling configuration.
Oh that’s helpful. We are planning to let you create multiple process types in a single app. We primarily need those so people can run workers/web processes, but they’d work equally well for splitting by request type.
When we get there I’ll take another look at this, I think it’s a pretty useful power.
Is this something that seems far off? I’ve got a few apps that I’d love to move over, but they would need this sort of coordination of deployments in a single group. Path-based routing could still be dealt with another way (putting nginx or haproxy in the middle).
I would call this a “second tier” priority, which means we might tackle it in a month or two, but that’s not really something to gamble on! Using haproxy or nginx in the middle seems like something we shouldn’t be inflicting on people, though.
I was mixing two issues here, I think. Path-based routing isn’t hard to work around: running haproxy/varnish is a requirement for some projects anyway, and that’s OK, since they are simple to run within Fly.
However, deploying multiple components is harder to work around. The workaround suggested here (using a process monitor in a container) feels worse to me than the pain inflicted by an intermediate cache/proxy, because worker and web workloads are usually quite different, with different memory, CPU, and scaling requirements.
That said, for now maybe it’s not terrible to deploy two apps. My concerns there:
If one deployment fails and the other succeeds, the two will be out of sync
There are no guarantees about the two deployments going out at the same time
I’m open to suggestions on how to solve these two problems while waiting for this functionality to be added.
Bumping this to see if any progress on ‘multiple components’ may be on the horizon, before looking into custom orchestration.
As mentioned above, using s6 will be a problem given the different requirements of workers versus web processes; these shouldn’t be interfering with each other.
The alternative would be to run two deployments, and ensure that a successful one is rolled back if the second deployment does not succeed. Is anyone already doing something like this?
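For illustration, a minimal sketch of that coordination, assuming both apps run the same prebuilt image (the app names, image tags, and environment variables here are all assumptions):

#!/usr/bin/env bash
# Hypothetical deploy coordination: push the same image to both apps,
# and roll the first app back if the second deploy fails.
set -euo pipefail

IMAGE="registry.fly.io/myapp:${GIT_SHA:?set GIT_SHA}"                 # prebuilt and pushed beforehand
PREV_IMAGE="${PREV_IMAGE:?set PREV_IMAGE to the last known-good tag}"

fly deploy -a myapp-web --image "$IMAGE"

if ! fly deploy -a myapp-worker --image "$IMAGE"; then
  echo "worker deploy failed; rolling myapp-web back to $PREV_IMAGE" >&2
  fly deploy -a myapp-web --image "$PREV_IMAGE"
  exit 1
fi

This narrows the window where the two apps disagree, but it doesn’t eliminate it: requests can still hit the old worker while the new web is live (and vice versa during rollback).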
@endersonmaia I don’t know if it’s exactly what you’re looking for, but for apps v2 you can specify which process groups you want an [http_service] block to apply to.
[http_service]
processes = ["web"] # this service only applies to the web process
internal_port = 8080
force_https = true
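For completeness, a minimal sketch of how the process groups themselves might be defined in the same fly.toml (the commands are placeholders for a Rails app):

# Hypothetical [processes] block; the commands are placeholders.
[processes]
  web = "bundle exec puma -C config/puma.rb"   # serves HTTP; matched by the [http_service] above
  worker = "bundle exec sidekiq"               # background jobs; no public service

Each group can then be sized and scaled independently, e.g. with something like fly scale count web=3 worker=1.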