Multiple applications with subdomains behind 1 IP

Hi
We’re new to fly.io and we have three web applications we’d like to host on Fly.
They are three Rails apps that share Postgres, Redis and Memcached, since they all act on the same data. Let’s call them backoffice, selfcare and web.

All of those are in the same domain and each use a separate subdomain:
backoffice.rails-app.com, selfcare.rails-app.com and web.rails-app.com
Therefore we built a single Docker image with an entrypoint script capable of starting each of those apps.

At our present hosting provider, which is VM-based, we use an nginx reverse proxy, which serves the public files and handles TLS through Let’s Encrypt.

We are now thinking about the new architecture on fly.io.
Two scenarios make sense for us:
Scenario 1:
We have one app with three processes (each of which will start its own virtual machines)
Scenario 2:
We have three apps, one for each application

Since we have only one docker-image, we would prefer scenario 1.

The difficulty lies in the services section. As I understand it, we can have only one [http_service] section, but multiple [[services]] sections.
So we’d need to define three services:

[processes]
  app = '/rails/backoffice/bin/rails server -p 3000'
  web = '/rails/web/bin/rails server -p 4000'
  selfcare = '/rails/selfcare/bin/rails server -p 5000'

# backoffice
[[services]]
  internal_port = 3000
  protocol = "tcp"
  processes = ['app']

# web
[[services]]
  internal_port = 4000
  protocol = "tcp"
  processes = ['web']

# selfcare
[[services]]
  internal_port = 5000
  protocol = "tcp"
  processes = ['selfcare']
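
(Each of the three [[services]] blocks would presumably also need a [[services.ports]] subsection for its external port and handlers; since all three services share the app’s IP, each would have to use a different external port, so the values below are only placeholders:)

  # nested under the respective [[services]] block; port and handlers are placeholders
  [[services.ports]]
    port = 443
    handlers = ['tls', 'http']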

The question now is: is the ingress capable of routing the traffic to our three different Rails apps based on the given subdomains, and if so, how?

Furthermore, we would need to give the statics section a hint about which app has which static path:

[[statics]]
  guest_path = '/rails/backoffice/public'
  url_prefix = '/'
  processes = ['app']

• Is there a possibility we can handle these scenarios through the Fly ingress (or load balancer), or do we need an additional app with our own nginx reverse proxy?

• And if we provide an nginx app: how can we route to e.g. the selfcare process or selfcare app as an upstream if we scale those VMs? Can we address it as selfcare.internal, and will this be load-balanced again?

• Can we add multiple certificates to one app?

Thanks a lot in advance for any suggestions or hints

I would suggest something a little different.

The Fly config allows you to specify the location of the Dockerfile.

If you use a monorepo setup, you can maintain a single Dockerfile at the root for all three apps. The fly.toml in each subfolder (one per app) can then point to the Dockerfile at the root:

[build]
  dockerfile = "../Dockerfile.test"

Processes have some downsides; there are plenty of posts about that. They are great for running Sidekiq or something similar next to an app, but using them for multiple apps is troublesome. I originally started in that direction and switched to using a monorepo with each application being its own Fly app.

I have a dedicated CLI tab in my IDE for each folder and can easily switch between them to run fly commands for that app. This has worked really well for managing these applications.

The subdomain setup would be the same as for any app, so you would not need a reverse proxy with this pattern, which would save hardware costs.

Curious how you handle multiple masters. Can all apps write to the database?

Hi
Thanks a lot for your reply!
Actually, we just headed in this same direction. One Dockerfile, multiple apps with multiple fly.toml configurations.
This would also solve the problem with the [[statics]] section. Each app has its own, and fly-proxy can handle that.
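
For example, in the selfcare app’s own fly.toml it would roughly be (the path follows the image layout from the processes above):

[[statics]]
  guest_path = '/rails/selfcare/public'
  url_prefix = '/'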
The point we are still struggling with is the single IP for all three subdomains. Each app has its own IP (shared or dedicated), and I haven’t seen a way to handle this yet. As long as all three apps cannot be forced onto the same public IP, we still need a reverse-proxy app like nginx as ingress and dispatcher.

I saw the fly-replay feature, but I don’t think we want to reroute each request from the backoffice app to the others (Dynamic Request Routing · Fly Docs).

Curious how you handle multiple masters. Can all apps write to the database?

Good point. We didn’t really think about it, since we are beginners on Fly :wink:. We just connected the backoffice app to the pg cluster. And yes, all three apps write to the database.
In all apps we use “pg-database.internal” as the host, which all three apps can connect to.
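
(In each app’s fly.toml that is just an environment setting, roughly like the following; the variable name is only an assumption about what our database.yml reads:)

[env]
  DATABASE_HOST = 'pg-database.internal'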
Does the “master” thing imply something we overlooked?

Another emerging question is how we would upstream from a reverse proxy to the apps, since there are at least two machines running for e.g. the web app. In the nginx upstream section we would point to web.internal:5000. But who handles the load balancing?

I don’t know of a solution for three apps sharing one IP without using a reverse proxy.

Fly does have networking that allows you to route requests efficiently. Others have done the reverse-proxy approach; I think Fly may have some examples in their example repos if you search GitHub.

fly-replay is useful for DB writes. On Fly, scaling across regions is VERY easy. My website, for example, sits in 12 regions in the US; most are scaled to zero but will spin up as needed when a user visits near one of those regions.

For writes to the DB, your app would check which region has the master DB, as you may have replicas in your other regions. Then it would use fly-replay to send the request to a machine in the region with the master.

Some database technologies allow multi-master, where you can write to any instance and it will reconcile.

When I used AWS, I almost never built anything multi-region until it got to a certain scale. With Fly, I can be multi-region on day one with little effort.

Fly has the added benefit of moving so much compute to the edge; with other clouds there would be more layers before you got to a compute layer that can do what Fly can do. So I trust it for low-latency apps and efficient scaling with its scale-to-zero.

One downside of using a LARGE Docker image is that the larger the image, the slower the apps spin up. It’s a lot more noticeable on EC2 or Fargate than on Fly. But since you are building three apps from the same Docker image, just be careful what makes it into the final build per app so you trim it down as small as possible. It will improve your spin-up time, allowing you to maintain fewer machines and reduce costs.

It’s possible!
