How to access each process's database

@iampopal, you’re seeing the first one because our proxy just picked it as the closest to you. From the proxy’s perspective, your fly.toml treats both processes as the same app, so it doesn’t matter which one it routes to; any of them should be fine. But given your use case, that’s not true.

Managing multiple apps, one per customer, can indeed be challenging, especially around deployments, and I can totally see why you’d want all the machines on the same host. Here are some notes about it:

You need a way of routing URLs per customer

Say you have two stores: App Store and People Store. Whenever someone goes to restless-firefly-7869.fly.dev, you’d need to know which machine should serve the request. If you are going to give your customers custom domains, that makes it slightly easier: you could use Fly-Replay.

It could work something like this: if your app detects that the request is for app-store.com and the current machine is not the correct one, respond with Fly-Replay: instance=correct_machine_id.
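As a rough sketch (the domain-to-machine mapping and the machine ids here are invented for illustration), the decision could look like:

```python
# Hypothetical mapping from customer domain to the machine that holds
# that customer's database; fill this in from your own records.
MACHINE_BY_HOST = {
    "app-store.com": "3d8d9b00f1d058",
    "people-store.com": "e28697ce711e86",
}

def replay_header(host: str, current_machine_id: str):
    """Return the Fly-Replay header to send, or None if this machine
    should just serve the request itself."""
    wanted = MACHINE_BY_HOST.get(host.split(":")[0])  # drop an optional :port
    if wanted and wanted != current_machine_id:
        return ("fly-replay", f"instance={wanted}")
    return None
```

Your web framework would attach the returned header to an otherwise empty response, and Fly’s proxy would then replay the request on the target machine.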

But as you can see, this requires changing your app to do something Fly.io-specific.

Another way to handle this is a simple nginx/Caddy (or any other reverse proxy) setup, but I’d still recommend Fly-Replay because it happens at the proxy level, so it won’t cost you CPU cycles.
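For reference, a minimal Caddy sketch of that setup; it assumes the proxy can reach each backend over Fly private networking, and the domains and app names are placeholders:

```
app-store.com {
    reverse_proxy app-store-backend.internal:8080
}

people-store.com {
    reverse_proxy people-store-backend.internal:8080
}
```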

Does security concern your use case?

Both machines will be on the same app, which means they share some resources such as environment variables and secrets. If that’s a concern, I’d recommend creating two apps on different networks with fly apps create --network any-string-goes-here.

This is getting too complicated!

If you feel like the recommendations above are not a good fit for a simple use case, I’d recommend handling this at the application level. You could add some code that changes the DB connection based on the URL, or something like that.
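A minimal sketch of that application-level approach, with made-up domains and the DB paths quoted elsewhere in this thread:

```python
# Hypothetical host-to-database mapping: one server process serves every
# customer and just opens a different SQLite file per request.
DB_BY_HOST = {
    "app-store.com": "/app/bin/data.db",
    "people-store.com": "/app/bin/data2.db",
}

def db_path_for(host: str) -> str:
    # Strip an optional :port before the lookup.
    return DB_BY_HOST[host.split(":")[0]]
```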

Your use case feels like our Web Terminal

Each customer who goes to fly.io/terminal gets a secret app with a secret machine under a private network; that’s how our web terminal works. We had to do a bit of management on our side to handle thousands of machines being created. But in your case you seem to need only two apps for now, so maybe a simple enough solution is a CI job that deploys to both?

Thank you for your explanation.

Each of our apps will have its own secrets and its own mounted storage volumes.

Currently, we make a copy of fly.toml for each user and pass it on deploy:
--config flytomlname.toml
This works, but we are wondering what will happen when we have 100 users and how we will deploy updates for all of them.

One other workaround I have tried is using multiple ports: each port opens its own database.

Our Dockerfile exposes 2 ports:

...
EXPOSE 8080
EXPOSE 9090

# Running Dart server
CMD ["/app/bin/server"]

Then our fly.toml file has 2 processes and 2 services:

app = "restless-firefly-7869"
primary_region = "sin"
kill_signal = "SIGINT"
kill_timeout = 5

[processes]
  app = "/app/bin/server --db /app/bin/data.db --port 8080"
  app2 = "/app/bin/server --db /app/bin/data2.db --port 9090"

[experimental]
  allowed_public_ports = []
  auto_rollback = true


[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    force_https = true
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

[[services]]
  http_checks = []
  internal_port = 9090
  processes = ["app2"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    force_https = true
    handlers = ["http"]
    port = 9090

Now when we call https://restless-firefly-7869.fly.dev it works on the default port

but when we call https://restless-firefly-7869.fly.dev:9090 it does not work.

I think port-based routing to a different service would also solve our issue; is there a solution for that?

You have at least two machines: one runs port 8080 and the other 9090, because each process creates a machine. You can verify that in the dashboard or by running fly status. Your fly.toml assumes that every machine serves something on both port 8080 and port 9090, which is not the case. What could be happening is that the proxy is sending us to the machine that runs app, not app2, so we never see port 9090.

For your current fly.toml to work, you’d need every process to listen on both ports, or one process that listens on both, which doesn’t seem like what you want.
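For completeness, one process listening on both ports can be sketched like this; each listener just reports which database file it would use, with the paths taken from the fly.toml above purely as labels:

```python
import socket
import threading

def serve(port: int, db_path: str) -> None:
    # One listening socket per port; a real server would open the given
    # SQLite file instead of echoing its path back.
    srv = socket.create_server(("127.0.0.1", port))

    def loop() -> None:
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(db_path.encode())

    threading.Thread(target=loop, daemon=True).start()

serve(8080, "/app/bin/data.db")
serve(9090, "/app/bin/data2.db")
```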

If both of your customer apps will live on the same Fly app but on different routes, you’ll have to consider either a reverse-proxy solution to figure out which app to route to, or fly-replay.


This sounds like a good scenario for one Fly app per customer app. Deploys could be a little tricky, but that can be managed with a shell script, or with more complex pipelines in something like Jenkins if needed.


I have just read about fly-replay and checked the GitHub repository.

So is fly-replay set up by default in each project? Can I set a header on each request so it is routed to its own machine, i.e. a specific process?

So what is the best method to continue with

Shall we use processes with fly-replay and call each process?

Or

Shall we use an app for each customer and deploy updates to all apps with GitHub Actions?

Let me verify that I understand your use case correctly; feel free to correct me:

  1. You want to run apps for your own customers
  2. Each app will contain a different DB and different envs
  3. You want the deployment to be easy even when 2 customers become 100s

Points 1 and 2 strongly suggest the approach you should go for: 1 app per customer. It will make your life a lot easier in the short and long term in many ways (scaling, DNS, security…).

I understand point 3 might be scary at first, but I’d recommend starting simple: a deploy shell script that deploys to all apps via GitHub Actions. When that becomes harder to manage, switch to something more elaborate, like a different CI or a custom pipeline.

We do that here at Fly for thousands of apps for our customers, and we use the Machines API for it, so you could say we are our own customers too!

Fly-Replay is awesome and can take you far, but I think it adds more homework than just creating another Fly app, and it will require maintenance to keep working as your cluster grows.

Our main points are:

  1. We have accounting software sold as a SaaS and currently have 300 customers. Each customer runs a server on their own computer, and their other computers connect to it over the local Wi-Fi network.
    1.1) We currently use a local server because internet access in our country is very expensive. Now that some of our customers can afford internet, they want to access their database from anywhere, so we decided to use Fly.io for them.
    1.2) So the main point is: each of our customers needs to run an app.

  2. Each app will contain a different DB, different env variables, and different storage.

  3. We want deployment to be easy. Currently one customer is using a Fly app, and when we build a new version of our app we go to the terminal and run fly deploy --config customer.toml.

We have checked the Fly GitHub Actions; they don’t have a way of deploying multiple apps at once with different --config filenames.

Can you help us with some simple code for deploying 3 apps at once using GitHub Actions?

You can write a shell script with concurrent deploys and run it on GitHub Actions:
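Something along these lines (a sketch, not battle-tested; it assumes all the per-customer configs match fly*.toml in the working directory):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Deploy every fly*.toml in the current directory concurrently.
# FLYCTL can be overridden (e.g. FLYCTL=echo) for a dry run.
deploy_all() {
  local flyctl="${FLYCTL:-flyctl}"
  local pids=()
  local cfg pid
  for cfg in fly*.toml; do
    "$flyctl" deploy --config "$cfg" &
    pids+=("$!")
  done
  for pid in "${pids[@]}"; do
    wait "$pid"  # propagate the failure of any single deploy
  done
}
```

You’d call deploy_all from a GitHub Actions step after exporting FLY_API_TOKEN.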

You could also probably deploy multiple apps with matrix; I’m not sure about concurrency.


One other big need that we have:

Currently, as our app is offline and has a local server, we sell activation keys to our users to activate their systems. After they activate their system they don’t need internet and can access all premium features.

With a fully online service provided to our users, we need an admin app that will have buttons to:

  1. Create a new app
  2. Create and mount storage for new app
  3. Set secrets of new app
  4. Update secrets of old apps

How can we create an admin console with the above features, so we can serve many users without touching code again and again?


We have an API just for that!

Our CLI pretty much uses it for everything when it comes to machines. Soon it will also use that API for volumes. The only thing left out is secrets, which you should manage through our GraphQL API for now:

https://api.fly.io/graphql

If you ever wonder how to do something you can always look at the source code of our CLI:

I am not very good at bash script writing.

It would be great if the GitHub Fly action (GitHub - superfly/flyctl-actions: :octocat: GitHub Action that wraps the flyctl) had a way of accepting a file with a list like
[fly1.toml, fly2.toml, fly3.toml]
so we could easily deploy multiple apps just by reading the fly.toml file names from a folder or a file.


I think that’s already doable with the GitHub Actions matrix. A matrix is pretty much a loop where you can create variables for each iteration and use them in the action itself.


It would be great to have a Dart SDK for managing machines, so we could just use the SDK and keep working with it.

It would be a big help if you provided a simple example of that for us…

Here’s an example from the docs on how to use variables with matrix:

It should be the same, just changing the node versions to fly.toml names.
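Adapted that way, a workflow could look roughly like this (the config file names and the secret name are placeholders):

```yaml
name: Deploy all customer apps
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        config: [fly1.toml, fly2.toml, fly3.toml]
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --config ${{ matrix.config }}
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```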

Thank you. I am thinking of first writing a bash script, so that once it runs successfully on my OS it will also run successfully on GitHub Actions.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.