Fly + Docker Compose: “only one service can specify build”

I have a fairly simple SvelteKit + Vapor + PostgreSQL app. I’d hoped that Fly’s recent addition of support for Compose would let me create a nice one-stop deploy for the whole app, building and deploying both the front and back end with a single fly deploy:

  • nginx in one container
  • …serving compiled static assets for front end
  • …and proxying /api/* to a second container that has the back end.

(This topology is fine for my little app; I expect usage is never going to be such that I’ll need to scale up either the front or the back end.)
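In case it helps to picture the topology, here’s a rough sketch of the nginx config I mean — the asset path and upstream name are assumptions, not my exact config:

```nginx
# Hypothetical nginx server block: serve compiled SvelteKit assets,
# proxy /api/* to the api-server container (resolved via Compose DNS).
server {
    listen 80;
    root /usr/share/nginx/html;            # where the Dockerfile copies the built assets

    location / {
        try_files $uri $uri/ /index.html;  # SPA-style fallback
    }

    location /api/ {
        proxy_pass http://api-server:8080; # no trailing URI, so /api/foo passes through as-is
        proxy_set_header Host $host;
    }
}
```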

Alas, when I enable [build.compose] in my fly.toml and try to deploy that, Fly informs me that:

only one service can specify build

Drat! This is a surprising and disappointing limitation. Am I just out of luck doing this one-stop deploy with Fly?
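(The stanza itself is tiny, for what it’s worth — this is my reading of the docs, so the sub-key name may be off:)

```toml
# fly.toml — point Fly's builder at the Compose file (key name per my reading of the docs)
[build.compose]
  file = "compose.yml"
```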


Here’s the relevant portion of my compose.yml file, in case it’s illuminating:

services:

  web-server:
    build:
      context: .
      target: web-server
    depends_on:
      - api-server
    ports:
      - '80:80'

  api-server:
    build:
      context: .
      target: api-server
    environment:
      <<: *server_environment
    depends_on:
      - db
    ports:
      - '8080:8080'

  db:
    image: postgres:16-alpine
    volumes:
      - db_data:/var/lib/postgresql/data/pgdata
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: vapor_username
      POSTGRES_PASSWORD: vapor_password
      POSTGRES_DB: vapor_database
    ports:
      - '5432:5432'


AFAICT, my options here are:

  1. Don’t use compose with Fly, and instead deploy the front and back end as completely separate Fly apps.
  2. Make Vapor serve the static assets itself.
  3. Deploy the static assets completely separately, on Netlify or something.

Is there some tidier way to do this? Wanting to deploy both the front and the back end of a web app in one fell swoop seems like it must be a fairly common wish, and I feel like I must be missing something.

Hi… The Docker Compose compatibility layer is still fairly basic, and the fact that only one container can do a build is a known limitation, alas. You don’t have to put everything in separate Machines if you don’t want to, but you would need to build and push at least one of the containers’ images as a separate step.
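Concretely, that separate step would look roughly like this — app name and tag are placeholders, but `fly auth docker` and pushing to registry.fly.io are the standard route:

```shell
# Hypothetical manual build-and-push for the API image (names are placeholders).
fly auth docker                                              # let local Docker push to Fly's registry
docker build --target api-server -t registry.fly.io/my-api-app:latest .
docker push registry.fly.io/my-api-app:latest
# Then reference that pushed image from the compose file instead of a build: stanza.
```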

Alternatively, it’s probably worth looking at Fly.io’s own [[statics]] feature; that might be able to replace your Nginx container entirely. The /api/ subtree, if I’m reading the official docs correctly, would just fall through to your dynamic server, provided it’s not present in the specified static directory hierarchy.
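For reference, a statics stanza is just a mapping from an in-image path to a URL prefix — something like this, with the guest path depending on where your build puts the compiled assets:

```toml
# fly.toml sketch: serve compiled assets straight from the Machine's image.
# guest_path is an assumption about your Dockerfile's layout.
[[statics]]
  guest_path = "/app/public"
  url_prefix = "/"
```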

For context, it’s probably worth at least skimming the following official doc, too, even though those older techniques are maybe not the first choice in this case:

https://fly.io/docs/app-guides/multiple-processes/


I would be remiss not to warn about single-Machine Postgres here as well: this is very dangerous on Fly.io’s Machines platform and more or less guarantees permanent data loss in the future, :dragon:. (Or, at the very least, a highly stressful manual data-rescue operation.)

It’s best to use Managed Postgres instead, unless this is only throwaway data or you have streaming backups set up, etc.


Thank you so much for this thoughtful and thorough answer!

I was not aware of that! It might indeed suit my needs. It would defeat one of the goals of Dockerizing the project, which is to make it possible to set up a local staging environment that’s as close as possible to production…but maybe I need to give up on that, and just accept that a local docker compose up bears only a passing relationship to prod. But it does sound like it could be a good option. I’ll take a look.

(This is my first time looking really seriously at Docker, and tbh I’m surprised at how clumsy it is, and how much it fails to achieve what would seem to me to be primary goals of unifying local / staging / prod environments and preventing hosting vendor lock-in.)

Thanks for the link, that’s a good tip. The Procfile approach at that link makes some kind of sense to me! I might poke at that too.
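(My mental sketch of that Procfile, for the record — the process names and the Vapor invocation are guesses on my part:)

```
# Hypothetical Procfile: run nginx and the Vapor server side by side in one Machine.
web: nginx -g "daemon off;"
api: ./App serve --env production --hostname 0.0.0.0 --port 8080
```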

Huh. Fly’s managed Postgres is a non-starter for this project: it would raise hosting costs by…6x? 8x? At that point I might as well put this on a dedicated Linode or something, where I can shed all these headaches of limited config. (I’m sure Fly’s managed Postgres is a good price for the gold-standard QoS it provides, but we simply don’t need most of what it offers.)

I’m concerned about your comment about permanent data loss, however. This is going to be a tiny database — mere hundreds of rows — and mostly read-only, such that daily backups are more than sufficient. High availability is not a concern; it just needs to usually work. Can you say more about where your comment about “more or less guarantees permanent data loss” comes from? Does Fly…sporadically delete volumes as a form of garbage collection or something?! It sounds from the message you linked to like Fly volumes aren’t resilient across hardware failure…?


If you just want a single Machine then that might actually be the better approach, to be honest… Fly.io’s Machines platform is mainly intended for people who want to spread multiple compute nodes across the entire globe and/or do clever things with the Machines API.

The new Sprites concept will probably be a good alternative for the more commonplace single-node setting, once they stabilize a lot more, :sweat_smile:

(It says “sandboxes” in the tag line, but they also intend these to be used for the kinds of small, relatively casual servers that lots and lots of people could really benefit from, if the barrier were just lowered a little.)

Exactly. The persistent volume disappears when the underlying hardware dies (which of course it will someday).

[You can search the forum archives for his username to see several specific “bad news” cases… It really takes people by surprise.]

The Machines platform largely assumes that you’ll have ≥2 copies of each volume, with replication handled by you at the app level. (There are halfway compromises that can be made instead, sometimes, but you’re going against the grain then.)

Possibly even 40×, if you have a low-traffic site with auto-stop enabled.

The forum has several fans of Supabase’s free tier, which might also be worth assessing. (I’ve never used it myself, though. I lean toward LiteFS and similar on Fly.io, although I’m a huge fan of Postgres for workhorse local databases…)

The database side has always been a shortcoming of Fly.io, really, although at least now the mid- and high-end tiers are well served…


I am surprised at this statement. Fly uses Docker (for building, not runtime), but Docker isn’t Fly specific. Docker is extremely vendor-neutral. What about it did you find clumsy? How did it fail on your goals?

I’m a little concerned by the “at the app level” comment. Unless I’m much mistaken, even the free tier PostgreSQL will make replicas at the DB level when there are multiple machines in the cluster…?

I’m still going to keep nightly backups on some other service, just because I’m paranoid like that…but are those “replica” machines not actually replicating unless I take some extra action?
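(The nightly backup itself is simple enough, I think — roughly this, with the app name and credentials as placeholders, tunneled over fly proxy:)

```shell
# Hypothetical nightly dump over a local tunnel to the DB Machine.
fly proxy 15432:5432 -a my-db-app &        # app name is a placeholder
pg_dump "postgres://vapor_username:vapor_password@localhost:15432/vapor_database" \
  | gzip > "backup-$(date +%F).sql.gz"
```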

I don’t want to derail this thread with my general critique of Docker’s configuration and architecture, which would be wildly off-topic for the whole forum, but: this thread right here is a great example of how Docker fails to prevent vendor lock-in.

Yes, Docker is vendor-neutral in what features it provides. It’s the features it fails to provide that leave necessary gaps for vendors to fill. I have a bog-standard app with (1) a statically deployed web front end, (2) an API server, and (3) a PG database. There is no Docker configuration that lets me just drop this app onto any hosting service without vendor-specific finagling. I have a Compose file that describes that and runs it locally — but no, it’s not suitable for production use. Every single question I’ve asked in this thread is basically a feature Docker fails to provide that makes deployment vendor-specific.


This sounds like it’s mainly unfamiliarity with Fly.io’s somewhat idiosyncratic terminology, actually…

Fly.io has no free tier anymore, except for people who signed up before 2024:

There is no “free account/free tier” on Fly.io. We do have a Free Trial program, which you can read about here.

And, apart from Managed Postgres, there is no “DB level” on Fly.io…

People like myself who set up their own distributed databases are doing replication at the app level, according to Fly’s way of thinking about it.

The screenshot looks like Legacy Postgres, which is likewise an app in its own right. (This confuses many people.) Consequently, the WAL shipping it’s doing is indeed app-level replication, and, as john-fly implied, that does stave off data loss on hardware failure. The Fly.io platform itself isn’t replicating the volumes, though, so it wouldn’t reach in and do this for your earlier Postgres container, etc. That’s the main point…

If you put a database on Fly.io, then it’s your responsibility to modify or configure it to do its own multi-Machine replication, failover, and so on.

(Also, I meant Supabase’s free tier rather than Fly.io’s Legacy Postgres. The latter is deprecated; I wouldn’t recommend it for new projects, unless you’re a very experienced Postgres admin. You are expected to manually unwedge the cluster failover, etc., every so often, and eventually users will even need to maintain the image Dockerfile themselves, :rough_sailing:.)


A good shout on keeping the topic focussed. :raising_hands: I’d be most interested in another thread on that topic, were you minded to copy your remarks there! I’m not a mod or an employee, but there is a fair degree of latitude as to what may be posted here, AFAICT.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.