Where are build releases run?

I don’t know why, but when I run the deploy command locally, it works, which shouldn’t be the case if the build and release steps are running on the fly.io servers.

fly deploy --config ./apps/web/fly.toml
==> Verifying app config
Validating ./apps/web/fly.toml
✓ Configuration is valid
--> Verified app config
==> Building image
==> Building image with Depot
--> build:
[+] Building 4.6s (11/11) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                  0.5s
 => => transferring dockerfile: 660B                                                                                                                                  0.5s
 => [internal] load metadata for docker.io/imbios/bun-node:1.1.13-20.12.2-slim                                                                                        0.3s
 => [internal] load .dockerignore                                                                                                                                     0.5s
 => => transferring context: 227B                                                                                                                                     0.5s
 => [1/6] FROM docker.io/imbios/bun-node:1.1.13-20.12.2-slim@sha256:e64f0340effb4714004ed61d47b51dc77825bdf76d94b074e3e07f62b3be7dae                                  0.0s
 => => resolve docker.io/imbios/bun-node:1.1.13-20.12.2-slim@sha256:e64f0340effb4714004ed61d47b51dc77825bdf76d94b074e3e07f62b3be7dae                                  0.0s
 => [internal] load build context                                                                                                                                     0.6s
 => => transferring context: 35.51kB                                                                                                                                  0.6s
 => CACHED [2/6] WORKDIR /usr/src/app                                                                                                                                 0.0s
 => CACHED [3/6] COPY / .                                                                                                                                             0.0s
 => CACHED [4/6] RUN cd ../../                                                                                                                                        0.0s
 => CACHED [5/6] RUN bun install                                                                                                                                      0.0s
 => CACHED [6/6] RUN bunx turbo run --filter @diet-it/web build                                                                                                       0.0s
 => exporting to image                                                                                                                                                2.5s
 => => exporting layers                                                                                                                                               0.0s
 => => exporting manifest sha256:d167649560873dda226f9f7cef1e5afcae94fd8642c853d04bb56b87c61a337c                                                                     0.0s
 => => exporting config sha256:b5e3de63efb78939a4cf3c4a70f4e5523af191ab5040d59530cc96af1ada8680                                                                       0.0s
 => => pushing layers for registry.fly.io/diet-it-web:deployment-01JBAQC297T5QRD2P7QECTJRG1@sha256:d167649560873dda226f9f7cef1e5afcae94fd8642c853d04bb56b87c61a337c   2.1s
 => => pushing layer sha256:18f00d360d58b52e40a61c9983cdc4cfe0f6d3d15eefa3a3b627849d8e7ffe35                                                                          2.1s
 => => pushing layer sha256:936b1523004f68aa8c284c6fed44e9a9f4c013b1910b2eba7d28029527e71b73                                                                          2.0s
 => => pushing layer sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d                                                                          1.2s
 => => pushing layer sha256:6e514f759dea39340201e648f786abacdb92179b5e28d8c0d842687f7edfd2b5                                                                          0.5s
 => => pushing layer sha256:db707203c6fe488c042948f1f18d0282cd299c40cbfbdbb06f691181de775f79                                                                          1.5s
 => => pushing layer sha256:b5e3de63efb78939a4cf3c4a70f4e5523af191ab5040d59530cc96af1ada8680                                                                          0.4s
 => => pushing layer sha256:997104c7e95f3b62ea95d19b28c7c67f0f6e8b8c9cf53c97cb21ec4154092e51                                                                          1.3s
 => => pushing layer sha256:73ec2283050feb3c59759ea8d1b70f9e339a12b23e359382643a64ee2effdeb8                                                                          0.7s
 => => pushing layer sha256:422f515599f9ae1a6f80b8bd7effb7bb596e7432c295c21d966f83eeb720051b                                                                          1.8s
 => => pushing layer sha256:cd7eacf7d49eaa9eee427e807cad22c17e9973a43b62e8fb7f4627dbb356ebf5                                                                          1.6s
 => => pushing layer sha256:f7b75fe1f735933f47315080637abf01f87962d47f8636a07ff4535ed7a4a133                                                                          1.1s
 => => pushing layer sha256:52dc75b9385fedede91fae651f50f092aae283b6b7472dcd0b09a1a3f026f788                                                                          0.2s
 => => pushing layer sha256:7cd129a3edeb6e40e2773477b9437d6516f52b0ca8934db1829ce9e27936a971                                                                          2.1s
 => => pushing layer sha256:1d9285bbfb878216684bf3133f1f994ce3f30dc9a1083afaf9b19fc8e06a0f84                                                                          0.9s
 => => pushing layer sha256:65c6806d21961b82efb26b8018676b8f86b1ee5c025e6af0dd25b56b20e12a0f                                                                          2.1s
 => => pushing manifest for registry.fly.io/diet-it-web:deployment-01JBAQC297T5QRD2P7QECTJRG1@sha256:d167649560873dda226f9f7cef1e5afcae94fd8642c853d04bb56b87c61a337  0.4s
--> Build Summary:
--> Building image done
image: registry.fly.io/diet-it-web:deployment-01JBAQC297T5QRD2P7QECTJRG1
image size: 448 MB
Watch your deployment at https://fly.io/apps/diet-it-web/monitoring
Running diet-it-web release_command: bunx turbo run --filter @diet-it/db db:deploy
-------
 ✔ release_command 17815723a9de98 completed successfully
-------
-------
Updating existing machines in 'diet-it-web' with rolling strategy
-------
 ✔ [1/2] Cleared lease for e7843757c23d58
 ✔ [2/2] Cleared lease for 568307e6c74268
-------
Checking DNS configuration for diet-it-web.fly.dev
Visit your newly deployed app at https://diet-it-web.fly.dev/

Here’s a snapshot of my release_command machine logs:
https://fly-metrics.net/dashboard/snapshot/ciCykJJvaraY8m9QhboVS0IzO1rCK6L7
I even reran it with --remote-only to be sure it was building remotely, and the results were identical.

Something I noticed: you can see both the successful log and the failed one from GitHub in sequence in this snapshot, taken shortly after the first one:
https://fly-metrics.net/dashboard/snapshot/9DGNbmfGly1O5p6SMUVteN5Ww6bwrv6H

It looks like you are close!

Can you describe your github action? It should run fly deploy and not worry about release machines, as those are handled as part of the deploy.

This is my github action for the web. The backend one is almost the same; only the relevant arguments change:

# ./.github/workflows/fly-deploy-web.yml
name: Fly Deploy
on:
  push:
    branches:
      - main
jobs:
  deploy:
    name: Deploy app
    runs-on: ubuntu-latest
    concurrency: deploy-group    # optional: ensure only one action runs at a time
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --config ./apps/web/fly.toml
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_TOKEN_WEB }}

I don’t know why, but any args I added to the flyctl deploy --config ./apps/web/fly.toml command resulted in the github action exiting early and giving a false positive. For example --build-secret DATABASE_URL=sdgojkopsdfogka
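For what it’s worth, fly’s --build-secret values are exposed through BuildKit secret mounts rather than through build ARGs, so the Dockerfile has to opt in on the RUN step that needs them. A sketch, assuming the build step is the one that needs DATABASE_URL:

```dockerfile
# syntax = docker/dockerfile:1
# Consumes `fly deploy --build-secret DATABASE_URL=...`:
# BuildKit mounts the value at /run/secrets/DATABASE_URL for this RUN only,
# so it never ends up baked into an image layer.
RUN --mount=type=secret,id=DATABASE_URL \
    DATABASE_URL="$(cat /run/secrets/DATABASE_URL)" \
    bunx turbo run --filter @diet-it/web build
```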
This is the related Dockerfile

# Set Bun and Node version
ARG BUN_VERSION=1.1.13
ARG NODE_VERSION=20.12.2
FROM imbios/bun-node:${BUN_VERSION}-${NODE_VERSION}-slim

# Set production environment
ENV NODE_ENV="production"
ARG DATABASE_URL
ENV DATABASE_URL=${DATABASE_URL} 

# Bun app lives here
WORKDIR /usr/src/app

# Copy app files to app directory
COPY / .

# CD into the root directory
RUN cd ../../

# Install node modules
RUN bun install

# Build next js
RUN bunx turbo run --filter @diet-it/web build

# Start the server by default, this can be overwritten at runtime
EXPOSE 3000
CMD [ "bunx", "turbo", "run", "--filter", "@diet-it/web", "start" ]

I have it working locally.
And this is my fly.toml file


app = 'diet-it-web'
primary_region = 'gru'

[build]
  dockerfile = './Dockerfile'

[deploy]
  release_command = "bunx turbo run --filter @diet-it/db db:deploy"

[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = 'suspend'
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']
  
[[vm]]
  memory = '1gb'
  cpu_kind = 'shared'
  cpus = 1

Do you see anything odd?

This looks odd.

Yeah, if you ran the Docker build from the app path, it would error. But Fly lets me set the Docker build context to the monorepo root. If this were at fault, the error would happen during bun install, since it would not be able to locate my custom packages.

It still looks odd and is probably wrong. Even if it’s not, you’ve disrupted the agnostic nature of the Dockerfile by doing a quirk from Fly.

You can still do --dockerfile what/ever/Dockerfile, but your Dockerfile itself should be agnostic of anything. Just reading it is confusing:

# Set CWD
WORKDIR /usr/src/app

# Copy your mono repo, so it looks like: `/usr/src/app/packages/some_lib` and `/usr/src/app/apps/web`
COPY / .

# Now you traverse two levels up, so you're at `/usr`, and then you run `bun install` there
RUN cd ../../

This is the normal way of writing Dockerfiles for monorepos with multiple deployments.
The path ../../ is the root of my monorepo. You can’t reach outside the Docker context.
On normal projects the Docker context is the same as the Dockerfile path, but once you grow a little complexity, monorepos come in handy, and the Docker context is adjustable exactly because of situations like monorepos.
It’s not a quirk of Fly; it’s a well-documented feature of Docker itself.
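The context/Dockerfile split described above is plain docker build behavior as well: the build context and the Dockerfile path are independent arguments, for example:

```
# run from my-monorepo-root: the context is ".", the Dockerfile lives under the app
docker build -f apps/web/Dockerfile -t diet-it-web .
```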

I mean even this looks odd:
COPY / .

You are correct: in this project specifically, cd ../../ does nothing, as it runs after all the copies. I messed up by using both the absolute path and the cd step.
Usually I use one of the two:

# CD into the root directory
RUN cd ../../

# Copy app files to app directory
COPY . .

# OR

# Copy app files to app directory
COPY / .

Monorepo commands like bun install and turbo ... can be run in any directory within the monorepo with the same result.
Since the Docker context is set correctly, you can’t change directory to outside the project. I just deleted the line and tested; the outcome is the same.
If I do it the “normal” way:

ENV NODE_ENV="production"
ENV DATABASE_URL=${DATABASE_URL}

# Bun app lives here
WORKDIR /usr/src/app

# Copy app files to app directory
COPY . .

# Install node modules
RUN bun install

It would error, as the Docker build would not have access to my custom packages. For example @db, @auth, @config, etc.

You need to run the fly deploy from the root of your monorepo, then the build should work.

I am running the fly deploy from the root of my app.
As you can see on the github action: - run: flyctl deploy --config ./apps/web/fly.toml
The context is ./ (root), the fly.toml is referenced, and the Dockerfile is referenced inside the fly.toml.
Everything related to the Docker image build is run successfully, and it only breaks once it starts the “release process” from the release_command spec.
Am I mistaken and/or missing anything?

I just edited the github action to make the docker context explicit:

- run: flyctl deploy --config ./apps/backend/fly.toml
# becomes
- run: flyctl deploy . --config ./apps/backend/fly.toml

Unfortunately the outcome is exactly the same. The release-process ephemeral machine finishes with an error after Prisma is unable to find the DATABASE_URL environment variable.

Where do you put your Dockerfile? Usually there’s one for each app.

They are in each deployable app’s root:

my-monorepo-root
|---apps
|     |---web
|     |     |---Dockerfile
|     |     |---...
|     |---backend
|           |---Dockerfile
|           |---...
|---packages
...     

Every RUN command starts with the previous WORKDIR. So this command will (temporarily) set the current working directory for the duration of the RUN command, and then the next RUN command will start once again with the previous WORKDIR.
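A minimal Dockerfile illustration of that behavior:

```dockerfile
WORKDIR /usr/src/app

# `cd` only lasts for the duration of this single RUN (each RUN is a fresh shell)
RUN cd ../../ && pwd    # prints /usr

# the next RUN starts back at the WORKDIR
RUN pwd                 # prints /usr/src/app
```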

While fly deploys and runs container images differently than Docker does, the build process is exactly the same (in fact, you can use Docker’s own build locally if you want).

But back to the github action failing: if you are running the exact same command on your workstation as you specified in your github action, running it from the root of your monorepo, using the same version of flyctl, and with the same login, then there should be no difference.

The next thing I would check is the contents of your .gitignore. There may be files present on your workstation that are not available to your github action.

You are correct: I forgot to add to .dockerignore a .env file that is present locally but not in my git repository. Once I added it, the local deployment started failing again.
I think that if the problem is dotenv not reading the env variable correctly from the system, it’s solvable by manually creating the .env file somehow.
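For reference, the relevant exclusions end up looking something like this (the exact entries are an assumption about your layout):

```
# .dockerignore at the build-context root
.env
**/.env
node_modules
.git
```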

You can store your .env file as a secret on GH Actions, then echo it into ./apps/web/.env. Just be aware of any security concerns around that.
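A sketch of that step, assuming a GitHub secret named WEB_ENV_FILE holds the file contents:

```yaml
# hypothetical step in the deploy job, placed before `flyctl deploy`
- name: Recreate .env from a GitHub secret
  run: echo "${{ secrets.WEB_ENV_FILE }}" > ./apps/web/.env
```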

Cool.

dotenv sets process.env from the contents of the file. fly secrets does the same thing. You may not need to create a .env file; just set a handful of secrets.

Kudos for pushing through this. It can be hard to get an application working for the first time in an unfamiliar environment. Once you have it working, experimenting with a working system is easier.

I have my fly secrets set up correctly. DATABASE_URL is present there, and at run time it’s accessible by Prisma. The releases do work if I manually connect to the db, apply the migrations through a public connection, and deploy skipping the db:generate command.
The only explanation left, in my opinion, is that fly is not setting the env variables correctly.

Fly works when run from your workstation but not when run from a github action. What is different?