Build images with nixpacks

Our friends over at Railway created a very cool alternative to buildpacks called nixpacks.

We’ve now added flags for building your images with nixpacks directly from flyctl. Available from v0.0.361.

To use it, add the appropriate flag to your commands:

fly deploy --nixpacks

or, for machines:

fly machine run --build-nixpacks .

In our testing, this works pretty well as a replacement for buildpacks in many cases. It’s way faster and produces leaner images.

:warning: You will need the docker CLI installed locally even if your build goes through a remote builder.
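If you want a quick sanity check that it’s there, checking the client version is enough; for remote builds only the docker client binary should matter, since the daemon runs on the builder:

docker version --format '{{.Client.Version}}'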

There are still some rough edges, but it’s good enough to give it a try!

Technical bits

  • flyctl pulls the latest nixpacks binary into ~/.fly/bin if it doesn’t already exist.
  • if using remote builds, it creates a tunnel to a remote builder and listens on a unix socket in a temporary location
  • everything else is the same; it just uses nixpacks to build your image (a rough manual equivalent is sketched below)
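For reference, here’s a sketch of roughly the same flow done by hand with a local Docker daemon; the app name and tag are placeholders, and the real integration proxies the remote builder’s Docker socket instead of pushing from your own machine:

nixpacks build . --name registry.fly.io/my-app:manual    # build the image locally with nixpacks
fly auth docker                                           # let the local docker CLI push to the Fly registry
docker push registry.fly.io/my-app:manual
fly deploy --image registry.fly.io/my-app:manual          # deploy the prebuilt image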

Just tried this but got an error. It works without the --nixpacks flag.

Is it because I am using the --remote-only flag?

No, that seems like a bug in flyctl!

We probably just need to make sure the directory exists before trying to download nixpacks.
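Until a fix ships, creating the directory by hand should be enough to work around it, assuming the missing directory is the only problem:

mkdir -p ~/.fly/bin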

Ok cool, I’ll give it a shot once you’ve released a new version.

It’s fixed in the prerelease now:

curl -L https://fly.io/install.sh | sh -s pre

This is now released proper.


Not a huge difference in CI time; actually slower in this case. It is a simple Node.js app. Any other benefits I should be looking for?

I have a feeling most of that time was uploading a very large image from scratch. It looks like the buildpack was cached on your builder, too, so that’s the best case for the buildpack.

I’m curious what happens if you run it a second time?

Giant images are a problem with nixpacks. We’re thinking about how to optimize that.
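If you want to see where the bulk comes from, one option is to pull the image and look at its layer sizes; the app name and tag below are placeholders, and this assumes your local docker CLI is logged in to the Fly registry:

fly auth docker
docker pull registry.fly.io/your-app:deployment-tag
docker history registry.fly.io/your-app:deployment-tag    # shows per-layer sizes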

Ran it again. The job took 3m 57s this time around. But that is the build and deploy times combined, so it’s unclear how much of it was just the build.

Also, wow: the logs report image size: 1.9 GB.

All output:

==> Verifying app config
--> Verified app config
==> Building image
! Bin directory /home/runner/.fly/bin is not in your $PATH
> Installing nixpacks, please wait…
      +--------------+
     /|             /|
    / |            / |
   *--+-----------*  |
   |  |           |  |                Nixpacks is now installed
   |  |           |  |             Run `nixpacks help` for commands
   |  |           |  |
   |  +-----------+--+
   | /            | /
   |/             |/
   *--------------*
Waiting for remote builder fly-builder-empty-wildflower-2260...
Remote builder fly-builder-empty-wildflower-2260 ready
Proxying local port /tmp/995394652/docker.sock to remote [fdaa:0:6ed4:a7b:8a0f:58ad:45bf:2]:2375
╔═══════════ Nixpacks v0.2.11 ══════════╗
║ Packages   │ nodejs, pnpm-7_x         ║
║───────────────────────────────────────║
║ Install    │ pnpm i --frozen-lockfile ║
║───────────────────────────────────────║
║ Build      │ pnpm run build           ║
║───────────────────────────────────────║
║ Start      │ pnpm run start           ║
╚═══════════════════════════════════════╝
#2 [internal] load .dockerignore
#2 sha256:aab686c6736fdc1b6afe217896c03343cdffbfbae1dd787b225c887bb0afd516
#2 transferring context: 2B 0.0s done
#2 DONE 0.1s
#1 [internal] load build definition from Dockerfile
#1 sha256:26de3cb9551b0e5a5ec37fb0a101ff1e4b84c0013a299711f06f7ac4d0c7d295
#1 transferring dockerfile: 597B 0.1s done
#1 DONE 0.1s
#3 [internal] load metadata for ghcr.io/railwayapp/nixpacks:debian-1657820962
#3 sha256:354f8716cede4a6a1c938bfef76a0ac4f6256b0f50b45ebdb4783f35187f4b1a
#3 DONE 0.5s
#4 [stage-0 1/8] FROM ghcr.io/railwayapp/nixpacks:debian-1657820962@sha256:f116f75d26a1ad18c99138f10f3efd6564e88960854a3bbd1f2625396d22fcf9
#4 sha256:6b1e7f50cdc2d1a8391e9341249352b1fb6e3c42cfca2cbb97e08e25e6eff7d5
#4 DONE 0.0s
#6 [internal] load build context
#6 sha256:6821e840f266152831d3ac296d6764574ad21f607018e1092e26667536379cc3
#6 transferring context: 126.14kB 0.2s done
#6 DONE 0.2s
#5 [stage-0 2/8] WORKDIR /app/
#5 sha256:2bfcb9bf95e1cd3bcfb335c74b0814921891cd9123caf8ffef83c78193a92b5a
#5 CACHED
#7 [stage-0 3/8] COPY environment.nix /app/
#7 sha256:5e6e1557abe8e7cb4f03f14747d85eafff818282e821cdfa95ab0256d07927bc
#7 CACHED
#8 [stage-0 4/8] RUN nix-env -if environment.nix
#8 sha256:c53e750c8adcfb91e7713a1afc6e7ef4f6a570f29292d5690e0fddaa3d26de5a
#8 CACHED
#9 [stage-0 5/8] COPY . /app/
#9 sha256:cc6baf066430ccd648f50bbacb19695b205899a68e75590605675d0dfd2f33d2
#9 DONE 0.0s
#10 [stage-0 6/8] RUN --mount=type=cache,id=B2re9NjHLWg-/root/cache/pnpm,target=/root/.cache/pnpm pnpm i --frozen-lockfile
#10 sha256:c2f1a5662d47e2bf5ebd0ccbdc7e02fde75ab26ba1c10fdc3dac799cbb8d23c8
#10 1.275 Lockfile is up-to-date, resolution step is skipped
#10 1.299 Progress: resolved 1, reused 0, downloaded 0, added 0
#10 1.349 Packages: +218
#10 1.349 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#10 2.310 Progress: resolved 218, reused 0, downloaded 0, added 0
#10 2.592 Packages are hard linked from the content-addressable store to the virtual store.
#10 2.592   Content-addressable store is at: /root/.local/share/pnpm/store/v3
#10 2.592   Virtual store is at:             node_modules/.pnpm
#10 3.313 Progress: resolved 218, reused 0, downloaded 52, added 48
#10 4.317 Progress: resolved 218, reused 0, downloaded 134, added 130
#10 5.317 Progress: resolved 218, reused 0, downloaded 213, added 213
#10 6.319 Progress: resolved 218, reused 0, downloaded 215, added 215
#10 7.320 Progress: resolved 218, reused 0, downloaded 216, added 216
#10 8.353 Progress: resolved 218, reused 0, downloaded 218, added 218, done
#10 8.541 .../node_modules/@prisma/engines postinstall$ node scripts/postinstall.js
#10 8.561 .../nodemon@2.0.19/node_modules/nodemon postinstall$ node bin/postinstall || exit 0
#10 8.619 .../nodemon@2.0.19/node_modules/nodemon postinstall: Love nodemon? You can now support the project via the open collective:
#10 8.619 .../nodemon@2.0.19/node_modules/nodemon postinstall:  > https://opencollective.com/nodemon/donate
#10 8.622 .../nodemon@2.0.19/node_modules/nodemon postinstall: Done
#10 10.92 .../node_modules/@prisma/engines postinstall: Done
#10 11.45 .../prisma@4.1.1/node_modules/prisma preinstall$ node scripts/preinstall-entry.js
#10 11.52 .../prisma@4.1.1/node_modules/prisma preinstall: Done
#10 11.53 .../prisma@4.1.1/node_modules/prisma install$ node scripts/install-entry.js
#10 11.58 .../prisma@4.1.1/node_modules/prisma install: Done
#10 11.70 .../node_modules/@prisma/client postinstall$ node scripts/postinstall.js
#10 12.46 .../node_modules/@prisma/client postinstall: Prisma schema loaded from prisma/schema.prisma
#10 13.30 .../node_modules/@prisma/client postinstall: ✔ Generated Prisma Client (4.1.1 | library) to ./node_modules/.pnpm/@prisma+client@4.1.1_prisma@4.1.1/node_modules/@prisma/client in 134ms
#10 13.30 .../node_modules/@prisma/client postinstall: You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
#10 13.30 .../node_modules/@prisma/client postinstall: ```
#10 13.30 .../node_modules/@prisma/client postinstall: import { PrismaClient } from '@prisma/client'
#10 13.30 .../node_modules/@prisma/client postinstall: const prisma = new PrismaClient()
#10 13.30 .../node_modules/@prisma/client postinstall: ```
#10 13.38 .../node_modules/@prisma/client postinstall: Done
#10 13.52
#10 13.52 dependencies:
#10 13.52 + @prisma/client 4.1.1
#10 13.52 + dotenv-cli 6.0.0
#10 13.52 + fastify 4.3.0
#10 13.52 + graphql 16.5.0
#10 13.52 + mercurius 10.1.0
#10 13.52 + nexus 1.3.0
#10 13.52
#10 13.52 devDependencies:
#10 13.52 + @swc/core 1.2.222
#10 13.52 + @tsconfig/node16 1.0.3
#10 13.52 + @types/node 17.0.45
#10 13.52 + node-dev 7.4.3
#10 13.52 + nodemon 2.0.19
#10 13.52 + prisma 4.1.1
#10 13.52 + ts-node 10.9.1
#10 13.52 + ts-node-dev 2.0.0
#10 13.52 + typescript 4.7.4
#10 13.52
#10 DONE 13.7s
#11 [stage-0 7/8] RUN --mount=type=cache,id=B2re9NjHLWg-node_modules/cache,target=node_modules/.cache pnpm run build
#11 sha256:13821e347929ffa9e87d732f664fd2308ca3c978c5cd244ffc434ba8a68f2ba0
#11 0.799
#11 0.799 > pdp-spike-fly@1.0.0 build /app
#11 0.799 > tsc --project tsconfig.build.json
#11 0.799
#11 DONE 2.4s
#12 [stage-0 8/8] COPY . /app/
#12 sha256:c1339b89c58d3229047cc9d85c8eec60611b688d838c00de7191731283a2347d
#12 DONE 0.0s
#13 exporting to image
#13 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#13 exporting layers
#13 exporting layers 5.3s done
=== Successfully Built! ===
Run:
  docker run -it registry.fly.io/pdp-spike-fly:deployment-1659553029
#13 writing image sha256:3be1ca796226e09a8c9deaf57c0f3f2e7e5ada50cae31fdff33666569bffe04c done
#13 naming to registry.fly.io/pdp-spike-fly:deployment-1659553029 done
#13 DONE 5.3s
The push refers to repository [registry.fly.io/pdp-spike-fly]
2c9d21a0236f: Preparing
971742ecec62: Preparing
22b20dc49eb1: Preparing
01b653a44291: Preparing
ac291353f13b: Preparing
27eab16116e5: Preparing
b996c014a035: Preparing
bf4f0759f2ab: Preparing
10f49c04fbf0: Preparing
a363b4ef65d1: Preparing
08249ce7456a: Preparing
a363b4ef65d1: Waiting
08249ce7456a: Waiting
27eab16116e5: Waiting
b996c014a035: Waiting
bf4f0759f2ab: Waiting
10f49c04fbf0: Waiting
ac291353f13b: Layer already exists
27eab16116e5: Layer already exists
b996c014a035: Layer already exists
bf4f0759f2ab: Layer already exists
10f49c04fbf0: Layer already exists
a363b4ef65d1: Layer already exists
08249ce7456a: Layer already exists
971742ecec62: Pushed
01b653a44291: Pushed
2c9d21a0236f: Pushed
22b20dc49eb1: Pushed
deployment-1659553029: digest: sha256:f0a4e54ac03eb413a2124d06bd9793cc10c5b51973c761c2c8ba4f7041115041 size: 2627
image: registry.fly.io/pdp-spike-fly:deployment-1659553029
image size: 1.9 GB
==> Creating release
--> release v32 created

--> You can detach the terminal anytime without stopping the deployment
==> Monitoring deployment
v32 is being deployed
76a90700: yyz pending

Seems bugged on Windows.

With --nixpacks added, deploys fail; remove the flag and deploys work.

Just tried this for my brand new, very vanilla Phoenix app. Literally just generated and fly launch’d.

name: Fly Deploy
on:
  push:
    branches:
      - master
env:
  FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
jobs:
  deploy:
      name: Deploy app
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v2
        - uses: superfly/flyctl-actions/setup-flyctl@master
        - run: flyctl deploy --remote-only --nixpacks

Here are the logs:

	 Configuring firecracker
	 Starting virtual machine
	 Starting init (commit: c86b3dc)...
	 Setting up swapspace version 1, size = 512 MiB (536866816 bytes)
	 no label, UUID=a5f22059-3b19-44fa-9bb1-cfa54407e1fd
	 Preparing to run: `/app/bin/migrate` as root
	 Error: UnhandledIoError(Os { code: 2, kind: NotFound, message: "No such file or directory" })
	 [    0.131206] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
	 [    0.132337] CPU: 0 PID: 1 Comm: init Not tainted 5.12.2 #1
	 [    0.133006] Call Trace:
	 [    0.133264]  show_stack+0x52/0x58
	 [    0.133619]  dump_stack+0x6b/0x86
	 [    0.133976]  panic+0xfb/0x2bc
	 [    0.134359]  do_exit.cold+0x60/0xb0
	 [    0.134672]  do_group_exit+0x3b/0xb0
	 [    0.135046]  __x64_sys_exit_group+0x18/0x20
	 [    0.135462]  do_syscall_64+0x38/0x50
	 [    0.135820]  entry_SYSCALL_64_after_hwframe+0x44/0xae
	 [    0.136330] RIP: 0033:0x6ff9c5
	 [    0.136837] Code: eb ef 48 8b 76 28 e9 76 05 00 00 64 48 8b 04 25 00 00 00 00 48 8b b0 b0 00 00 00 e9 af ff ff ff 48 63 ff b8 e7 00 00 00 0f 05 <ba> 3c 00 00 00 48 89 d0 0f 05 eb f9 66 2e 0f 1f 84 00 00 00 00 00
	 [    0.138637] RSP: 002b:00007ffc720824a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
	 [    0.139413] RAX: ffffffffffffffda RBX: 00000000004f0ed0 RCX: 00000000006ff9c5
	 [    0.140112] RDX: 00000000009cf4c0 RSI: 0000000000000000 RDI: 0000000000000001
	 [    0.140825] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
	 [    0.141569] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc72082508
	 [    0.142294] R13: 00007ffc72082518 R14: 0000000000000000 R15: 0000000000000000
	 [    0.143058] Kernel Offset: disabled
	 [    0.143420] Rebooting in 1 seconds..
==> Monitoring deployment

v5 is being deployed
699237cc: mia pending
699237cc: mia running healthy
699237cc: mia running unhealthy [health checks: 1 total]
699237cc: mia running unhealthy [health checks: 1 total]
Failed Instances

Failure #1

--> v5 failed - Failed due to unhealthy allocations - rolling back to job version 4 and deploying as v6 
Instance
ID      	PROCESS	VERSION	REGION	DESIRED	STATUS 	HEALTH CHECKS	RESTARTS	CREATED 

699237cc	       	5      	mia   	run    	running	1 total      	2       	15s ago	
--> Troubleshooting guide at https://fly.io/docs/getting-started/troubleshooting/

Error abort
Recent Events

TIMESTAMP           	TYPE      	MESSAGE                         

2022-08-10T04:48:09Z	Received  	Task received by client        	
2022-08-10T04:48:09Z	Task Setup	Building Task Directory        	
2022-08-10T04:48:11Z	Started   	Task started by client         	
2022-08-10T04:48:13Z	Terminated	Exit Code: 0                   	
2022-08-10T04:48:13Z	Restarting	Task restarting in 1.190761974s	
2022-08-10T04:48:19Z	Started   	Task started by client         	
2022-08-10T04:48:21Z	Terminated	Exit Code: 0                   	
2022-08-10T04:48:21Z	Restarting	Task restarting in 1.224775359s	
2022-08-10T04:48:27Z	Started   	Task started by client         	

2022-08-10T04:48:11Z   [info]Starting virtual machine
2022-08-10T04:48:11Z   [info]Starting init (commit: c86b3dc)...
2022-08-10T04:48:11Z   [info]Preparing to run: `bash` as root
2022-08-10T04:48:11Z   [info]2022/08/10 04:48:11 listening on [fdaa:0:570e:a7b:2c00:6992:37cc:2]:22 (DNS: [fdaa::3]:53)
2022-08-10T04:48:12Z   [info]Main child exited normally with code: 0
2022-08-10T04:48:12Z   [info]Starting clean up.
2022-08-10T04:48:18Z   [info]Starting instance
2022-08-10T04:48:18Z   [info]Configuring virtual machine
2022-08-10T04:48:18Z   [info]Pulling container image
2022-08-10T04:48:19Z   [info]Unpacking image
2022-08-10T04:48:19Z   [info]Preparing kernel init
2022-08-10T04:48:19Z   [info]Configuring firecracker
2022-08-10T04:48:19Z   [info]Starting virtual machine
2022-08-10T04:48:19Z   [info]Starting init (commit: c86b3dc)...
2022-08-10T04:48:19Z   [info]Preparing to run: `bash` as root
2022-08-10T04:48:19Z   [info]2022/08/10 04:48:19 listening on [fdaa:0:570e:a7b:2c00:6992:37cc:2]:22 (DNS: [fdaa::3]:53)
2022-08-10T04:48:20Z   [info]Main child exited normally with code: 0
2022-08-10T04:48:20Z   [info]Starting clean up.
2022-08-10T04:48:26Z   [info]Starting instance
2022-08-10T04:48:27Z   [info]Configuring virtual machine
2022-08-10T04:48:27Z   [info]Pulling container image
2022-08-10T04:48:27Z   [info]Unpacking image
2022-08-10T04:48:27Z   [info]Preparing kernel init
2022-08-10T04:48:27Z   [info]Configuring firecracker
2022-08-10T04:48:27Z   [info]Starting virtual machine
2022-08-10T04:48:27Z   [info]Starting init (commit: c86b3dc)...
2022-08-10T04:48:27Z   [info]Preparing to run: `bash` as root
2022-08-10T04:48:27Z   [info]2022/08/10 04:48:27 listening on [fdaa:0:570e:a7b:2c00:6992:37cc:2]:22 (DNS: [fdaa::3]:53)
2022-08-10T04:48:28Z   [info]Main child exited normally with code: 0
2022-08-10T04:48:28Z   [info]Starting clean up.
Error: Process completed with exit code 1.

My Dockerfile:

# Find eligible builder and runner images on Docker Hub. We use Ubuntu/Debian instead of
# Alpine to avoid DNS resolution issues in production.
#
# https://hub.docker.com/r/hexpm/elixir/tags?page=1&name=ubuntu
# https://hub.docker.com/_/ubuntu?tab=tags
#
#
# This file is based on these images:
#
#   - https://hub.docker.com/r/hexpm/elixir/tags - for the build image
#   - https://hub.docker.com/_/debian?tab=tags&page=1&name=bullseye-20210902-slim - for the release image
#   - https://pkgs.org/ - resource for finding needed packages
#   - Ex: hexpm/elixir:1.13.4-erlang-24.3.4-debian-bullseye-20210902-slim
#
ARG ELIXIR_VERSION=1.13.4
ARG OTP_VERSION=24.3.4
ARG DEBIAN_VERSION=bullseye-20210902-slim

ARG BUILDER_IMAGE="hexpm/elixir:${ELIXIR_VERSION}-erlang-${OTP_VERSION}-debian-${DEBIAN_VERSION}"
ARG RUNNER_IMAGE="debian:${DEBIAN_VERSION}"

FROM ${BUILDER_IMAGE} as builder

# install build dependencies
RUN apt-get update -y && apt-get install -y build-essential git \
    && apt-get clean && rm -f /var/lib/apt/lists/*_*

# prepare build dir
WORKDIR /app

# install hex + rebar
RUN mix local.hex --force && \
    mix local.rebar --force

# set build ENV
ENV MIX_ENV="prod"

# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only $MIX_ENV
RUN mkdir config

# copy compile-time config files before we compile dependencies
# to ensure any relevant config change will trigger the dependencies
# to be re-compiled.
COPY config/config.exs config/${MIX_ENV}.exs config/
RUN mix deps.compile

COPY priv priv

COPY lib lib

COPY assets assets

# compile assets
RUN mix assets.deploy

# Compile the release
RUN mix compile

# Changes to config/runtime.exs don't require recompiling the code
COPY config/runtime.exs config/

COPY rel rel
RUN mix release

# start a new build stage so that the final image will only contain
# the compiled release and other runtime necessities
FROM ${RUNNER_IMAGE}

RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales \
  && apt-get clean && rm -f /var/lib/apt/lists/*_*

# Set the locale
RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen

ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8

WORKDIR "/app"
RUN chown nobody /app

# set runner ENV
ENV MIX_ENV="prod"

# Only copy the final release from the build stage
COPY --from=builder --chown=nobody:root /app/_build/${MIX_ENV}/rel/nuki ./

USER nobody

CMD ["/app/bin/server"]
# Appended by flyctl
ENV ECTO_IPV6 true
ENV ERL_AFLAGS "-proto_dist inet6_tcp"

Keep in mind, this was totally autogenerated by fly launch; I didn’t edit this file at all.

Nixpacks doesn’t have Elixir support yet, but there’s been some work towards it.
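In the meantime, dropping the flag should fall back to the Dockerfile that fly launch generated:

flyctl deploy --remote-only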


How can I pass configuration to nixpacks for things like additional libraries? I don’t see any way to pass the --libs flag, and none of the options I can think of for passing the NIXPACKS_LIBS env var (or NIXPACKS_APT_PKGS) have any effect.

The environment variables might work.

It would be a good idea to add a way to pass extra flags through to nixpacks, with something like: -- --libs.

I tried setting the env vars before invoking flyctl deploy; I tried passing --env; I tried passing --build-arg; I even tried --build-secret. None of them worked. I also tried putting it in the [env] section of fly.toml, though I assume that’s identical to the --env flag.


Our nixpacks integration is very basic.

It sounds like we should pass through any NIXPACKS_ prefixed env vars to the nixpacks binary when invoking it.

Essentially modifying this: flyctl/nixpacks_builder.go at 5b2a4e09fb29c2dd89fd39c5755b984d9abefea8 · superfly/flyctl · GitHub

It shouldn’t be incredibly hard, but I can’t guarantee we’ll get to it fast.

I’ve made a pre-release that should pass through all env vars prefixed with NIXPACKS_ to the nixpacks command.
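Assuming the pass-through works as intended, setting the variables on the flyctl invocation should be enough to reach nixpacks (the library and package names here are only examples):

NIXPACKS_LIBS="openssl" NIXPACKS_APT_PKGS="ffmpeg" flyctl deploy --nixpacks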

Depending on how you installed flyctl to begin with, you have a few options:

  • If you installed via Homebrew (i.e. you’re on macOS), it might be easier to download the release binary from Release v0.0.388-pre-1 · superfly/flyctl · GitHub and just run it directly, e.g. ./flyctl.
  • If you’ve installed it from our install script via curl, you can do: curl -L https://fly.io/install.sh | sh -s pre to get a prerelease.

Let me know if it works out for you!


My initial build context transfer went from 74 MB to 200-300 MB before even getting started on the build. This step is quite slow on a 20 Mbit/s uplink.

I have a Bun/Next.js app, and nixpacks added an npm ci step to the build. This failed, even though my Dockerfile’s first step is to install Node.js and npm. So for me, nixpacks is not currently viable. As a side note, I’m also testing Railway and ran into other problems trying to manually specify my build steps in my Dockerfile, so maybe I’m not using it right in the first place.

This should be fixed on the latest Nixpacks version.
We respect .gitignore now.
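If the context transfer is still large after upgrading, it’s worth confirming the heavy directories really are covered by .gitignore; for a Node/Next.js app, for example (the paths are just examples):

git check-ignore -v node_modules .next    # prints the matching ignore rule for each path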


Running fly deploy --nixpacks --remote-only in 3 ways:

  1. When I don’t have Docker installed on my Apple ARM64 machine, I get the error below.
  2. When I do have Docker installed on the machine, it succeeds.
  3. From a GitHub Action using superfly/flyctl-actions/setup-flyctl@master, it succeeds.
==> Verifying app config
--> Verified app config
==> Building image
Remote builder fly-builder-name ready
Proxying local port /var/folders/random-id/docker.sock to remote [address]:port

╔═══════ Nixpacks v0.5.6 ══════╗
║ setup      │ go_1_18         ║
║──────────────────────────────║
║ install    │ go mod download ║
║──────────────────────────────║
║ build      │ go build -o out ║
║──────────────────────────────║
║ start      │ ./out           ║
╚══════════════════════════════╝

Error: Please install Docker to build the app https://docs.docker.com/engine/install/
Error failed to fetch an image or build from source: exit status 1