Since recently I have the problem that this error happens on every deployment:
> fly deploy --dockerfile Dockerfile.staging -c fly.toml
==> Verifying app config
--> Verified app config
==> Building image
Remote builder fly-builder-hidden-frost-8022 ready
==> Creating build context
--> Creating build context done
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 177.5s (0/1)
[+] Building 8.0s (29/32)
=> [internal] load remote build context 0.0s
=> copy /context / 4.3s
=> [internal] load metadata for docker.io/hexpm/elixir:1.13.0-erlang-23.3.4.2-debian-buster-20210902-slim 1.1s
=> [internal] load metadata for docker.io/library/debian:bullseye-20210902-slim 1.0s
=> [builder 1/19] FROM docker.io/hexpm/elixir:1.13.0-erlang-23.3.4.2-debian-buster-20210902-slim@sha256:574e46003059b80faa669d3ff93bdb45c0b503c752e7ab78783a37c6c748858f 0.0s
=> [stage-1 1/9] FROM docker.io/library/debian:bullseye-20210902-slim@sha256:e3ed4be20c22a1358020358331d177aa2860632f25b21681d79204ace20455a6 0.0s
=> CACHED [builder 2/19] RUN apt-get update -y && apt-get install -y build-essential git curl && apt-get clean && rm -f /var/lib/apt/lists/*_* 0.0s
=> CACHED [builder 3/19] RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" && case "${dpkgArch##*-}" in amd64) ARCH='x64';; ppc64el) ARCH='ppc64le';; s390x) ARCH='s390x';; arm64) ARCH='arm64';; armhf 0.0s
=> CACHED [builder 4/19] WORKDIR /app 0.0s
=> CACHED [builder 5/19] RUN mix local.hex --force && mix local.rebar --force 0.0s
=> CACHED [builder 6/19] COPY mix.exs mix.lock ./ 0.0s
=> CACHED [builder 7/19] RUN mix deps.get --only prod 0.0s
=> CACHED [builder 8/19] RUN mkdir config 0.0s
=> CACHED [builder 9/19] COPY config/config.exs config/staging.exs config/ 0.0s
=> CACHED [builder 10/19] RUN mix deps.compile 0.0s
=> CACHED [builder 11/19] COPY priv priv 0.0s
=> CACHED [builder 12/19] COPY assets assets 0.0s
=> CACHED [builder 13/19] COPY lib lib 0.0s
=> CACHED [builder 14/19] RUN cd assets && npm install && npm run deploy 0.0s
=> CACHED [builder 15/19] RUN mix phx.digest 0.0s
=> CACHED [builder 16/19] RUN mix compile 0.0s
=> CACHED [builder 17/19] COPY config/runtime.exs config/ 0.0s
=> CACHED [builder 18/19] COPY rel rel 0.0s
=> CACHED [builder 19/19] RUN mix release 0.0s
=> CACHED [stage-1 2/9] RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales imagemagick && apt-get clean && rm -f /var/lib/apt/lists/*_* 0.0s
=> CACHED [stage-1 3/9] RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen 0.0s
=> CACHED [stage-1 4/9] WORKDIR /app 0.0s
=> CACHED [stage-1 5/9] RUN chown nobody /app 0.0s
=> ERROR [stage-1 6/9] COPY --from=builder --chown=nobody:root /app/_build/prod/rel ./ 0.0s
------
> [stage-1 6/9] COPY --from=builder --chown=nobody:root /app/_build/prod/rel ./:
------
Error failed to fetch an image or build from source: error building: failed to compute cache key: failed to walk /data/docker/tmp/buildkit-mount104739917/app/_build/prod: lstat /data/docker/tmp/buildkit-mount104739917/app/_build/prod: no such file or directory
make: *** [Makefile:64: deploy] Error 1
It might be because I recently created a staging version of my Dockerfile, since the MIX_ENV
var never seemed to be properly passed to the Dockerfile through the toml file.
My toml file looks like this:
app = "myapp-staging"
kill_signal = "SIGTERM"
kill_timeout = 5
processes = []
[deploy]
release_command = "/app/entry eval Myapp.Release.migrate"
[env]
MIX_ENV = "staging" # <--- This does not seem to work, hence I am trying the second Dockerfile approach
[experimental]
allowed_public_ports = []
auto_rollback = true
[[services]]
http_checks = []
internal_port = 4000
processes = ["app"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 25
soft_limit = 20
type = "connections"
[[services.ports]]
handlers = ["http"]
port = 80
[[services.ports]]
handlers = ["tls", "http"]
port = 443
[[services.tcp_checks]]
grace_period = "30s" # allow some time for startup
interval = "15s"
restart_limit = 0
timeout = "2s"
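As far as I understand, values under [env] are only injected at runtime, not during the Docker build, so the builder stage never sees MIX_ENV = "staging" from there. If that is right, passing it as a build argument would presumably look something like this (just a sketch of what I think the [build.args] section does):

```toml
# fly.toml — build args are passed to `docker build`,
# unlike [env], which only applies to the running machine
[build.args]
  MIX_ENV = "staging"
```

with a corresponding `ARG MIX_ENV` declared in the Dockerfile before it is first used.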
and my Dockerfile.staging:
# Find eligible builder and runner images on Docker Hub. We use Ubuntu/Debian instead of
# Alpine to avoid DNS resolution issues in production.
#
# https://hub.docker.com/r/hexpm/elixir/tags?page=1&name=ubuntu
# https://hub.docker.com/_/ubuntu?tab=tags
#
#
# This file is based on these images:
#
# - https://hub.docker.com/r/hexpm/elixir/tags - for the build image
# - https://hub.docker.com/_/debian?tab=tags&page=1&name=bullseye-20210902-slim - for the release image
# - https://pkgs.org/ - resource for finding needed packages
# - Ex: hexpm/elixir:1.12.3-erlang-24.1.4-debian-bullseye-20210902-slim
#
ARG BUILDER_IMAGE="hexpm/elixir:1.13.0-erlang-23.3.4.2-debian-buster-20210902-slim"
ARG RUNNER_IMAGE="debian:bullseye-20210902-slim"
FROM ${BUILDER_IMAGE} as builder
# install build dependencies
RUN apt-get update -y && apt-get install -y build-essential git curl \
&& apt-get clean && rm -f /var/lib/apt/lists/*_*
# Needed for webpack (not needed when using esbuild)
ENV NODE_VERSION 16.13.1
# install node.js & npm (copied from node dockerfile)
RUN ARCH= && dpkgArch="$(dpkg --print-architecture)" \
&& case "${dpkgArch##*-}" in \
amd64) ARCH='x64';; \
ppc64el) ARCH='ppc64le';; \
s390x) ARCH='s390x';; \
arm64) ARCH='arm64';; \
armhf) ARCH='armv7l';; \
i386) ARCH='x86';; \
*) echo "unsupported architecture"; exit 1 ;; \
esac \
# gpg keys listed at https://github.com/nodejs/node#release-keys
&& set -ex \
&& for key in \
4ED778F539E3634C779C87C6D7062848A1AB005C \
94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
74F12602B6F1C4E913FAA37AD3A89613643B6201 \
71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
8FCCA13FEF1D0C2E91008E09770F7A9A5AE15600 \
C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
C82FA3AE1CBEDC6BE46B9360C43CEC45C17AB93C \
DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
A48C2BEE680E841632CD4E44F07496B3EB3C1762 \
108F52B48DB57BB0CC439B2997B01419BD92F80A \
B9E2F5981AA6E0CD28160D9FF13993A75599653C \
; do \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$key" || \
gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key" ; \
done \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-$ARCH.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-$ARCH.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-$ARCH.tar.xz" -C /usr/local --strip-components=1 --no-same-owner \
&& rm "node-v$NODE_VERSION-linux-$ARCH.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
&& ln -s /usr/local/bin/node /usr/local/bin/nodejs \
# smoke tests
&& node --version \
&& npm --version
# prepare build dir
WORKDIR /app
# install hex + rebar
RUN mix local.hex --force && \
mix local.rebar --force
# set build ENV
ENV MIX_ENV="staging"
# install mix dependencies
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
RUN mkdir config
# copy compile-time config files before we compile dependencies
# to ensure any relevant config change will trigger the dependencies
# to be re-compiled.
COPY config/config.exs config/${MIX_ENV}.exs config/
RUN mix deps.compile
COPY priv priv
# note: if your project uses a tool like https://purgecss.com/,
# which customizes asset compilation based on what it finds in
# your Elixir templates, you will need to move the asset compilation
# step down so that `lib` is available.
COPY assets assets
# Compile the release (also needed for tailwind)
COPY lib lib
# For Phoenix 1.6 and later, compile assets using esbuild
# RUN mix assets.deploy
# For Phoenix versions earlier than 1.6, compile assets npm
RUN cd assets && npm install && npm run deploy
RUN mix phx.digest
RUN mix compile
# Changes to config/runtime.exs don't require recompiling the code
COPY config/runtime.exs config/
COPY rel rel
RUN mix release
# start a new build stage so that the final image will only contain
# the compiled release and other runtime necessities
FROM ${RUNNER_IMAGE}
RUN apt-get update -y && apt-get install -y libstdc++6 openssl libncurses5 locales imagemagick \
&& apt-get clean && rm -f /var/lib/apt/lists/*_*
# Set the locale
RUN sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
WORKDIR "/app"
RUN chown nobody /app
# Only copy the final release from the build stage
COPY --from=builder --chown=nobody:root /app/_build/prod/rel ./
COPY priv/import_data priv/import_data
COPY priv/mail_templates priv/mail_templates
USER nobody
# Create a symlink to the command that starts your application. This is required
# since the release directory and start up script are named after the
# application, and we don't know that name.
RUN set -eux; \
ln -nfs /app/$(basename *)/bin/$(basename *) /app/entry
CMD /app/entry start
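One thing I noticed while pasting this: the final stage still copies from /app/_build/prod/rel, but with ENV MIX_ENV="staging" in the builder I believe mix release writes the release to _build/staging/rel instead, which would match the "no such file or directory" in the cache-key error. If that is the cause, the COPY would need to point at the staging build directory, roughly like this (sketch; hard-coding "staging" since ENV values from the builder stage don't carry over to the final stage):

```dockerfile
# final stage: `mix release` puts its output under _build/<MIX_ENV>/rel,
# so with MIX_ENV="staging" the release should live here, not under prod
COPY --from=builder --chown=nobody:root /app/_build/staging/rel ./
```

Presumably the earlier `RUN mix deps.get --only prod` would also want to become `--only staging` (or `--only ${MIX_ENV}`) so that deps scoped to the staging env are fetched, but I'm not certain that part matters for this particular error.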
Is there something wrong with my builder instance? It has been like this for two days.
I also have the feeling that the "sending the build info to docker" step takes much longer than it used to.
If you need more info, I will happily provide it. I am a bit at a loss here.