I am trying to deploy a Swift server app using the following Dockerfile:
# ================================
# Build image
# ================================
FROM swift:6.0-noble AS build
# Install OS updates
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& apt-get install -y libjemalloc-dev \
&& rm -rf /var/lib/apt/lists/*
# Set up a build area
WORKDIR /build
# First just resolve dependencies.
# This creates a cached layer that can be reused
# as long as your Package.swift/Package.resolved
# files do not change.
COPY ./Package.* ./
RUN swift package resolve
# Copy entire repo into container
COPY . .
# Build everything, with optimizations, with static linking, and using jemalloc
RUN swift build -c release \
--static-swift-stdlib \
-Xlinker -ljemalloc
# Switch to the staging area
WORKDIR /staging
# Copy main executable to staging area
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/App" ./
# Copy static swift backtracer binary to staging area
RUN cp "/usr/libexec/swift/linux/swift-backtrace-static" ./
# Copy resources bundled by SPM to staging area
RUN find -L "$(swift build --package-path /build -c release --show-bin-path)/" -regex '.*\.resources$' -exec cp -Ra {} ./ \;
# Copy any resources from the public directory, if it exists
# Ensure that by default, neither the directory nor any of its contents are writable.
RUN [ -d /build/public ] && { mv /build/public ./public && chmod -R a-w ./public; } || true
# ================================
# Run image
# ================================
FROM ubuntu:noble
# Make sure all system packages are up to date, and install only essential packages.
RUN export DEBIAN_FRONTEND=noninteractive DEBCONF_NONINTERACTIVE_SEEN=true \
&& apt-get -q update \
&& apt-get -q dist-upgrade -y \
&& apt-get -q install -y \
libjemalloc2 \
ca-certificates \
tzdata \
# If your app or its dependencies import FoundationNetworking, also install `libcurl4`.
# libcurl4 \
# If your app or its dependencies import FoundationXML, also install `libxml2`.
# libxml2 \
&& rm -r /var/lib/apt/lists/*
# Create a hummingbird user and group with /app as its home directory
RUN useradd --user-group --create-home --system --skel /dev/null --home-dir /app hummingbird
# Switch to the new home directory
WORKDIR /app
# Copy built executable and any staged resources from builder
COPY --from=build --chown=hummingbird:hummingbird /staging /app
# Provide configuration needed by the built-in crash reporter and some sensible default behaviors.
ENV SWIFT_BACKTRACE=enable=yes,sanitize=yes,threads=all,images=all,interactive=no,swift-backtrace=./swift-backtrace-static
# Ensure all further commands run as the hummingbird user
USER hummingbird:hummingbird
# Let Docker bind to port 8080
EXPOSE 8080
# Start the Hummingbird service when the image is run, default to listening on 8080 in production environment
ENTRYPOINT ["./App"]
CMD ["--hostname", "0.0.0.0", "--port", "8080"]
When this is executed, I keep getting the following error:
#18 [build 9/12] RUN cp "$(swift build --package-path /build -c release --show-bin-path)/App" ./
#18 1.009 cp: cannot stat '/build/.build/x86_64-unknown-linux-gnu/release/App': No such file or directory
#18 ERROR: process "/bin/sh -c cp \"$(swift build --package-path /build -c release --show-bin-path)/App\" ./" did not complete successfully: exit code: 1
Does anyone see any glaring mistakes with the Dockerfile? I am wondering if this is an issue with using an unsupported OS on the server.
This looks like the problem. cp can’t find anything at that path, so either the build put the binary somewhere else or it isn’t named App. (For what it’s worth, the “unknown” in x86_64-unknown-linux-gnu is just the vendor field of the standard target triple, not an error.) Try adding a RUN ls -l /build/.build/ line before this to see what is actually in there; once you find out, you can probably hard-wire the right path into the cp command.
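For example, a throwaway debugging step just before the failing cp (remove it again once you’ve found the path):
# Temporary: list the build products to see where (and under what name) the binary landed
RUN ls -l /build/.build/
RUN ls -l "$(swift build --package-path /build -c release --show-bin-path)/"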
CACHED is Docker output, so I wonder if the build failed, or this is not the right folder. What was the build output of the RUN swift build step? If it isn’t very illuminating, consider adding a verbose flag to it.
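For instance, swift build accepts -v/--verbose; extending the existing build step from the Dockerfile above:
# Build with verbose output to see exactly what gets compiled and where it goes
# (note: this only helps if the layer isn't served from Docker's cache)
RUN swift build -c release \
    --static-swift-stdlib \
    -Xlinker -ljemalloc \
    -v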
Ah, that’s all cached by Docker, so there is no useful build output. Assuming you’re using docker build, add --no-cache to that, so it builds from scratch.
Hmm, where do I add this? You can see my entire Dockerfile in the first post. I’m really not all that familiar with deploying apps, hence the struggles here.
I’m triggering the deployment from the Fly.io dashboard. It’s just hooked up to my main branch and, from what I can tell, it checks whether a Dockerfile is present.
Ah, OK. There should be a way to do a build that empties the cache. If you can’t see one, you could install Docker locally and run docker build -t my-image-name . in your project (assuming the Dockerfile is at the root of your repo).
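Concretely, something like this, where my-image-name is just a placeholder tag:
# Build the image from the repo root, bypassing all cached layers
docker build --no-cache -t my-image-name .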
Or, if you have flyctl installed locally, consider flyctl deploy, though you’ll also need a fly.toml for that (this is the most established way of deploying on Fly). A deploy done this way runs a remote build, and it also supports --no-cache.
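For example, assuming you already have a fly.toml next to your Dockerfile:
# Deploy with a remote build, skipping Fly's build cache
flyctl deploy --no-cache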
Is there an example fly.toml file I can check for reference? I do have flyctl installed. The Dockerfile I am using is mine and I have it locally, so I can try running the command you mentioned to clear the cache and see what it outputs.
No worries. I seem to recall it being mentioned on this forum that the GitHub deployment system effectively writes a TOML file for you, though it doesn’t always get it right. Personally, I would install Docker locally; then you can just do a build without an additional config file.
You can do this on Linux, or on Windows with WSL. On a Mac the architecture may be different, but it may still be worth a go. You’re really just looking to see where the binary is compiled, so you can copy it in a subsequent command. That will let you fix the Dockerfile and then carry on using remote builds.
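Coming back to your fly.toml question: a minimal one for this setup might look roughly like the following sketch (not authoritative; the app name and region are placeholders, and internal_port matches the 8080 your Dockerfile exposes):
# fly.toml - minimal sketch; adjust the app name and region to your own
app = "my-hummingbird-app"
primary_region = "ams"

[build]
  dockerfile = "Dockerfile"

[http_service]
  internal_port = 8080
  force_https = true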
So the issue was the project setup. The Dockerfile copies the built executable as App and uses ./App as the entry point, but I had renamed the executable target; now that I’ve updated those references, everything runs just fine.
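For anyone who finds this later: the executable name appears in two places in the Dockerfile, and both must match the executable target in your Package.swift. Using a hypothetical target named MyServer:
# Copy the main executable to the staging area; the name must match your executable target
RUN cp "$(swift build --package-path /build -c release --show-bin-path)/MyServer" ./
# Run that same binary when the container starts
ENTRYPOINT ["./MyServer"]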