I have seen various people report issues when using an M1 Mac, all related to Docker. Hmm.
One thing you could try is what I do in my .dockerignore file: exclude everything, then add back only the things you know need deploying. Doing that automatically skips the .git folder and anything else that might accidentally get added. The approach works like this (in the case of a Node app), with the ! in step 2 inverting the ignore:
# 1. Ignore everything
# 2. Add files and directories that should be included
# 3. Bonus step: ignore any unnecessary files that may be inside those allowed directories in 2
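As a concrete sketch, the resulting .dockerignore might look like this (the allowed paths are just examples for a typical Node app; adjust them to your project):

```
# 1. Ignore everything
*

# 2. Add back files and directories that should be included
!package.json
!package-lock.json
!src/

# 3. Bonus: ignore unnecessary files inside those allowed directories
src/**/*.test.js
```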
I’m seeing a similar issue: we’ve created a Node.js app with flyctl launch, with a .dockerignore file in the root of the project that is supposed to ignore node_modules.
Yet, we are seeing deploys of over 300 MB. Our app, minus node_modules should weigh in at just under 6 MB atm. I’m just conscious that this might get a lot bigger as time goes on, or if we decide to migrate some of our larger apps to Fly.
Does anyone know why this might be? I’m not hugely familiar with Docker (coming from Heroku where we’ve been running most of our apps in the past), I’m just running off the documentation and community posts.
I’m also not a docker expert. The original problem I posted was fixed by the Fly Guys. But here’s what I’ve learned:
Each Docker layer is one instruction from your Dockerfile (although I think some minor instructions like ENV get squashed into other layers). So you’re pushing something that is 300MB. It could be from an apt-get update & install, it could be your node_modules, or something else.
The way I’d approach it:
Check what files are included in the Docker image from your COPY . command (assuming you have a copy command like that). I like using this Stack Overflow answer from “Lucas”.
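One way to do that (a sketch of the technique from that answer; it assumes a working Docker daemon, and the image/tag names are made up): build a throwaway image whose only job is to list what the build context contains after .dockerignore is applied:

```shell
# Build a throwaway image that copies in the whole build context,
# reading the Dockerfile from stdin so your real one is untouched
docker build -t context-check -f - . <<'EOF'
FROM busybox
COPY . /build-context
CMD find /build-context -type f
EOF

# List every file that made it past .dockerignore
docker run --rm context-check
```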
Log into the running instance with ssh, and look for any large packages or directories you don’t want, with df -k, du -sk, etc. as you like. If you identify anything wrong, you need to go back and adjust your recipe.
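For example (a sketch to run inside the instance; /app is an assumption about where your app lives in the image, so adjust the path):

```shell
# Overall disk usage per filesystem
df -k

# Biggest entries under the app directory, sorted numerically
# so the largest land at the bottom
du -sk /app/* 2>/dev/null | sort -n | tail -20
```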
Once you’re happy with the image, you can rearrange your Dockerfile to put the things that change most often as late as possible. Docker rebuilds (and Fly retransmits) every layer after the first one that changes. For example, if you have a COPY . ., every command after it will be rerun on any file change, generating new layers that go up to the server. Anything before that command wouldn’t get rebuilt.
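A sketch of that ordering for a Node app (the file names and npm commands are illustrative assumptions, not from this thread): dependency installation sits in early layers that rarely change, and the full source copy comes last:

```dockerfile
FROM node:18-slim
WORKDIR /app

# These layers only rebuild when the package files change
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Any source change invalidates this layer and everything after it
COPY . .
CMD ["node", "index.js"]
```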
Just to clarify: the .dockerignore file excludes files from the build context sent to the Docker daemon, it doesn’t do anything to the Docker image sent to the Docker registry. If your fully-built app requires 300MB of Node modules to run, it’s going to need a 300MB image, even if the source files are only 6MB.
It is already pulling them from npm as part of the build.
When you use a remote builder (this should be the default, or add --remote-only), flyctl uploads a (6MB) ‘build context’ from your local machine to the remote Docker daemon, and anything in .dockerignore is excluded from this context. Next, the builder executes your Dockerfile, which pulls stuff from npm and builds your app, creating a (300MB) image. Finally, the builder uploads the built image to the registry. This last step is fast because the image is just getting shuffled from the remote builder to the remote registry: your local machine never touches the 300MB image.
@wjordan that description doesn’t match what I’m seeing.
Unfortunately I’m getting pretty slow upload speeds currently, and when I try to deploy from my local machine, I get a timeout after 10 minutes with something in the neighborhood of 400MB uploaded.
I’m using --remote-only option, and I have a .dockerignore that includes the lines
but I can tell they are NOT getting ignored because they are often included in the stack trace when the upload finally crashes:
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 596.4s (0/1)
=> [internal] load remote build context 596.4s
ERRO Can't add file ~/git/app/.git/objects/pack/pack-2047088e5c12ab2ef55b247db87a22bcc76840f3.pack to tar: io: read/write on closed pipe
ERRO Can't close tar writer: io: read/write on closed pipe
Error failed to fetch an image or build from source: error building: unexpected EOF
I’ve been testing deployments a few different ways and can get everything uploaded in a reasonable amount of time by deleting those directories from my local repo before deploying, but that’s more of a proof of concept than a realistic long term solution.