.dockerignore being ... ignored?

I’m having trouble excluding files from deployment with a .dockerignore. I’ve verified that the files are excluded when I build the Docker image locally, following this Stack Overflow answer.

But when I run flyctl deploy --remote-only, the excluded files and directories still end up on the app server. For example, the whole .git/ folder is sent, .env.local is sent, etc.

This worked for me earlier and I’m not sure what has changed.

I’m running on an M1 Mac, flyctl v0.0.303 darwin/arm64

Here is my .dockerignore (I was playing with variations of the * for .env).

Why is it not excluding .env.local and the .git/ tree? Any suggestions how to debug?


I have seen various people report issues when using an M1 Mac, all related to Docker. Hmm.

One thing you could try is what I do in my .dockerignore file: exclude everything, then add back only the things you know need deploying. That automatically skips the .git folder and anything else that might accidentally get included. The approach works like this (in the case of a Node app), inverting the match with the ! in step 2:

# 1. Ignore everything

# 2. Add files and directories that should be included

# 3. Bonus step: ignore any unnecessary files that may be inside those allowed directories in 2
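For a typical Node app, the three steps might look like this (just a sketch; the specific files and directories here are assumptions, so adjust them to your project):

```
# 1. Ignore everything
*

# 2. Add back the files and directories that should be included
!package.json
!package-lock.json
!src/
!public/

# 3. Bonus step: ignore any unnecessary files inside the directories allowed in 2
src/**/*.test.js
```

Later patterns override earlier ones, which is why the ! re-includes work after the leading * and why step 3 can re-exclude files inside an allowed directory.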

We use the same “code path” as Docker to parse exclusions from .dockerignore.

@BobalsDelicious You mentioned this worked before? Was it the same .dockerignore?

This could be a regression in a newer version of flyctl.

@jerome I thought about a potential regression too. I rolled back to flyctl v0.0.301 (found the old flyctl brew formula and invoked it locally) but am still experiencing the same issue.

Inspired by @greg , I also changed my .dockerignore to just:


and it still built, and all the files are still on the server(!). The deploy command is flyctl deploy --remote-only.

In other words, the .dockerignore is not being picked up. I must have something basic wrong but I can’t figure out what.

My fly.toml contents, in case it helps:

app = "my-app"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[build]
  builder = "heroku/buildpacks:20"

[env]
  PORT = "8080"

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"
Did it previously work without a buildpack? I think that might be the issue in flyctl. I’m looking into that now.

Thanks so much for looking at this.

I didn’t intentionally configure a buildpack; that’s what flyctl launch created. I don’t think the config has changed.

I just re-created a minimal new fly app and launched it, and still have the issue. Here’s my command sequence:

mkdir 3 && cd 3
echo "fly.toml" > .dockerignore
echo '{ "scripts": { "build": "next build", "start": "next start" } }' > package.json
npm i next
mkdir pages
flyctl launch

In flyctl I chose:

  • Name: auto-gen
  • Region: IAD
  • Create a DB: no
  • Deploy now: yes

Expected: fly.toml should not be present in /app/ directory when examined with flyctl ssh console because of the .dockerignore entry.
Got: fly.toml is present

Are you able to repro that?

(BTW it’s so cool that it’s so easy to spin up an app!)

I have fixed the issue and released a new flyctl version (0.0.304). If you upgrade and deploy again, it should work as intended.


Wow, thanks for the support. You guys are going to rule the world if you can remain this responsive as you grow 🙂

It did indeed fix it. I must have been wrong about this working previously?

Is the underlying issue this?


I’m seeing a similar issue: we created a Node.js app with flyctl launch and added a .dockerignore file in the project root that is supposed to ignore node_modules.

Yet we are seeing deploys of over 300 MB. Our app, minus node_modules, should weigh in at just under 6 MB at the moment. I’m just conscious that this might get a lot bigger as time goes on, or if we decide to migrate some of our larger apps to Fly.

Does anyone know why this might be? I’m not hugely familiar with Docker (coming from Heroku where we’ve been running most of our apps in the past), I’m just running off the documentation and community posts.

Any help appreciated!

flyctl version

flyctl v0.0.372 darwin/amd64 Commit: 33637703 BuildDate: 2022-08-12T18:24:17Z

Build Log

83d85471d9f8: Layer already exists 
3caf5f902ea2: Layer already exists 
9497805c7bd5: Layer already exists 
594695864d2a: Pushing  92.93MB/306.8MB
b380ab35c19f: Layer already exists 
036cd2b49e17: Layer already exists 
96df83a8f9c7: Layer already exists 
b9ca82ca75b8: Layer already exists 
de57c585fb49: Layer already exists 
b7b1c4143f6f: Layer already exists 
c3f11d77a5de: Layer already exists 




# OS

# Git

I’m also not a Docker expert. The original problem I posted was fixed by the Fly Guys. But here’s what I’ve learned:

Each Docker layer corresponds to one instruction in your Dockerfile (metadata-only instructions like ENV don’t add filesystem layers). So you’re pushing something that is 300MB. It could be from an apt-get update && install, it could be your node_modules, or something else.

The way I’d approach it:

  1. Check what files are included in the Docker image by your COPY . command (assuming you have a copy command like that). I like using this Stack Overflow answer from “Lucas”.

  2. Log into the running image with ssh and check for any large packages or directories you don’t want, using df -k, du -sk, etc. If you identify anything wrong, go back and adjust your recipe.

  3. Once you’re happy with the image, rearrange your Dockerfile to put the things that change most often as late as possible. Docker rebuilds (and Fly retransmits) every layer after the first one that changes. For example, if you have a COPY . ., every command after it will be rerun on any file change, generate a new layer, and go up to the server. Anything before that command won’t get rebuilt.
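As a sketch of step 3 for a Node app (the base image and filenames here are assumptions), the dependency install is placed before the COPY . . so that source edits don’t invalidate the cached npm install layer:

```dockerfile
FROM node:16-slim

WORKDIR /app

# Rarely changes: copy only the manifests, then install.
# This layer is reused from cache as long as package*.json is unchanged.
COPY package.json package-lock.json ./
RUN npm ci --production

# Changes often: copying the source last means edits here only
# rebuild (and re-upload) the layers from this point down.
COPY . .

CMD ["npm", "start"]
```

With this ordering, a source-only change rebuilds just the final COPY layer and everything after it, instead of re-running the whole npm install.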

Just to clarify: the .dockerignore file excludes files from the build context sent to the Docker daemon; it doesn’t remove anything from the Docker image sent to the Docker registry. If your fully-built app requires 300MB of Node modules to run, it’s going to need a 300MB image, even if the source files are only 6MB.


Gotcha. So it’s not possible to avoid uploading node_modules as part of flyctl deploy? It would be preferable to have them pulled from npm, like Heroku does.

It is already pulling them from npm as part of the build.

When you use a remote builder (which should be the default, or add --remote-only), flyctl uploads a (6MB) ‘build context’ from your local machine to the remote Docker daemon, and anything in .dockerignore is excluded from this context. Next, the builder executes your Dockerfile, which pulls packages from npm and builds your app, creating a (300MB) image. Finally, the builder uploads the built image to the registry. This last step is fast because the image is just getting shuffled from the remote Docker-daemon builder to the remote registry: your local machine never touches the 300MB image.


Ahh I see! Thanks for explaining that @wjordan, that makes a lot more sense now.

@wjordan that description doesn’t match what I’m seeing.

Unfortunately I’m getting pretty slow upload speeds currently, and when I try to deploy from my local machine, I get a timeout after 10 minutes with something in the neighborhood of 400MB uploaded.

I’m using --remote-only option, and I have a .dockerignore that includes the lines




but I can tell they are NOT getting ignored, because they often show up in the error output when the upload finally fails:

==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 596.4s (0/1)
 => [internal] load remote build context                                                    596.4s
ERRO[0602] Can't add file ~/git/app/.git/objects/pack/pack-2047088e5c12ab2ef55b247db87a22bcc76840f3.pack to tar: io: read/write on closed pipe
ERRO[0602] Can't close tar writer: io: read/write on closed pipe
Error failed to fetch an image or build from source: error building: unexpected EOF

I’ve been testing deployments a few different ways and can get everything uploaded in a reasonable amount of time by deleting those directories from my local repo before deploying, but that’s more of a proof of concept than a realistic long-term solution.