Fly build hangs sending context to docker daemon ([internal] load remote build context timeout)

I’m attempting to build my app with

 fly deploy --build-target=prod --build-only

and my build appears to hang:

❯ fly deploy --build-target=prod --build-only
==> Verifying app config
--> Verified app config
==> Building image
Remote builder fly-builder-broken-wood-6619 ready
==> Creating build context
--> Creating build context done
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 597.5s (0/1)
 => [internal] load remote build context                                                                                                                                                                                             597.5s
ERRO[0617] Can't add file /home/jackevans/code/feedreader/.git/objects/a9/1f0f6438a05b3f1da72c1e2180ec9499da8c60 to tar: io: read/write on closed pipe
ERRO[0617] Can't close tar writer: io: read/write on closed pipe
Error failed to fetch an image or build from source: error building: unexpected EOF

Checking the logs for the assigned builder instance, it looks like the build is indeed timing out:

"Deadline reached without docker build"

2022-07-30T12:10:39.954 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:39.954673107Z" level=debug msg="Calling GET /v1.41/containers/json?filters=%7B%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"

2022-07-30T12:10:40.956 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:40.956207629Z" level=debug msg="checking docker activity"

2022-07-30T12:10:40.957 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:40.956769485Z" level=debug msg="Calling GET /v1.41/containers/json?filters=%7B%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"

2022-07-30T12:10:41.959 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:41.958941423Z" level=debug msg="checking docker activity"

2022-07-30T12:10:41.959 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:41.959550969Z" level=debug msg="Calling GET /v1.41/containers/json?filters=%7B%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"

2022-07-30T12:10:42.501 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.500714142Z" level=info msg="Deadline reached without docker build"

2022-07-30T12:10:42.501 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.500797043Z" level=info msg="shutting down"

2022-07-30T12:10:42.503 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503173149Z" level=info msg="gracefully stopped\n"

2022-07-30T12:10:42.503 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503240269Z" level=debug msg="disk space used: 1.03%"

2022-07-30T12:10:42.503 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503263570Z" level=info msg="Waiting for dockerd to exit"

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503641664Z" level=info msg="Processing signal 'interrupt'"

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503953087Z" level=debug msg="daemon configured with a 15 seconds minimum shutdown timeout"

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.503984397Z" level=debug msg="start clean shutdown of all containers with a 15 seconds timeout..."

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.504058738Z" level=debug msg="found 0 orphan layers"

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.504522093Z" level=debug msg="Unix socket /var/run/docker/libnetwork/1088d5552e1c.sock doesn't exist. cannot accept client connections"

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.504565784Z" level=debug msg="Cleaning up old mountid : start."

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.504782696Z" level=debug msg="Cleaning up old mountid : done."

2022-07-30T12:10:42.505 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.504939218Z" level=debug msg="unmounting daemon root" mountpoint=/data/docker

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505201480Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505603055Z" level=debug msg="Clean shutdown succeeded"

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505628675Z" level=info msg="Daemon shutdown complete"

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505665585Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505745946Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.505937708Z" level=debug msg="received signal" signal=terminated

2022-07-30T12:10:42.506 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.506077780Z" level=debug msg="sd notification" error="<nil>" notified=false state="STOPPING=1"

2022-07-30T12:10:42.962 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.961851959Z" level=debug msg="checking docker activity"

2022-07-30T12:10:42.962 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:42.962196672Z" level=debug msg="Calling GET /v1.41/containers/json?filters=%7B%22status%22%3A%7B%22running%22%3Atrue%7D%7D&limit=0"

2022-07-30T12:10:43.516 app[d5683043a64d8e] lhr [info] time="2022-07-30T12:10:43.515368014Z" level=info msg="dockerd has exited"

2022-07-30T12:10:45.403 runner[d5683043a64d8e] lhr [info] machine exited with exit code 0, not restarting

Historically I’ve been able to destroy the builder and start over with a new one, e.g.:

fly apps destroy fly-builder-broken-wood-6619

Even after destroying my builder and attempting to deploy with a new instance, my build still hangs.

I’m not sure what the underlying cause is here. Am I being rate limited?
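One thing worth noting: the error output above shows flyctl trying to add `.git/objects/...` to the context tar, so the repository’s Git history is being shipped to the remote builder. A hedged mitigation (assuming flyctl honours `.dockerignore` and nothing in the build needs `.git`; this isn’t confirmed as the root cause here) is to exclude it:

```
# .dockerignore — keep VCS data out of the build context
.git
```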


For anyone else who encounters the same problem: I realised I could pass the --local-only flag instead, since I don’t strictly need a remote builder (I have the Docker daemon running locally):

fly deploy --local-only 

This is most likely because the “docker context” is very large. Does it say how much data it’s trying to send? --local-only is a good workaround since it doesn’t have to push it over a network.
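If flyctl doesn’t print the context size, a rough local estimate can be made by tarring the directory yourself (a sketch: `--exclude-vcs` is a GNU tar option, and this ignores any `.dockerignore` patterns, so treat the result as an upper bound):

```shell
# Approximate the bytes flyctl would send as the build context.
# --exclude-vcs skips .git; add --exclude flags for other large dirs.
tar --exclude-vcs -cf - . | wc -c
```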

I had this error as well. The context size was only 63.7 kB, which left me puzzled (my network is slow, but not that slow 🙂).

Also just switched to --local-only after I couldn’t resolve the issue.


I had this error as well.


This happens every time with my Fly projects. It makes no difference what size the container is; anything other than a --local-only deploy hangs indefinitely.

This worked in my case.

I had the same issue; I tested with the image from the example Serve Small With Fly.io and GoStatic.

# Dockerfile
FROM pierrezemb/gostatic

COPY ./public/ /srv/http/


It happens quite randomly; sometimes fly apps destroy your-builder-app helps, sometimes not. It seems to happen more often when running one deploy right after another, and less often when some time has passed since the previous build.

fly deploy --local-only helps and runs reliably in this case.


Bumping into this constantly as well. Destroying the builder rarely helps. Not sure how this is an issue when my context is pretty small (118 directories, 456 files). It’s a Laravel app that doesn’t have a lot of files, and the .dockerignore has been configured to be opt-in by directory. Running --local-only obviously works, and is totally fine, but I think it would be nice if this could be figured out.
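An opt-in (allow-list) `.dockerignore` like the one described might look like this sketch: everything is excluded by default, and only the paths the image needs are re-included (the entries below are illustrative Laravel-style paths, not the poster’s actual config):

```
# Opt-in .dockerignore: exclude everything, then re-include needed paths
*
!app/
!config/
!public/
!composer.json
!composer.lock
```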

I’ve also been having this issue for the past couple of days. I did a WireGuard reset, which worked for a while, but now I’ve had to go back to --local-only.