Fly builders running out of disk space causing deploy to fail

Starting late yesterday evening, our application began failing to deploy because its build process runs out of disk space partway through.

Snippet from the job:

#21 24.06 error: could not compile `crunchy` due to previous error
#21 24.06 warning: build failed, waiting for other jobs to finish...
#21 24.17 error: failed to write /usr/src/app/target/release/deps/rmetaNWeNef/lib.rmeta: No space left on device (os error 28)
#21 24.17 
#21 24.53 LLVM ERROR: IO failure on output stream: No space left on device
#21 24.55 error: build failed

Is there a published limit on the build environment resources that applications can use with remote builders?

Hello! We don’t really have a limit, but remote builders are created with 50 GB volumes by default. There are a few workarounds:

  • delete the remote builder app so you get a fresh one on the next build
  • ssh into the builder and run docker system prune to purge cache/dangling layers

And this might work, though I haven’t tried it:

  • create a new volume with the same name and a bigger size (e.g. flyctl volumes create <name> --size 100, where the size is in GB), then delete the old one. The next start should use the bigger one
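For reference, the first two workarounds might look like this on the command line. The builder app name below is a hypothetical example; find your real one with flyctl apps list.

```shell
# Find your builder app's name (assumption: it starts with "fly-builder").
flyctl apps list | grep fly-builder

# Workaround 1: destroy the builder app entirely; a fresh one is created
# automatically on your next build.
flyctl apps destroy fly-builder-example

# Workaround 2: keep the builder, but clear Docker's build cache and
# dangling layers over SSH.
flyctl ssh console --app fly-builder-example --command "docker system prune --force"
```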

Expanding volumes is on our roadmap, but no timeframe.


Is there a way to programmatically get at the builder app name? We’re looking to SSH in and run docker system prune from GitHub Actions after a deploy.

This worked for me:

flyctl apps list | grep fly-builder | head -n1 | awk '{print $1;}' | xargs fly ssh console --command "docker system prune --force" --app


Glad to hear you have a solution!

You may have already seen either of these, but I wanted to highlight them in any case:

  • flyctl supports JSON output too, with the --json or -j flags
  • there’s also an API endpoint for your organization’s builders, as seen on GraphQL Playground – you can also grab the builder with something like
query {
  personalOrganization {
    remoteBuilderApp {
      name
    }
  }
}
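For the scripted GitHub Actions case, the --json flag pairs nicely with jq. A sketch, assuming flyctl apps list --json emits an array of objects with a Name field (verify the exact JSON shape against your flyctl version):

```shell
# Assumption: `flyctl apps list --json` returns a JSON array of app objects,
# each with a "Name" field. Pick the first app whose name starts with
# "fly-builder", then prune Docker on it over SSH.
builder=$(flyctl apps list --json \
  | jq -r '.[] | select(.Name | startswith("fly-builder")) | .Name' \
  | head -n1)

flyctl ssh console --app "$builder" --command "docker system prune --force"
```

Compared to the grep/awk one-liner, this avoids depending on the column layout of the human-readable table output, which can change between flyctl versions.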

Just to throw a +1 on here: I ran into this myself. I ended up just resizing the volume for now, but it would be nice for builders to have some first-class way to manage the Docker cache, since I imagine unbounded Docker cache growth will be a common problem for people.