Failure with flyctl deploy using the Rails buildpack

First-time Fly user here. I've tried building a (fairly empty) Rails app to start out, and I'm hitting a failure on every use of fly deploy. Over the last 12 hours I've retried about 10 times, including with a local Docker installation (which I had never used before), to rule out temporary network connectivity issues, and I've deleted the builder instance about 5 times, etc., but I always get stuck after the 1st or 2nd buildpack download.

Pulling image index.docker.io/heroku/buildpacks:20
20: Pulling from heroku/buildpacks
… snip of downloading interface

The instances then hit the timeout and get forcibly shut down by the platform, e.g.

2022-03-23T13:53:56.200 app[2d24b200] hkg [info] time="2022-03-23T13:53:56.199976148Z" level=info msg="Deadline reached without docker build"

Is there perhaps some network issue happening between your hkg datacenter and Heroku? I'm trying to put the app in nrt, but I believe I'm getting auto-assigned to Hong Kong, at least for builders. I was able to successfully start a postgres cluster in nrt.

What does the Fly-Region header say on https://debug.fly.dev?
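
If it's easier to check from a terminal, something like this should work (a sketch, assuming curl is installed; debug.fly.dev echoes back the request headers Fly's edge adds, so Fly-Region shows up in the response body):

curl -s https://debug.fly.dev | grep -i fly-region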

Also, what error did you see when you ran Docker locally?

Both the header and the ENV section say hkg.

I have lost the error from when I tried locally. Sorry, I was in bash-at-keyboard mode at that point, and I have zero Docker experience, so I didn't try debugging beyond doing the quickstart to install Docker onto my Ubuntu VM and then running LOG_LEVEL=debug flyctl deploy --local-only. Will try again after the current remote build completes (successfully or not).

Edit to add:

The output from a retry on Docker:

=> Building image with Buildpacks
--> docker host: 20.10.13 linux aarch64
Pulling image index.docker.io/heroku/buildpacks:20
20: Pulling from heroku/buildpacks
Digest: sha256:899e9d8814f8f6d741ef89c8a76c57333f5755b6dfeb570dd4cdedab03967576
Status: Image is up to date for heroku/buildpacks:20
Selected run image heroku/pack:20
Pulling image heroku/pack:20
20: Pulling from heroku/pack
Digest: sha256:dc1c6676017aaa9e92e18658b5bba1209920dc2e9a780ff7b790b14b45c96161
Status: Image is up to date for heroku/pack:20
Creating builder with the following buildpacks:
-> heroku/java@0.3.15
-> heroku/gradle@0.0.35
-> heroku/jvm@0.1.14
-> heroku/maven@0.2.6
-> heroku/procfile@0.6.2
-> heroku/scala@0.0.92
-> heroku/java-function@0.3.28
-> heroku/jvm@0.1.14
-> heroku/jvm-function-invoker@0.6.1
-> heroku/maven@0.2.6
-> heroku/ruby@0.1.3
-> heroku/procfile@0.6.2
-> heroku/python@0.3.1
-> heroku/php@0.3.1
-> heroku/go@0.3.1
-> heroku/nodejs@0.5.0
-> heroku/nodejs-engine@0.8.0
-> heroku/nodejs-npm@0.5.0
-> heroku/nodejs-yarn@0.2.0
-> heroku/procfile@0.6.2
-> heroku/nodejs-function@0.8.0
-> heroku/nodejs-engine@0.8.0
-> heroku/nodejs-function-invoker@0.2.10
-> heroku/nodejs-npm@0.5.0
Using build cache volume pack-cache-salaryman_cache-9fcb83ebabe0.build
Running the creator on OS linux with:
Container Settings:
  Args: /cnb/lifecycle/creator -daemon -launch-cache /launch-cache -log-level debug -app /workspace -cache-dir /cache -run-image heroku/pack:20 -tag registry.fly.io/salaryman:deployment-1648045351 -gid 0 registry.fly.io/salaryman:cache
  System Envs: CNB_PLATFORM_API=0.6
  Image: pack.local/builder/746f66656a7070746579:latest
  User: root
  Labels: map[author:pack]
Host Settings:
  Binds: pack-cache-salaryman_cache-9fcb83ebabe0.build:/cache /var/run/docker.sock:/var/run/docker.sock pack-cache-salaryman_cache-9fcb83ebabe0.launch:/launch-cache pack-layers-tcuinbqtnk:/layers pack-app-qdrtnpyufr:/workspace
  Network Mode: 

It appears to hang at this point. I have less confidence that my Docker installation is working properly than I do that Fly is; in particular, this is running in a Vagrant machine, which may not be able to configure its networking automatically (due to Apple Mac M1 / VMware Fusion only-partial-compatibility tomfoolery).

Local Docker won't time out a build that's been hung for a while. If local Docker is failing, it might give you a better idea of what's up.

These buildpacks are huge and slow to download/generate. I'm sad but not 100% surprised that buildpacks are working poorly from Hong Kong.

We should also try and get you routed to not-Hong Kong. Will you run a traceroute or mtr to debug.fly.dev and put it here?
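
For reference, either of these should do it (assuming the tools are installed; mtr's --report flag sends a fixed number of probes and prints a summary):

traceroute debug.fly.dev
mtr --report debug.fly.dev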

Traceroute output:

traceroute to debug.fly.dev (77.83.140.164), 30 hops max, 60 byte packets
 1  172.16.86.2 (172.16.86.2)  0.341 ms  0.309 ms  0.298 ms
 2  192.168.3.1 (192.168.3.1)  2.609 ms  5.200 ms  5.190 ms
 3  * * *
 4  softbank221110214205.bbtec.net (221.110.214.205)  25.166 ms  39.076 ms  39.066 ms
 5  softbank221110184003.bbtec.net (221.110.184.3)  29.609 ms  29.596 ms  29.569 ms
 6  * * *
 7  * * *
 8  * * *
 9  softbank221110131230.bbtec.net (221.110.131.230)  34.994 ms  34.970 ms  34.948 ms
10  HundredGE0-3-0-0.br03.hkg12.pccwbtn.net (63.218.174.189)  80.758 ms HundredGE0-7-0-2.br03.hkg12.pccwbtn.net (63.218.174.113)  83.092 ms *
11  TenGE0-1-0-1.br03.hkg12.pccwbtn.net (63.218.174.37)  93.932 ms HundredGE0-3-0-0.br03.hkg12.pccwbtn.net (63.218.174.189)  80.663 ms HundredGE0-3-0-1.br03.hkg12.pccwbtn.net (63.218.174.201)  75.297 ms
12  * * *
13  * static.anycast.net (103.84.152.37)  78.333 ms *
14  static.anycast.net (103.84.152.37)  84.976 ms * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

See how long it hangs. Buildpacks generate a whole bunch of intermediate layers without much useful output; it wouldn't surprise me if it spends 10+ minutes prepping the environment and then continues.

Got this after about 20 minutes (on local Docker):

Network Mode: 
DEBUG result image:<nil> error:executing lifecycle: archive/tar: write too long
Error failed to fetch an image or build from source: executing lifecycle: archive/tar: write too long

Oh boy. That’s a mess.

Is this a Rails 7 app? You will likely have more luck with a Dockerfile than a Buildpack, and Rails 7 doesn’t need most of what’s in a buildpack.

To use a Dockerfile, remove the [build] block in fly.toml and create a Dockerfile in your project. This one should work: rails7-on-docker/Dockerfile at main · ryanwi/rails7-on-docker · GitHub
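
For a sense of the shape such a Dockerfile takes, here's a minimal sketch (not the linked file verbatim; the base image tag, package list, and port are placeholders to adjust for your app):

FROM ruby:3.1-slim

# System packages commonly needed to build gems and compile assets
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs git

ENV RAILS_ENV=production
WORKDIR /app

# Install gems first so this layer is cached across code-only changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

# A dummy SECRET_KEY_BASE is enough to precompile assets at build time
RUN SECRET_KEY_BASE=placeholder bin/rails assets:precompile

# Match whatever internal_port your fly.toml declares (8080 is the default)
EXPOSE 8080
CMD ["bin/rails", "server", "-b", "0.0.0.0", "-p", "8080"]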

This is not just a you problem, for what it's worth. Buildpacks were a good shortcut for Rails apps, but they're flaky and brittle, and we have a better setup coming real-soon-now.

Gotcha! Thanks for the help. I found a Japanese post with a cookie-cutter Dockerfile for Rails 6 / Node.js and will try editing my way towards victory. Will let you know if it works.

Here’s another simple one that works well on Fly: rails-nix/Dockerfile at main · fly-apps/rails-nix · GitHub.

Do you need webpack?

@patio11 when you get a moment, will you look at https://debug.fly.dev again and see if you’re hitting something better than Hong Kong?

Documenting some of what I’ve learned for the benefit of future users trying to search for keywords:

Fly blows up by default if you are building a Dockerfile locally with Vagrant:

This is because you'll recursively send the .vagrant directory (which contains the VM itself) to the Docker daemon as build context; it keeps getting larger, causing the context to get larger, etc. etc., and you will be sad. Also, large Docker contexts mean more has to go over the wire (either to local Docker or to Fly).

Solution: use .dockerignore aggressively. Mine excludes the Vagrant VM (necessary), and I'd strongly consider excluding redundant copies of your node dependencies too. Because my app pulls in FontAwesome (which weighs in at 400MB), both node_modules and public/packs get very large. Rather than shipping them over the wire to Fly on every deploy, I'd rather have Fly grab those dependencies from npm from its own datacenter (which hopefully has better bandwidth than my house) and then cache them using standard Docker layer behavior.

.git
.vagrant
node_modules
tmp
log
public/assets
public/packs
.bundle
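
If you want a rough sense of how much that saves, plain du on the excluded directories is enough (nothing Docker-specific; assumes a Unix-ish shell):

du -sh .vagrant node_modules public/packs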

And here is my Dockerfile, which now works for Rails 6 with Tailscale enabled, if anyone is looking for inspiration in the future. It relies heavily on code cribbed from @jsierles's Dockerfile above, but does not explicitly target Nix, simply because I don't know what that is. (Again, very, very new to Docker here.)

.vagrant isn’t something I’ve seen in a Docker context before. We definitely should throw a huge number of warnings when the context balloons.