First-time Fly user here. I tried building a (fairly empty) Rails app to start out, and every fly deploy fails. Over the last 12 hours I have retried about 10 times, including with a local Docker installation (which I had never used before) to rule out temporary network connectivity issues, and deleted the builder instance about 5 times, but I always get stuck after the first or second buildpack download.
Pulling image Docker Hub
20: Pulling from heroku/buildpacks
… snip of downloading interface
The instances then hit the timeout and get force-stopped by the platform, e.g.
Is there perhaps some network issue happening between your hkg datacenter and Heroku? I’m trying to put the app in nrt but believe I’m getting auto-assigned to Hong Kong, at least for builders. I was able to successfully start a postgres cluster in nrt.
I have lost the error output from when I tried locally. Sorry, I was in bash-at-keyboard mode at that point, and with zero Docker experience I didn’t try debugging beyond following the quickstart to install Docker onto my Ubuntu VM and then running LOG_LEVEL=debug flyctl deploy --local-only. I will try again after the current remote build completes (successfully or not).
Edit to add:
The output from a retry on Docker:
=> Building image with Buildpacks
--> docker host: 20.10.13 linux aarch64
Pulling image index.docker.io/heroku/buildpacks:20
20: Pulling from heroku/buildpacks
Status: Image is up to date for heroku/buildpacks:20
Selected run image heroku/pack:20
Pulling image heroku/pack:20
20: Pulling from heroku/pack
Status: Image is up to date for heroku/pack:20
Creating builder with the following buildpacks:
Using build cache volume pack-cache-salaryman_cache-9fcb83ebabe0.build
Running the creator on OS linux with:
Args: /cnb/lifecycle/creator -daemon -launch-cache /launch-cache -log-level debug -app /workspace -cache-dir /cache -run-image heroku/pack:20 -tag registry.fly.io/salaryman:deployment-1648045351 -gid 0 registry.fly.io/salaryman:cache
System Envs: CNB_PLATFORM_API=0.6
Binds: pack-cache-salaryman_cache-9fcb83ebabe0.build:/cache /var/run/docker.sock:/var/run/docker.sock pack-cache-salaryman_cache-9fcb83ebabe0.launch:/launch-cache pack-layers-tcuinbqtnk:/layers pack-app-qdrtnpyufr:/workspace
It appears to hang at this point. I have less confidence that my Docker installation is working properly than I do in Fly; in particular, this is running in a Vagrant machine which may not be able to configure networking automatically (due to Apple Mac M1 / VMware Fusion only-partial-compatibility tomfoolery).
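For anyone else stuck at this step: since the creator runs as an ordinary container against the local daemon, a few standard Docker commands can show whether it is actually hung or just quiet. (This is a generic debugging sketch; the container ID is a placeholder you would fill in from the `docker ps` output.)

```
docker ps                       # find the running buildpack "creator" container
docker logs -f <container-id>   # follow its output for signs of progress
docker stats <container-id>     # ongoing CPU/network activity suggests it is still working
```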
See how long it hangs. Buildpacks generate a whole bunch of intermediate layers without much useful output; it wouldn’t surprise me if it spends 10+ minutes prepping the environment and then continues.
Documenting some of what I’ve learned for the benefit of future users trying to search for keywords:
Fly blows up by default if you are building a Dockerfile locally w/ Vagrant:
This is because you’ll recursively send the .vagrant directory (which contains the VM’s own disk image) to the Docker daemon as build context; the build grows the VM’s disk, which grows the context on the next build, etc etc, and you will be sad. Also, a large Docker context means more data needs to go over the wire (either to Docker locally or to Fly).
Solution: use .dockerignore aggressively. Mine excludes the Vagrant VM (necessary), and you should strongly consider also excluding redundant copies of your node dependencies. Because my app pulls in FontAwesome (which weighs in at 400MB), both node_modules and public/packs get very large. Rather than shipping them over the wire to Fly on every deploy, I’d rather have Fly grab those dependencies from npm within its own datacenter (which hopefully has better bandwidth than my house) and then cache them using standard Docker layer-caching behavior.
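For reference, a minimal .dockerignore along these lines might look like the following. The exact entries are app-specific assumptions on my part; public/packs assumes a Webpacker-style Rails setup, and log/ and tmp/ are just the usual Rails noise you don’t want in an image:

```
.vagrant/
.git/
node_modules/
public/packs/
log/
tmp/
```

With node_modules excluded, the dependencies get rebuilt inside the image instead, and Docker can cache that layer as long as package.json and the lockfile haven’t changed.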
And here is my Dockerfile, which now works for Rails 6 with Tailscale enabled, in case anyone is looking for inspiration in the future. It heavily relies on code cribbed from @joshua’s above, but does not explicitly target Nix, simply because I don’t know what that is. (Again: very, very new to Docker here.)