Elixir/Phoenix deploy hangs on `Creating build context done`

I took an existing Elixir (1.12) and Phoenix (1.6.0-rc.1) app and followed the directions in the Elixir getting started guide to set it up for Fly. When I try to deploy it, however, the deployment process just stops at `--> Creating build context done`. I can’t find anything in the logs and it isn’t producing any errors; it just hangs there until I kill the process.

Any idea what might cause this?

Hi @neurodynamic, could you post the log you got for this anyway? It helps to see the context in which this happened.

And is this happening repeatedly?

Yeah, it’s happening repeatedly. I can’t get any logs for the main app, but if I try to get logs for the remote builder, I see this:

2021-10-28T16:47:52.170 runner[80ee4aed] sjc [info] Restarting instance
2021-10-28T16:47:52.172 runner[80ee4aed] sjc [info] Pulling container image
2021-10-28T16:47:52.834 runner[80ee4aed] sjc [info] Unpacking image
2021-10-28T16:47:52.843 runner[80ee4aed] sjc [info] Preparing kernel init
2021-10-28T16:47:52.961 runner[80ee4aed] sjc [info] Setting up volume 'machine_data'
2021-10-28T16:47:53.083 runner[80ee4aed] sjc [info] Configuring firecracker
2021-10-28T16:47:53.498 app[80ee4aed] sjc [info] Running docker-entrypoint.d files
2021-10-28T16:47:53.503 app[80ee4aed] sjc [info] Setting up Docker data directory
2021-10-28T16:47:53.505 app[80ee4aed] sjc [info] Configuring ipv6 for docker
2021-10-28T16:47:53.509 app[80ee4aed] sjc [info] Done setting up docker!
2021-10-28T16:47:53.510 app[80ee4aed] sjc [info] Allowing ipv6 forwarding via sysctl
2021-10-28T16:47:53.512 app[80ee4aed] sjc [info] net.ipv6.conf.default.forwarding = 1
2021-10-28T16:47:53.513 app[80ee4aed] sjc [info] net.ipv6.conf.all.forwarding = 1
2021-10-28T16:47:53.513 app[80ee4aed] sjc [info] General sysctl tweaks
2021-10-28T16:47:53.513 app[80ee4aed] sjc [info] vm.swappiness = 0
2021-10-28T16:47:53.514 app[80ee4aed] sjc [info] vm.dirty_ratio = 6
2021-10-28T16:47:53.514 app[80ee4aed] sjc [info] vm.dirty_background_ratio = 3
2021-10-28T16:47:53.514 app[80ee4aed] sjc [info] net.core.rmem_default = 31457280
2021-10-28T16:47:53.515 app[80ee4aed] sjc [info] net.core.rmem_max = 33554432
2021-10-28T16:47:53.515 app[80ee4aed] sjc [info] net.core.wmem_default = 31457280
2021-10-28T16:47:53.516 app[80ee4aed] sjc [info] net.core.wmem_max = 33554432
2021-10-28T16:47:53.516 app[80ee4aed] sjc [info] net.core.somaxconn = 65535
2021-10-28T16:47:53.517 app[80ee4aed] sjc [info] net.core.netdev_max_backlog = 65536
2021-10-28T16:47:53.517 app[80ee4aed] sjc [info] net.core.optmem_max = 25165824
2021-10-28T16:47:53.518 app[80ee4aed] sjc [info] net.ipv4.tcp_mem = 786432 1048576 26777216
2021-10-28T16:47:53.518 app[80ee4aed] sjc [info] net.ipv4.udp_mem = 65536 131072 262144
2021-10-28T16:47:53.519 app[80ee4aed] sjc [info] net.ipv4.tcp_rmem = 8192 87380 33554432
2021-10-28T16:47:53.519 app[80ee4aed] sjc [info] net.ipv4.udp_rmem_min = 16384
2021-10-28T16:47:53.520 app[80ee4aed] sjc [info] net.ipv4.tcp_wmem = 8192 65536 33554432
2021-10-28T16:47:53.520 app[80ee4aed] sjc [info] net.ipv4.udp_wmem_min = 16384
2021-10-28T16:47:53.521 app[80ee4aed] sjc [info] Running /dockerproxy
2021-10-28T16:47:53.562 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.561744919Z" level=info msg="Build SHA:3bbe2a82a363f2f6339ddda4d3e8bb11b8fe90cd Time:2021-08-26T14:33:46Z"
2021-10-28T16:47:53.843 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.841853586Z" level=info msg="Starting up"
2021-10-28T16:47:53.852 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.852306135Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2021-10-28T16:47:53.852 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.852713900Z" level=debug msg="Listener created for HTTP on unix (/var/run/docker.sock)"
2021-10-28T16:47:53.852 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.852872357Z" level=warning msg="Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network." host="tcp://0.0.0.0:2375"
2021-10-28T16:47:53.852 app[80ee4aed] sjc [info] time="2021-10-28T16:47:53.852920838Z" level=warning msg="Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!" host="tcp://0.0.0.0:2375"
2021-10-28T16:47:54.855 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.853296119Z" level=debug msg="Listener created for HTTP on tcp (0.0.0.0:2375)"
2021-10-28T16:47:54.855 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.853346824Z" level=debug msg="Containerd not running, starting daemon managed containerd"
2021-10-28T16:47:54.859 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.858982858Z" level=info msg="libcontainerd: started new containerd process" pid=620
2021-10-28T16:47:54.860 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.859655109Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2021-10-28T16:47:54.860 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.859672572Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2021-10-28T16:47:54.860 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.859703239Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2021-10-28T16:47:54.860 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.859718067Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2021-10-28T16:47:54.957 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.957196501Z" level=info msg="starting containerd" revision=e25210fe30a0a703442421b0f60afac609f950a3 version=v1.4.9
2021-10-28T16:47:54.989 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.989490426Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
2021-10-28T16:47:54.989 app[80ee4aed] sjc [info] time="2021-10-28T16:47:54.989832347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.003 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.000868051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't find device 'aufs'\\nmodprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.003 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.000911262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.003 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.001087523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /data/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.003 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.001100066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.004 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.004447256Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
2021-10-28T16:47:55.004 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.004498873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.004810788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.005259229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.005488369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /data/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.005508677Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.005537301Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
2021-10-28T16:47:55.005 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.005550576Z" level=info msg="metadata content store policy set" policy=shared
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006310491Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006336891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006383739Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006425667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006442329Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006453800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006466183Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006482424Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.006 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006495168Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.007 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006515786Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.007 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006535734Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
2021-10-28T16:47:55.007 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006696475Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
2021-10-28T16:47:55.007 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.006811661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
2021-10-28T16:47:55.008 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.008785254Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
2021-10-28T16:47:55.008 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.008867057Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009435444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009488032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009502981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009515604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009527456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009543677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009559446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
2021-10-28T16:47:55.009 app[80ee4aed] sjc [info] time="2021-10-28T16:47:55.009575336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1

Thanks, that helps. Could you use a local Docker builder to move forward while we figure this out? You can do that by adding `--local-only` to your deploy command.
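
For example:

flyctl deploy --local-only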

@neurodynamic can you try again with `LOG_LEVEL=debug flyctl deploy ...` and share the output? The extra logs might help us pinpoint this.

LOG_LEVEL=debug flyctl deploy
DEBUG Loaded flyctl config from ~/.fly/config.yml
DEBUG Checking for updates...
DEBUG Loading app config from ~/Developer/my_app/fly.toml
DEBUG Working Directory: ~/Developer/my_app
DEBUG App Config File: ~/Developer/my_app/fly.toml
Deploying personal-site
==> Validating app configuration
DEBUG --> POST https://api.fly.io/graphql {{"query":"query($appName: String!, $definition: JSON!) { app(name: $appName) { parseConfig(definition: $definition) { definition valid errors services { description } } } }","variables":{"appName":"personal-site","definition":{"deploy":{"release_command":"/app/bin/hello_elixir eval HelloElixir.Release.migrate"},"env":{},"experimental":{"allowed_public_ports":[],"auto_rollback":true},"kill_signal":"SIGTERM","kill_timeout":5,"processes":[],"services":[{"concurrency":{"hard_limit":25,"soft_limit":20,"type":"connections"},"http_checks":[],"internal_port":4000,"ports":[{"handlers":["http"],"port":80},{"handlers":["tls","http"],"port":443}],"processes":["app"],"protocol":"tcp","script_checks":[],"tcp_checks":[{"grace_period":"30s","interval":"15s","restart_limit":6,"timeout":"2s"}]}]}}}
}
DEBUG <-- 200 https://api.fly.io/graphql (135.59ms) {"data":{"app":{"parseConfig":{"definition":{"kill_timeout":5,"kill_signal":"SIGTERM","processes":[],"deploy":{"release_command":"/app/bin/hello_elixir eval HelloElixir.Release.migrate"},"experimental":{"allowed_public_ports":[],"auto_rollback":true},"services":[{"processes":["app"],"protocol":"tcp","internal_port":4000,"concurrency":{"soft_limit":20,"hard_limit":25,"type":"connections"},"ports":[{"port":80,"handlers":["http"]},{"port":443,"handlers":["tls","http"]}],"tcp_checks":[{"interval":"15s","timeout":"2s","grace_period":"30s","restart_limit":6}],"http_checks":[],"script_checks":[]}],"env":{}},"valid":true,"errors":[],"services":[{"description":"TCP 80/443 ⇢ 4000"}]}}}}
--> Validating app configuration done
Services
TCP 80/443 ⇢ 4000
DEBUG trying local docker daemon
DEBUG Local docker daemon unavailable
DEBUG trying remote docker daemon
DEBUG Trying 'Buildpacks' strategy
DEBUG no buildpack builder configured, skipping
DEBUG result image:<nil> error:<nil>
DEBUG Trying 'Dockerfile' strategy
DEBUG --> POST https://api.fly.io/graphql {{"query":"mutation($input: EnsureMachineRemoteBuilderInput!) { ensureMachineRemoteBuilder(input: $input) { machine { id state ips { nodes { family kind ip } } }, app { name organization { slug } } } }","variables":{"input":{"appName":"personal-site","organizationId":null}}}
}
DEBUG <-- 200 https://api.fly.io/graphql (264.03ms) {"data":{"ensureMachineRemoteBuilder":{"machine":{"id":"80ee4aed","state":"starting","ips":{"nodes":[{"family":"v6","kind":"privatenet","ip":"fdaa:0:3728:a7b:ad0:0:4cac:2"},{"family":"v6","kind":"public","ip":"2604:1380:45e1:3002::4cac:3"},{"family":"v4","kind":"private","ip":"172.19.1.146"}]}},"app":{"name":"fly-builder-morning-pine-5931","organization":{"slug":"personal"}}}}}
DEBUG checking ip &{Family:v6 Kind:privatenet IP:fdaa:0:3728:a7b:ad0:0:4cac:2 MaskSize:0}
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣾ DEBUG --> POST https://api.fly.io/graphql {{"query":"query ($appName: String!) { app(name: $appName) { id name hostname deployed status version appUrl organization { id slug } services { description protocol internalPort ports { port handlers } } ipAddresses { nodes { id address type createdAt } } } }","variables":{"appName":"personal-site"}}
}
DEBUG <-- 200 https://api.fly.io/graphql (232.56ms) {"data":{"app":{"id":"personal-site","name":"personal-site","hostname":"personal-site.fly.dev","deployed":false,"status":"pending","version":0,"appUrl":null,"organization":{"id":"wwRXKpJ9nLRo8tBMBwqQopOqQRTn2o","slug":"personal"},"services":[{"description":"TCP 80/443 ⇢ 8080","protocol":"TCP","internalPort":8080,"ports":[{"port":80,"handlers":["HTTP"]},{"port":443,"handlers":["TLS","HTTP"]}]}],"ipAddresses":{"nodes":[]}}}}
DEBUG --> POST https://api.fly.io/graphql {{"query":"mutation($input: ValidateWireGuardPeersInput!) { validateWireGuardPeers(input: $input) { invalidPeerIps } }","variables":{"input":{"peerIps":["fdaa:0:3728:a7b:d0e:0:a:2"]}}}
}
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣽ DEBUG <-- 200 https://api.fly.io/graphql (140.1ms) {"data":{"validateWireGuardPeers":{"invalidPeerIps":[]}}}
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣯ DEBUG Remote builder unavailable, retrying in 200ms (err: error during connect: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/_ping": err err handling connect: connection failed: connect tcp [fdaa:0:3728:a7b:ad0:0:4cac:2]:2375: connection was refused)
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣷ DEBUG Remote builder unavailable, retrying in 238.170614ms (err: error during connect: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/_ping": err err handling connect: connection failed: connect tcp [fdaa:0:3728:a7b:ad0:0:4cac:2]:2375: connection was refused)
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣾ DEBUG Remote builder unavailable, retrying in 256.968098ms (err: error during connect: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/_ping": err err handling connect: connection failed: connect tcp [fdaa:0:3728:a7b:ad0:0:4cac:2]:2375: connection was refused)
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣽ DEBUG Remote builder unavailable, retrying in 206.024858ms (err: error during connect: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/_ping": err err handling connect: connection failed: connect tcp [fdaa:0:3728:a7b:ad0:0:4cac:2]:2375: connection was refused)
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣻ DEBUG Remote builder unavailable, retrying in 295.07379ms (err: error during connect: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/_ping": err err handling connect: connection failed: connect tcp [fdaa:0:3728:a7b:ad0:0:4cac:2]:2375: connection was refused)
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣷ DEBUG Remote builder available, but pinging again in 200ms to be sure
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣾ DEBUG Remote builder available, but pinging again in 236.38639ms to be sure
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣽ DEBUG Remote builder available, but pinging again in 280.587265ms to be sure
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⣻ DEBUG Remote builder available, but pinging again in 334.272754ms to be sure
Waiting for remote builder fly-builder-morning-pine-5931... connecting ⡿ DEBUG Remote builder is ready to build!
Remote builder fly-builder-morning-pine-5931 ready
==> Creating build context
--> Creating build context done

@michael the step after this is `serverInfo, err := docker.Info(ctx)`… I’m thinking a registry timeout, though I’m not sure that makes sense.

That’s a request to fetch info from the Docker daemon, which shouldn’t hang unless there’s an issue. I’m going to add a debug statement after that line to see whether that’s where it stops, then do a release we can try.

@neurodynamic when you have a moment, can you update flyctl (it should be version 0.0.250+) and try `LOG_LEVEL=debug flyctl deploy ...` again? There’s more debug output now, plus better error handling that will surface an error instead of failing silently.
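
For example, if flyctl was installed via the install script (Homebrew installs would use brew upgrade flyctl instead):

flyctl version update
LOG_LEVEL=debug flyctl deploy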

I have updated flyctl. I was also working on getting things running with `--local-only` in the interim. The very first error I hit with `--local-only` was that git wasn’t installed in the Docker container: when mix tried to install a hex package from that package’s master branch on GitHub (instead of by a package version number), it couldn’t find the git executable. Since that was the first error I encountered with `--local-only`, I think there’s a decent chance it was the problem with the remote deploy as well.

That said, after a few fixes I’m now seeing this in the deploy output for both `--local-only` and the regular deploy process:

	 Preparing to run: `/app/bin/my_app eval MyApp.Release.migrate` as nobody
	 2021/10/28 22:43:12 listening on [fdaa:0:3728:a7b:2295:9c64:b287:2]:22 (DNS: [fdaa::3]:53)
	 ** (EXIT from #PID<0.94.0>) an exception was raised:
	     ** (RuntimeError) SSL connection can not be established because `:ssl` application is not started,
	 you can add it to `extra_applications` in your `mix.exs`:
	   def application do
	     [extra_applications: [:ssl]]
	   end
	         (postgrex 0.15.10) lib/postgrex.ex:606: Postgrex.ensure_deps_started!/1
	         (postgrex 0.15.10) lib/postgrex.ex:548: Postgrex.child_spec/1
	         (ecto_sql 3.7.0) lib/ecto/adapters/sql.ex:609: Ecto.Adapters.SQL.init/3
	         (ecto 3.7.1) lib/ecto/repo/supervisor.ex:172: Ecto.Repo.Supervisor.init/1
	         (stdlib 3.15) supervisor.erl:330: :supervisor.init/1
	         (stdlib 3.15) gen_server.erl:423: :gen_server.init_it/2
	         (stdlib 3.15) gen_server.erl:390: :gen_server.init_it/6
	         (stdlib 3.15) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
	 Main child exited normally with code: 1
	 Reaped child process with pid: 561 and signal: SIGUSR1, core dumped? false
	 Starting clean up.

Error Release command failed, deployment aborted

This fix does not seem to have any effect, and adding `:ssl` to `extra_applications` doesn’t seem to help either.
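
(For reference, that change in mix.exs looked roughly like this; everything besides :ssl is the usual Phoenix default:)

  def application do
    [
      mod: {MyApp.Application, []},
      extra_applications: [:logger, :runtime_tools, :ssl]
    ]
  end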

This fix seemed to get me one step further. Now the error I’m seeing is this:

	 Preparing to run: `/app/bin/my_app eval MyApp.Release.migrate` as nobody
	 2021/10/29 02:42:20 listening on [fdaa:0:3728:a7b:ad1:ad4a:fbc5:2]:22 (DNS: [fdaa::3]:53)
	 Reaped child process with pid: 561 and signal: SIGUSR1, core dumped? false
	 02:42:21.967 [error] Postgrex.Protocol (#PID<0.161.0>) failed to connect: ** (Postgrex.Error) ssl not available
	 02:42:21.967 [error] Postgrex.Protocol (#PID<0.162.0>) failed to connect: ** (Postgrex.Error) ssl not available
	 02:42:23.238 [error] Postgrex.Protocol (#PID<0.162.0>) failed to connect: ** (Postgrex.Error) ssl not available
	 02:42:23.927 [error] Could not create schema migrations table. This error usually happens due to the following:
	   * The database does not exist
	   * The "schema_migrations" table, which Ecto uses for managing
	     migrations, was defined by another library
	   * There is a deadlock while migrating (such as using concurrent
	     indexes with a migration_lock)
	 To fix the first issue, run "mix ecto.create".
	 To address the second, you can run "mix ecto.drop" followed by
	 "mix ecto.create". Alternatively you may configure Ecto to use
	 another table and/or repository for managing migrations:
	     config :my_app, MyApp.Repo,
	       migration_source: "some_other_table_for_schema_migrations",
	       migration_repo: AnotherRepoForSchemaMigrations
	 The full error report is shown below.
	 ** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2979ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
	   1. Ensuring your database is available and that you can connect to it
	   2. Tracking down slow queries and making sure they are running fast enough
	   3. Increasing the pool_size (albeit it increases resource consumption)
	   4. Allowing requests to wait longer by increasing :queue_target and :queue_interval
	 See DBConnection.start_link/2 for more information
	     (ecto_sql 3.7.0) lib/ecto/adapters/sql.ex:756: Ecto.Adapters.SQL.raise_sql_call_error/1
	     (elixir 1.12.1) lib/enum.ex:1553: Enum."-map/2-lists^map/1-0-"/2
	     (ecto_sql 3.7.0) lib/ecto/adapters/sql.ex:844: Ecto.Adapters.SQL.execute_ddl/4
	     (ecto_sql 3.7.0) lib/ecto/migrator.ex:645: Ecto.Migrator.verbose_schema_migration/3
	     (ecto_sql 3.7.0) lib/ecto/migrator.ex:473: Ecto.Migrator.lock_for_migrations/4
	     (ecto_sql 3.7.0) lib/ecto/migrator.ex:388: Ecto.Migrator.run/4
	     (ecto_sql 3.7.0) lib/ecto/migrator.ex:146: Ecto.Migrator.with_repo/3
	     (my_app 0.1.0) lib/my_app/release.ex:13: anonymous fn/2 in MyApp.Release.migrate/0
	 Main child exited normally with code: 1
	 Reaped child process with pid: 563 and signal: SIGUSR1, core dumped? false
	 Starting clean up.
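
(For context, MyApp.Release is the stock release-migration helper from the Phoenix deployment guide, roughly:)

defmodule MyApp.Release do
  @app :my_app

  # Runs all pending migrations for each configured repo.
  # Invoked from the release via: /app/bin/my_app eval MyApp.Release.migrate
  def migrate do
    Application.load(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end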

The connection to Fly-hosted Postgres should not use SSL. Can you paste your Repo config here? It should look something like this (in runtime.exs); note that SSL isn’t enabled and `:inet6` is:

  database_url =
    System.get_env("DATABASE_URL") ||
      raise """
      environment variable DATABASE_URL is missing.
      For example: ecto://USER:PASS@HOST/DATABASE
      """

  config :hello_elixir, HelloElixir.Repo,
    url: database_url,
    # IMPORTANT: Or it won't find the DB server
    socket_options: [:inet6],
    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")

Here it is:

if config_env() == :prod do
  database_url =
    System.get_env("DATABASE_URL") ||
      raise """
      environment variable DATABASE_URL is missing.
      For example: ecto://USER:PASS@HOST/DATABASE
      """

  config :my_app, MyApp.Repo,
    # ssl: true,
    socket_options: [:inet6],
    url: database_url,
    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")

OH! But I still had SSL on for the Repo in prod.exs. Turning it off there seems to have worked!
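
(The culprit in config/prod.exs was a line roughly like the one below; deleting it, or setting it to false, resolved the “ssl not available” errors:)

config :my_app, MyApp.Repo,
  ssl: true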

It’s working now; thanks everyone!!

Hm. Maybe the time it worked was a fluke. Now when I try to deploy (flyctl v0.0.250 darwin/amd64 Commit: 7a90db9 BuildDate: 2021-10-28T20:49:47Z), I’m getting the same hang as before, but with some output (I assume this output is what was added in 0.0.249?):

Deploying my-app
==> Validating app configuration
--> Validating app configuration done
Services
TCP 80/443 ⇢ 4000
Remote builder fly-builder-morning-pine-5931 ready
==> Creating build context
--> Creating build context done

Error error fetching docker server info: Get "http://[fdaa:0:3728:a7b:ad0:0:4cac:2]:2375/v1.41/info": context deadline exceeded

Yeah, we did add this log in the latest flyctl to find out what it was hanging on. Now we know 🙂 @michael

Can I suggest shifting to `--local-only` if this keeps happening? We’ll figure out what’s up with this fetch.

Can do.

No pressure, and I know this probably isn’t strictly a flyctl problem, but if you happen to know off the top of your head: I’m pretty new to Docker, and when doing `--local-only` builds I’ve found that the Docker.raw file on my Mac ends up taking about 20 GB of space after a few builds. Is this normal, and/or is there a way to reduce the amount of space it takes up?

Yeah, I think the `docker builder prune` command (see the Docker documentation) clears out the build cache and extra layers.
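
For example:

docker builder prune

(docker system df is also handy for checking how much space the build cache is taking.)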

Let me know if that doesn’t work? I think there’s another option in the Docker GUI as well; I can search for it if this doesn’t handle it.

That does seem to work; thank you!
