When I deploy my app, I get the following warning:

WARNING The app is listening on the incorrect address and will not be reachable by fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
- 0.0.0.0:3000
❯❯❯ fly deploy
==> Verifying app config
Validating /Users/furukawaeiichi/PF/CREDO_QUEST/fly.toml
Platform: machines
✓ Configuration is valid
--> Verified app config
==> Building image
Remote builder fly-builder-bold-pine-4970 ready
==> Creating build context
--> Creating build context done
==> Building image with Docker
--> docker host: 20.10.12 linux x86_64
[+] Building 0.9s (0/1)
[+] Building 11.0s (22/22) FINISHED
=> [internal] load remote build context 0.0s
=> copy /context / 0.1s
=> resolve image config for docker.io/docker/dockerfile:1 1.8s
=> CACHED docker-image://docker.io/docker/dockerfile:1@sha256:39b85bbfa753 0.0s
=> [internal] load metadata for docker.io/library/ruby:3.2.2-slim 0.5s
=> [base 1/3] FROM docker.io/library/ruby:3.2.2-slim@sha256:506427360ecafe 0.0s
=> => resolve docker.io/library/ruby:3.2.2-slim@sha256:506427360ecafed7853 0.0s
=> CACHED [base 2/3] WORKDIR /rails 0.0s
=> CACHED [base 3/3] RUN gem update --system --no-document && gem inst 0.0s
=> CACHED [build 1/9] RUN apt-get update -qq && apt-get install --no-i 0.0s
=> CACHED [build 2/9] RUN curl -sL https://github.com/nodenv/node-build/ar 0.0s
=> CACHED [build 3/9] COPY --link Gemfile Gemfile.lock ./ 0.0s
=> CACHED [build 4/9] RUN bundle install && bundle exec bootsnap preco 0.0s
=> CACHED [build 5/9] COPY --link package.json package-lock.json ./ 0.0s
=> CACHED [build 6/9] RUN npm install 0.0s
=> [build 7/9] COPY --link . . 0.0s
=> [build 8/9] RUN bundle exec bootsnap precompile app/ lib/ 1.1s
=> [build 9/9] RUN SECRET_KEY_BASE=DUMMY ./bin/rails assets:precompile 2.2s
=> CACHED [stage-2 1/4] RUN apt-get update -qq && apt-get install --no 0.0s
=> CACHED [stage-2 2/4] RUN useradd rails --home /rails --shell /bin/bash 0.0s
=> CACHED [stage-2 3/4] COPY --from=build /usr/local/bundle /usr/local/bun 0.0s
=> [stage-2 4/4] COPY --from=build --chown=rails:rails /rails /rails 1.6s
=> exporting to image 1.3s
=> => exporting layers 1.3s
=> => writing image sha256:5c7d9e09939bc99104f4741a9767b2288bbca75d1354fb3 0.0s
=> => naming to registry.fly.io/credo-quest:deployment-01H16BDZBBK21PN1HJ7 0.0s
--> Building image done
==> Pushing image to fly
The push refers to repository [registry.fly.io/credo-quest]
4091752f25ed: Pushed
f5a9dbf5fdce: Layer already exists
eec85ecff9dd: Layer already exists
0ecf44ba198f: Layer already exists
45c412f5b074: Layer already exists
6bfc3093bd6f: Layer already exists
9c7bb37dd5da: Layer already exists
6c6b038e2f32: Layer already exists
b25744fadbf4: Layer already exists
ed4ba59a6d86: Layer already exists
8cbe4b54fa88: Layer already exists
deployment-01H16BDZBBK21PN1HJ7QC35Z25: digest: sha256:119419100f9e13887ac054d009a165afc52bbb7f6f2bcf718b89e6cc16636d8f size: 2627
--> Pushing image done
image: registry.fly.io/credo-quest:deployment-01H16BDZBBK21PN1HJ7QC35Z25
image size: 325 MB
Watch your app at https://fly.io/apps/credo-quest/monitoring
Updating existing machines in 'credo-quest' with rolling strategy
[1/2] Machine 91857272a44328 [app] update finished: success
WARNING The app is listening on the incorrect address and will not be reachable by fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
- 0.0.0.0:3000
Found these processes inside the machine with open listening sockets:
PROCESS | ADDRESSES
----------------+--------------------------------------
/.fly/hallpass | [fdaa:2:1661:a7b:ff:9bf6:5e8d:2]:22
[2/2] Machine 148ed5d2b73e38 [app] update finished: success
Finished deploying
Visit your newly deployed app at https://credo-quest.fly.dev/
This is what was displayed.
So I changed config/puma.rb, replacing the `port` directive with an explicit `bind`:
# port ENV.fetch("PORT") { 3000 }
bind "tcp://0.0.0.0:#{ENV['PORT'] || 3000}"
# Puma can serve each request in a thread from an internal thread pool.
# The `threads` method setting takes two numbers: a minimum and maximum.
# Any libraries that use thread pools should be configured to match
# the maximum value specified for Puma. Default is set to 5 threads for minimum
# and maximum; this matches the default thread size of Active Record.
#
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
# Specifies the `worker_timeout` threshold that Puma will use to wait before
# terminating a worker in development environments.
#
worker_timeout 3600 if ENV.fetch("RAILS_ENV", "development") == "development"
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
# port ENV.fetch("PORT") { 3000 }
bind "tcp://0.0.0.0:#{ENV['PORT'] || 3000}"
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the `pidfile` that Puma will use.
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }
# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked web server processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }
# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory.
#
# preload_app!
# Allow puma to be restarted by `bin/rails restart` command.
plugin :tmp_restart
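
As a quick sanity check (a standalone snippet, not part of the app), the new `bind` line should expand to `tcp://0.0.0.0:3000` whenever `PORT` is unset:

```ruby
# Reproduce the string interpolation from config/puma.rb to confirm
# what address Puma will actually bind to when PORT is not set.
port = ENV['PORT'] || 3000
bind_address = "tcp://0.0.0.0:#{port}"
puts bind_address
```

For fly-proxy to reach the app, `internal_port` in fly.toml has to match this port (3000 here).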
However, the warning is still not resolved.
Since it looked like a similar problem, I also read a related article, but that did not resolve it either.
What is the problem? Could you give me some advice on how to solve it?
❯❯❯ fly status
App
Name = credo-quest
Owner = personal
Hostname = credo-quest.fly.dev
Image = credo-quest:deployment-01H16BDZBBK21PN1HJ7QC35Z25
Platform = machines
Machines
PROCESS ID VERSION REGION STATE CHECKS LAST UPDATED
app 148ed5d2b73e38 41 nrt started 2023-05-24T07:50:58Z
app 91857272a44328 41 nrt started 2023-05-24T07:50:28Z
❯❯❯ fly scale show
VM Resources for app: credo-quest
Groups
NAME COUNT KIND CPUS MEMORY REGIONS
app 2 shared 1 256 MB nrt(2)
Thank you kindly for your help.
Now the following error is displayed:
Caused by:
PG::ConnectionBad: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
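
To narrow this down, I checked which host and port the app is actually trying to reach. A minimal sketch, assuming the standard `DATABASE_URL` convention (the fallback URL below is a made-up placeholder; the real value is stored as a Fly secret):

```ruby
require 'uri'

# Parse the connection target out of DATABASE_URL.
# The fallback URL here is hypothetical, purely for illustration.
url = ENV.fetch('DATABASE_URL', 'postgres://app:secret@credo-quest-db.flycast:5432/app')
uri = URI.parse(url)
puts "host: #{uri.host}"                    # the server the app will dial
puts "port: #{uri.port}"
puts "database: #{uri.path.delete_prefix('/')}"
```

If the host or port printed here does not match a running Postgres machine, that would explain the dropped connection.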
Thank you for the advice. However, the command didn't work: I could select either of the two machines, but both gave me the same error.
fly -a credo-quest machine update --autostart --select
? Select a machine: 148ed5d2b73e38 spring-lake-8655 (stopped, region nrt, process group 'app')
Error: no config changes found
fly -a credo-quest machine update --autostart --select
? Select a machine: 91857272a44328 solitary-frog-7806 (stopped, region nrt, process group 'app')
Error: no config changes found