Machine doesn't start, stays at "waiting for machine to be reachable"

Hi, I'm having an issue when deploying: my server machine never starts. It just stays at "waiting for machine to be reachable on 0.0.0.0:3001" and then gives up after a few tries. Locally the server works, but on Fly it never starts. The logs don't say what's causing it, and I can't SSH in if the machine doesn't start. At one point I thought this was related to missing env variables, so I added them and did a clean deploy, but it still doesn't start.

Fly.toml:

app = 'servername'
primary_region = 'mia'

[build]

[http_service]
internal_port = 3001
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 1
processes = ['app']

[[vm]]
memory = '1gb'
cpu_kind = 'shared'
cpus = 1

Dockerfile:

# NOTE: Why do we specify the alpine version here?
#   Because if we don't, we had situations where a different version was used
#   locally and on GitHub CI. This way we ensure the exact version is used,
#   and we also have control over updating it (instead of an update surprising us).
FROM node:18.18.0-alpine3.17 AS node


# We split Dockerfile into base, server-builder and server-production.
# This way we have separate situations -> in server-builder we build all
# we need to run the server, and then in server-production we start fresh
# and just copy what we need from server-builder, avoiding intermediate
# artifacts and any settings / pollution we don't need in production
# but only for building.


FROM node AS base
RUN apk --no-cache -U upgrade # To ensure any potential security patches are applied.


# Todo: The 'server-builder' image stays on disk under <none>:<none> and is
# relatively large (~900 MB), should we remove it? Or is it useful for future
# builds?
FROM base AS server-builder
# Building the Docker image on Apple Silicon Macs fails without python3 (the build
# throws `node-gyp` errors when it tries to compile native deps). Installing
# `python3` fixes the issue.
RUN apk add --no-cache python3 build-base libtool autoconf automake
WORKDIR /app
# Since the framework code in /.wasp/build/server imports the user code in /src
# using relative imports, we must mirror the same directory structure in the
# Docker image.
COPY src ./src
COPY package.json .
COPY package-lock.json .
COPY server .wasp/build/server
COPY sdk .wasp/out/sdk
# Install npm packages, resulting in node_modules/.
RUN npm install && cd .wasp/build/server && npm install
COPY db/schema.prisma .wasp/build/db/
RUN cd .wasp/build/server && npx prisma generate --schema='../db/schema.prisma'
# Building the server should come after Prisma generation.
RUN cd .wasp/build/server && npm run bundle


# TODO: Use pm2?
# TODO: Use non-root user (node).
FROM base AS server-production
# In case they want to use python3 in their app.
RUN apk add --no-cache python3
ENV NODE_ENV production
WORKDIR /app
# Copying the top level 'node_modules' because it contains the Prisma packages
# necessary for migrating the database.
COPY --from=server-builder /app/node_modules ./node_modules
# Copying the SDK because 'validate-env.mjs' executes independent of the bundle
# and references the 'wasp' package.
COPY --from=server-builder /app/.wasp/out/sdk .wasp/out/sdk
# Copying 'server/node_modules' because 'validate-env.mjs' executes independent
# of the bundle and references the dotenv package.
COPY --from=server-builder /app/.wasp/build/server/node_modules .wasp/build/server/node_modules
COPY --from=server-builder /app/.wasp/build/server/bundle .wasp/build/server/bundle
COPY --from=server-builder /app/.wasp/build/server/package*.json .wasp/build/server/
COPY --from=server-builder /app/.wasp/build/server/scripts .wasp/build/server/scripts
COPY db/ .wasp/build/db/
EXPOSE ${PORT}
WORKDIR /app/.wasp/build/server
ENTRYPOINT ["npm", "run", "start-production"]


# Any user-defined Dockerfile contents will be appended below.
# NOTE: Why do we specify the alpine version here?
#   Because if we don't, we had situations where a different version was used
#   locally and on GitHub CI. This way we ensure the exact version is used,
#   and we also have control over updating it (instead of an update surprising us).
FROM node:18-bullseye AS node

Server running locally:

Why do you have that last FROM in your dockerfile?


I'm using Wasp, so it auto-generates the Dockerfile, and if I want custom configuration I can append it at the end. I thought the later Docker statements take priority over the earlier ones. I also tried deploying manually and modifying the generated Dockerfile directly, but that didn't work either.

I don't think that's how a Dockerfile works: each FROM starts a new stage of a multi-stage build, based on that image, and by default Docker builds the last stage in the file. So appending a bare FROM at the end means the final image is just that base image, without anything from the earlier stages.

I tried again manually with my own Dockerfile, with the bullseye Node version on top, and it builds correctly, but I still have the issue with the machine not starting.

Did you bind your host to 0.0.0.0 or [::]?

Sorry, where can I see that?

Your server usually has some place where it binds the host/port. It could also be an env variable, depending on the framework.

It's Next.js; the port is set in an env variable PORT=3001 on Fly.io. Does that seem correct?

By default the Node http server ends up on localhost; you need to configure it to bind to 0.0.0.0 or [::] if you want traffic from outside the machine (like Fly's proxy) to reach it.

Where can I configure that? I just assumed that Wasp (a framework I'm using) does that automatically, according to the docs.

Look at your code above: you're using http.createServer.
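
For anyone else hitting this, here is a minimal sketch of what that explicit bind looks like with Node's built-in http module. The file name, fallback port, and handler below are placeholders, not taken from the Wasp-generated server:

// server.js -- minimal sketch only; the real server in this thread is generated by Wasp.
const http = require("http");

// PORT here is the env variable the poster set on Fly (PORT=3001);
// the fallback value is just a placeholder for local runs.
const port = parseInt(process.env.PORT || "3001", 10);

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("ok\n");
});

// Passing "0.0.0.0" (or "::") as the host makes the server accept connections
// on all interfaces. If the host resolves to 127.0.0.1 / "localhost" instead,
// only loopback connections are accepted, so Fly's proxy can never reach the
// machine on 0.0.0.0:3001 and the reachability check keeps failing.
server.listen(port, "0.0.0.0", () => {
  console.log(`Server listening on 0.0.0.0:${port}`);
});

If your framework also reads the host from an env variable, check that it isn't defaulting to localhost in the production config.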
