TOML error or Build error?

Hello all,

INFO Preparing to run: `docker-entrypoint.sh serve -s dist -l 80` as root
INFO [fly api proxy] listening at /.fly/api
Error: Cannot find module '/app/serve'

Is it the serve NPM package? Not sure…

I’m encountering a deployment error when building images for my React-Express app using a GitHub Actions CI/CD pipeline.
The images build successfully and are pushed to Docker Hub,
but when attempting to deploy to fly.io, a build error occurs related to the client (the second image), which hasn’t started deploying yet.

I suspect the issue lies with the TOML file - the [processes] imageName.
Additionally, in case it’s a build issue, I can note that:

  • the npm serve package has been installed globally,
  • the command runs as the root user,
  • all files and locations have been validated through the build log.

the error:

Node.js v20.9.0
 INFO Main child exited normally with code: 1
 INFO Starting clean up.
 WARN could not unmount /rootfs: EINVAL: Invalid argument
[    1.783213] reboot: Restarting system
2024-10-30T13:41:51.854035368 [01JBETFRV3V83QRDVVN8ZE241T:main] Running Firecracker v1.7.0
 INFO Starting init (commit: 693c179a)...
 INFO Preparing to run: `docker-entrypoint.sh serve -s dist -l 80` as root
 INFO [fly api proxy] listening at /.fly/api
   Machine started in 1.012s
2024/10/30 13:41:52 INFO SSH listening listen_address=[fdaa:a:ac95:a7b:86:f79e:5bd1:2]:22 dns_server=[fdaa::3]:53
node:internal/modules/cjs/loader:1051
  throw err;
  ^
Error: Cannot find module '/app/serve'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1048:15)
    at Module._load (node:internal/modules/cjs/loader:901:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:83:12)
    at node:internal/main/run_main_module:23:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
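The `Cannot find module '/app/serve'` line suggests `serve` was not on the PATH when the machine started. My reading (an assumption, paraphrased from the official node image's docker-entrypoint.sh) is that the entrypoint falls back to running the arguments through `node` when the first one is not an executable, roughly:

```shell
#!/bin/sh
# Paraphrase (an assumption, based on the official node image's
# docker-entrypoint.sh) of its fallback: if the first argument starts
# with "-" or is not an executable on PATH, prepend "node".
set -- serve -s dist -l 80
if [ "${1#-}" != "$1" ] || [ -z "$(command -v "$1")" ]; then
  set -- node "$@"
fi
# With no `serve` on PATH, the command becomes `node serve -s dist -l 80`,
# and Node then tries to resolve ./serve (here /app/serve) as a module,
# which produces exactly this MODULE_NOT_FOUND error.
echo "$@"
```

That would explain why the error comes from Node's module loader even though `serve` is a CLI tool.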

this is the ./fly.toml


app = "x-realestate"
primary_region = "fra"


[[vm]]
  memory = "256mb"
  cpu_kind = "shared"
  cpus = 1

[env]
  NODE_ENV = "production"


[processes]
  server = "node --trace-warnings dist/index.js"  # server start command
  client = "serve -s dist -l 80"  # run static server

[[services]]
  processes = ["server"]
  internal_port = 8000
  protocol = "tcp"
  image = "registry.fly.io/x-realestate:server-latest"

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
  [[services.ports]]
    handlers = ["http"]
    port = 80

[[services]]
  processes = ["client"]
  internal_port = 80  # the static server listens on port 80
  protocol = "tcp"
  image = "registry.fly.io/x-realestate:client-latest"
  
  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
  [[services.ports]]
    handlers = ["http"]
    port = 80
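For what it’s worth (my reading of the fly.toml reference, so treat it as an assumption): there is no per-`[[services]]` `image` key; a Fly app runs a single image, chosen app-wide under `[build]`. A server-only fly.toml would then look roughly like:

```toml
# Hypothetical fly.toml for the server as its own app; the image is set
# once under [build] rather than inside a [[services]] entry.
app = "x-realestate-server"
primary_region = "fra"

[build]
  image = "registry.fly.io/x-realestate:server-latest"

[processes]
  server = "node --trace-warnings dist/index.js"

[[services]]
  processes = ["server"]
  internal_port = 8000
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80
```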

this is the ./client/Dockerfile


# Production Dockerfile for the client
FROM node:20.9.0 AS build_stage 

# Set the container runtime environment variables (used at build time)
ARG VITE_FIREBASE_API_KEY
ARG VITE_AUTH_DOMAIN
ARG VITE_PROJECT_ID
ARG VITE_STORAGE_BUCKET
ARG VITE_MESSAGING_SENDER_ID
ARG VITE_APP_ID
ARG VITE_NODE_ENV=production
ARG VITE_APP_API_ENDPOINT

ENV VITE_FIREBASE_API_KEY=${VITE_FIREBASE_API_KEY}
ENV VITE_AUTH_DOMAIN=${VITE_AUTH_DOMAIN}
ENV VITE_PROJECT_ID=${VITE_PROJECT_ID}
ENV VITE_STORAGE_BUCKET=${VITE_STORAGE_BUCKET}
ENV VITE_MESSAGING_SENDER_ID=${VITE_MESSAGING_SENDER_ID}
ENV VITE_APP_ID=${VITE_APP_ID}
ENV VITE_NODE_ENV=${VITE_NODE_ENV}
ENV VITE_APP_API_ENDPOINT=${VITE_APP_API_ENDPOINT}

WORKDIR /app

USER root


RUN npm cache clean --force && npm install -g serve --verbose

# Copy project sources
COPY ./package*.json ./
COPY ./tsconfig.json ./
COPY ./vite.config.ts ./
COPY . .

# Install dependencies and build (serve is already installed globally above)
RUN npm install --legacy-peer-deps \
    && npm run build --verbose \
    && echo "current location is: $(pwd)"


# -- END
# Runtime stage
FROM node:20.9.0 AS runtime_stage

WORKDIR /app
USER root

  
COPY ./package*.json ./
COPY ./tsconfig.json ./
COPY ./vite.config.ts ./

COPY --from=build_stage /app/dist /app/dist   
RUN ls /app/dist

# copy the build (dist) and install --only=production
RUN npm install --legacy-peer-deps --only=production \
    && npm install -g serve --verbose



# Expose the default port
EXPOSE 80


# Start the static server
# exec without sh
CMD ["serve", "-s", "dist", "-l", "80"]
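If PATH resolution turns out to be the problem at runtime, one alternative (my assumption, not something the thread confirms) is to invoke serve through npx, which ships with the node image and resolves the installed package itself:

```dockerfile
# Hypothetical alternative start command: npx locates the installed
# `serve` package without depending on the global bin directory being
# on PATH when the entrypoint runs.
CMD ["npx", "serve", "-s", "dist", "-l", "80"]
```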

this is the ./server/Dockerfile


# Production Dockerfile for the server
FROM node:20.9.0 AS build

# Set build-time arguments (available during build)
ARG MONGO_CONN
ARG DB_NAME
ARG JWT_SECRET
ARG ORIGIN
ARG PORT
ARG NODE_ENV=production
ARG SERVER_DOMAIN

# Set the runtime environment variables
ENV MONGO_CONN=${MONGO_CONN}
ENV DB_NAME=${DB_NAME}
ENV JWT_SECRET=${JWT_SECRET}
ENV ORIGIN=${ORIGIN}
ENV PORT=${PORT}
ENV NODE_ENV=${NODE_ENV}
ENV SERVER_DOMAIN=${SERVER_DOMAIN}

# Set the working directory
WORKDIR /app

COPY ./package*.json ./
# Copy the rest of the project
COPY . .

RUN npm install --omit=dev \
    && npm run build --verbose \
    && ls -alh /app/dist



# Expose port
EXPOSE 8000

# # Healthcheck for the server
# HEALTHCHECK CMD curl --fail http://localhost:${PORT}/health || exit 1

# Start the application

CMD ["node", "--trace-warnings", "dist/index.js"]


Hi, I don’t think that’s how multiple processes and Dockerfiles work on Fly. A single fly.toml shares the same Dockerfile.

Hi khuezy, and thank you for the reply.

Do you mean having multiple TOML files, i.e. one for the client and one for the server? I have already tried that.
It seems the TOML file has to be in the root in order to be recognized. Perhaps there is a way to configure multiple TOML files, but I couldn’t find any documentation about it.

Can you describe (in words) what you are trying to do?

Typically with vite, you run two processes in development, but only one in production. See Building for Production | Vite . Here is what a typical dockerfile for a vite project would look like: dockerfile-node/test/frameworks/vite/Dockerfile at main · fly-apps/dockerfile-node · GitHub (this uses nginx, but you can use a different server).
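The single-image layout described there (build once, serve the static output) can be sketched as a multi-stage Dockerfile; names and paths here are illustrative, not taken from the thread:

```dockerfile
# Sketch of a single-image Vite production build, in the spirit of the
# linked fly-apps example (uses nginx; any static server would do)
FROM node:20.9.0 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build   # emits static assets into /app/dist

FROM nginx:stable AS runtime
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
# nginx's default CMD serves the copied static files
```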

Hello rubys, and thank you for replying,

“Can you describe (in words) what you are trying to do?”

From a GitHub Actions workflow, I am pushing 2 images to Docker Hub; it’s a client-and-server app that I am trying to deploy as 1 website on fly.io.
I am using the fly.toml file to map the images to the machines.

If they are truly two separate apps, have you considered deploying each to a different fly app?

I’m still trying to understand your architecture:

  • A typical vite app with a backend has a build step that converts the client app to static files, and the server app would serve these assets in response to requests, while possibly responding to other requests with dynamic responses. In such an application, there would be one fly app and one Dockerfile.
  • If your architecture is one where browser clients access one server which makes requests to another server, you would want to have two Dockerfiles and two apps, where likely the backend server is only available on a private network, either directly via an internal address or via flycast.

Which of these more closely matches your intended architecture? If neither, can you describe in more detail your architecture?

have you considered deploying each to a different fly app?

I prefer to have the same origin if possible.

This is the architecture: Vite with React.js in one container, and Express in a second container.
Express connects to an external database.

If it helps, this is the docker-compose:

version: "3.8"

services:
  server:
    container_name: d-compose-server-prod
    working_dir: /app
    image: ${DOCKER_HUB_PATH}container-server-prod:latest
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    environment:
      MONGO_CONN: ${MONGO_CONN}
      DB_NAME: ${DB_NAME}
      JWT_SECRET: ${JWT_SECRET}
      PORT: 8000
      ORIGIN: http://client
      NODE_ENV: production
      SERVER_DOMAIN: http://server:8000
    volumes:
      - /app/node_modules
    healthcheck:
      test: ["CMD", "curl", "-f", "http://server:8000/health"]
      interval: 10s
      retries: 5
      start_period: 30s
      timeout: 5s
    restart: on-failure:3
    networks:
      - app-network

  client:
    container_name: d-compose-client-prod
    working_dir: /app
    image: ${DOCKER_HUB_PATH}container-client-prod:latest
    build:
      context: ./client
      dockerfile: Dockerfile
    ports:
      - "80:80"
    environment:
      VITE_FIREBASE_API_KEY: ${VITE_FIREBASE_API_KEY}
      VITE_AUTH_DOMAIN: ${VITE_AUTH_DOMAIN}
      VITE_PROJECT_ID: ${VITE_PROJECT_ID}
      VITE_STORAGE_BUCKET: ${VITE_STORAGE_BUCKET}
      VITE_MESSAGING_SENDER_ID: ${VITE_MESSAGING_SENDER_ID}
      VITE_APP_ID: ${VITE_APP_ID}
      VITE_NODE_ENV: production
      VITE_APP_API_ENDPOINT: ${VITE_APP_API_ENDPOINT}
    volumes:
      - /app/node_modules
    depends_on:
      server:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://client || exit 1"]
      interval: 10s
      retries: 5
      start_period: 30s
      timeout: 5s
    networks:
      - app-network


networks:
  app-network:
    driver: bridge

For fly.io, that would mean two fly apps. if the client container reverse proxied requests to the destination, everything would be served from the same origin.

It looks like this could be accomplished by setting VITE_APP_API_ENDPOINT to http://server.internal, replacing server with the name of the fly application for the server.
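A reverse-proxy setup like the one described could look roughly like this nginx snippet (hypothetical; `x-realestate-server` stands in for the server app’s name). One caveat worth hedging: `.internal` names resolve only inside Fly’s private network, so the proxy must use Fly’s internal DNS at runtime:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;

    location /api/ {
        # fdaa::3 is Fly's internal DNS server; using a variable with
        # proxy_pass forces nginx to resolve the name at request time
        resolver [fdaa::3];
        set $backend http://x-realestate-server.internal:8000;
        proxy_pass $backend;
    }

    location / {
        try_files $uri /index.html;  # SPA fallback for client-side routing
    }
}
```

With this in place, the browser only ever talks to the client app’s origin, and `/api/` requests are forwarded privately to the server app.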

OK… thank you for the enlightening response… I will have to take a look and try it out.

So can we conclude that on fly.io, each app can have at most 1 container - right?

Yes, each app is one image (we don’t actually run containers; we run the image directly in a VM: https://www.youtube.com/watch?v=7iypMRKniPU )

We are actively exploring what it would take to run multiple images on a single machine (and therefore, app), but it is just that – an exploration – at this point, and I don’t have any schedules or outlook to share.
