Deployment failed using LiteFS, start.sh, SQLite, Cachified, etc. on Remix

Hi everyone,

I have an issue with my deployment. Here is some information.

GitHub Actions log

WARNING The app is not listening on the expected address and will not be reachable by fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
  - 0.0.0.0:8080
Found these processes inside the machine with open listening sockets:
  PROCESS                                         | ADDRESSES                            
--------------------------------------------------*--------------------------------------
  /.fly/hallpass                                  | [fdaa:1:c269:a7b:81:3124:3f21:2]:22  
  node /myapp/node_modules/.bin/remix-serve build | [::]:8081                            

Error: timeout reached waiting for healthchecks to pass for machine 3287dd0a0d9785 failed to get VM 3287dd0a0d9785: Get "https://api.machines.dev/v1/apps/hanihusam-com/machines/3287dd0a0d9785": net/http: request canceled

Fly logs

2023-06-27T12:59:30.802 app[3287dd0a0d9785] sin [info] All migrations have been successfully applied.
2023-06-27T12:59:30.832 app[3287dd0a0d9785] sin [info] npm notice
2023-06-27T12:59:30.832 app[3287dd0a0d9785] sin [info] npm notice New minor version of npm available! 9.5.1 -> 9.7.2
2023-06-27T12:59:30.832 app[3287dd0a0d9785] sin [info] npm notice Changelog: <https://github.com/npm/cli/releases/tag/v9.7.2>
2023-06-27T12:59:30.832 app[3287dd0a0d9785] sin [info] npm notice Run `npm install -g npm@9.7.2` to update!
2023-06-27T12:59:30.832 app[3287dd0a0d9785] sin [info] npm notice
2023-06-27T12:59:31.261 app[3287dd0a0d9785] sin [info] > hanihusam.com@1.0.0 start
2023-06-27T12:59:31.261 app[3287dd0a0d9785] sin [info] > remix-serve build
2023-06-27T12:59:32.370 app[3287dd0a0d9785] sin [info] Remix App Server started at http://localhost:8081 (http://172.19.132.178:8081)
2023-06-27T12:59:33.249 app[3287dd0a0d9785] sin [info] WARN Reaped child process with pid: 319, exit code: 0

Dockerfile

# base node image
FROM node:18-bullseye-slim as base

# set for base and all layers that inherit from it
ENV NODE_ENV production

# Install openssl for Prisma
RUN apt-get update && apt-get install -y fuse3 openssl sqlite3 ca-certificates

# Install all node_modules, including dev dependencies
FROM base as deps

WORKDIR /myapp

ADD package.json .npmrc package-lock.json ./
RUN npm install --include=dev

# Setup production node_modules
FROM base as production-deps

WORKDIR /myapp

COPY --from=deps /myapp/node_modules /myapp/node_modules
ADD package.json .npmrc package-lock.json ./
RUN npm prune --omit=dev

# Build the app
FROM base as build

WORKDIR /myapp

COPY --from=deps /myapp/node_modules /myapp/node_modules

ADD prisma .
RUN npx prisma generate

ADD . .
RUN npm run build

# Finally, build the production image with minimal footprint
FROM base

ENV FLY="true"
ENV LITEFS_DIR="/litefs/data"
ENV DATABASE_FILENAME="sqlite.db"
ENV DATABASE_PATH="$LITEFS_DIR/$DATABASE_FILENAME"
ENV DATABASE_URL="file:$DATABASE_PATH"
ENV CACHE_DATABASE_FILENAME="cache.db"
ENV CACHE_DATABASE_PATH="/$LITEFS_DIR/$CACHE_DATABASE_FILENAME"
ENV INTERNAL_PORT="8080"
ENV PORT="8081"
ENV NODE_ENV="production"

# Make SQLite CLI accessible
RUN echo "#!/bin/sh\nset -x\nsqlite3 \$DATABASE_URL" > /usr/local/bin/database-cli && chmod +x /usr/local/bin/database-cli
RUN echo "#!/bin/sh\nset -x\nsqlite3 \$CACHE_DATABASE_PATH" > /usr/local/bin/cache-database-cli && chmod +x /usr/local/bin/cache-database-cli

WORKDIR /myapp

COPY --from=production-deps /myapp/node_modules /myapp/node_modules
COPY --from=build /myapp/node_modules/.prisma /myapp/node_modules/.prisma
COPY --from=build /myapp/build /myapp/build
COPY --from=build /myapp/public /myapp/public
COPY --from=build /myapp/package.json /myapp/package.json
COPY --from=build /myapp/start.sh /myapp/start.sh
COPY --from=build /myapp/prisma /myapp/prisma

# prepare for litefs
COPY --from=flyio/litefs:0.4.0 /usr/local/bin/litefs /usr/local/bin/litefs
ADD other/litefs.yml /etc/litefs.yml
RUN mkdir -p /data ${LITEFS_DIR}

ADD . .

ENTRYPOINT [ "./start.sh" ]
CMD ["litefs", "mount"]

fly.toml

app = "hanihusam-com"
primary_region = "sin"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[experimental]
  allowed_public_ports = []
  auto_rollback = true
  cmd = "start.sh"
  entrypoint = "sh"

[mounts]
  source = "data"
  destination = "/data"

[[services]]
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80
    force_https = true

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

  [[services.http_checks]]
    interval = "10s"
    grace_period = "5s"
    method = "get"
    path = "/healthcheck"
    protocol = "http"
    timeout = "2s"
    tls_skip_verify = false
    headers = { }

litefs.yml

# Documented example: https://github.com/superfly/litefs/blob/dec5a7353292068b830001bd2df4830e646f6a2f/cmd/litefs/etc/litefs.yml
fuse:
  # Required. This is the mount directory that applications will
  # use to access their SQLite databases.
  dir: "${LITEFS_DIR}"

data:
  # Path to internal data storage.
  dir: "/data/litefs"

proxy:
  # matches the internal_port in fly.toml
  addr: ":${INTERNAL_PORT}"
  target: "localhost:${PORT}"
  db: "${DATABASE_FILENAME}"

# The lease section specifies how the cluster will be managed. We're using the
# "consul" lease type so that our application can dynamically change the primary.
#
# These environment variables will be available in your Fly.io application.
lease:
  type: "consul"
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/${FLY_APP_NAME}"

start.sh

#!/bin/sh -ex

# This file is how Fly starts the server (configured in fly.toml). Before starting
# the server though, we need to run any prisma migrations that haven't yet been
# run, which is why this file exists in the first place.
# Learn more: https://community.fly.io/t/sqlite-not-getting-setup-properly/4386

# allocate swap space
fallocate -l 512M /swapfile               # create a 512 MB swap file
chmod 0600 /swapfile
mkswap /swapfile
echo 10 > /proc/sys/vm/swappiness         # low swappiness: prefer RAM, swap only under pressure
swapon /swapfile
echo 1 > /proc/sys/vm/overcommit_memory   # 1 = always allow memory overcommit

npx prisma migrate deploy
npm run start

Could anyone help me figure out why this happens? If there is any information you need, just ping me.
Thanks

GitHub repo: https://github.com/hanihusam/hanihusam.com

Hi @hanihusam

If you have both ENTRYPOINT and CMD, they get combined into a single command. What actually gets executed on machine start is:

./start.sh litefs mount

So LiteFS never actually runs and nothing is listening on port 8080. It might be better to use the LiteFS process supervisor and let it start Remix for you: Running as a supervisor
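
For illustration, here's a sketch of how Docker combines the two instructions from the Dockerfile above (CMD is appended to ENTRYPOINT as arguments):

# ENTRYPOINT [ "./start.sh" ] plus CMD ["litefs", "mount"] means
# the machine effectively runs:
./start.sh litefs mount
# start.sh ignores its arguments, runs the migrations, and starts
# remix-serve on port 8081, so `litefs mount` never executes and
# nothing ends up listening on the expected port 8080.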

Oh, I see. Thanks for the information.

I tried changing my files to this:

litefs.yml

# Documented example: https://github.com/superfly/litefs/blob/dec5a7353292068b830001bd2df4830e646f6a2f/cmd/litefs/etc/litefs.yml
fuse:
  # Required. This is the mount directory that applications will
  # use to access their SQLite databases.
  dir: "${LITEFS_DIR}"

data:
  # Path to internal data storage.
  dir: "/data/litefs"

proxy:
  # matches the internal_port in fly.toml
  addr: ":${INTERNAL_PORT}"
  target: "localhost:${PORT}"
  db: "${DATABASE_FILENAME}"

# The lease section specifies how the cluster will be managed. We're using the
# "consul" lease type so that our application can dynamically change the primary.
#
# These environment variables will be available in your Fly.io application.
lease:
  type: "consul"
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/${FLY_APP_NAME}"

exec:
  - cmd: ./start.sh

fly.toml

app = "hanihusam-com"
primary_region = "sin"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[mounts]
  source = "data"
  destination = "/data"

[[services]]
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80
    force_https = true

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

  [[services.http_checks]]
    interval = "10s"
    grace_period = "5s"
    method = "get"
    path = "/healthcheck"
    protocol = "http"
    timeout = "2s"
    tls_skip_verify = false
    headers = { }

Dockerfile

...
# remove the entrypoint
CMD ["litefs", "mount"]
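
With the ENTRYPOINT removed, CMD becomes the full command. A sketch of the intended startup sequence, based on the configs above (the exact ordering is my reading of LiteFS's supervisor mode, not something stated in this thread):

# the machine now runs:
litefs mount
# which mounts the FUSE filesystem at $LITEFS_DIR, serves the LiteFS proxy
# on :8080 (proxy.addr, matching internal_port in fly.toml) forwarding to
# the app on :8081, and then launches ./start.sh from the exec section.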

Now I get this error instead:

ERROR: cannot exec: background cmd: cannot start exec command: fork/exec ./start.sh: permission denied

What am I missing?

It looks like start.sh is missing executable permissions.

What should I add to give it permission?

I think just doing chmod +x start.sh is enough.

Where should I put that command? I've tried running it over fly ssh console and then deploying again, but I still get the same error.

Just run it on your local machine once. It sets the executable permission on the file, which is retained when the file is copied into the Docker image.
If you are on Windows and can't do this on your local machine, you can add RUN chmod +x start.sh to the Dockerfile to set the executable permission during build.
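
A minimal sketch, assuming the repo is tracked with git (git records the executable bit, and Docker's COPY/ADD preserves it in the image):

# run once on your local machine, then commit:
chmod +x start.sh
git add start.sh
# on Windows, you can set the bit in git directly instead:
git update-index --chmod=+x start.sh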

Thanks! It works.

However, I am still facing another error, this time in my healthcheck:

The table `main.ContentMeta` does not exist in the current database.

Did I get the order wrong? As far as I know, prisma migrate deploy runs before the server starts (see start.sh above).

The order looks correct to me. Looking at the logs, the migrations seem to have been successfully applied.

I don’t have any experience with Remix, but this line looks suspicious to me:

Datasource “db”: SQLite database “sqlite.db” at “file:./sqlite.db?connection_limit=1”

Shouldn’t the database be under /litefs/data?
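
For reference, the Dockerfile above sets DATABASE_URL="file:/litefs/data/sqlite.db", so a report of "file:./sqlite.db" suggests the variable is being set or overridden somewhere else at runtime (a stray .env file would be one possibility; that's a guess, not confirmed from the repo). A quick check on the machine:

# inside `fly ssh console`:
echo "$DATABASE_URL"   # expected: file:/litefs/data/sqlite.db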

You’re right!
I fixed the env var for the database and it works now. Thanks a lot!
