Docker always restarts

I’m deploying a Node project to Fly.io.
I’m using a Dockerfile with this config:

# use slim, not alpine: the alpine ones are smaller but sometimes have issues with dns resolving
FROM node:current-slim

# it's recommended to run an app as a non-root user. Handily this image comes with a 'node' user
USER node
RUN mkdir -p /home/node/app
WORKDIR /home/node/app

COPY --chown=node:node package.json .
COPY --chown=node:node package-lock.json .

RUN npm ci --only=production

COPY --chown=node:node . .

ENV NODE_ENV="production"

CMD [ "node", "src/main.js" ]

But the container always restarts like this, with no clue as to why.

What address is your app listening on? It needs to listen on all addresses like 0.0.0.0:port or [::]:port

Can you share your fly.toml?

This looks like a health check is failing.

This is a Fastify project; I’m listening on :: based on the suggestion in Cannot access web (Fastify) - #2 by greg.

Here’s my fly.toml:

# fly.toml file generated for fragrant-sky-1498 on 2022-07-01T01:11:23+08:00

app = "fragrant-sky-1498"

[env]
  LOG_LEVEL = "error"
  PORT = "8080"

[[services]]
  internal_port = 8080
  protocol = "tcp"
  [services.concurrency]
    hard_limit = 50
    soft_limit = 25

  [[services.http_checks]]
    grace_period = "1s"
    interval = 5000
    method = "get"
    path = "/"
    protocol = "http"
    restart_limit = 5
    timeout = 2000
    tls_skip_verify = true

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = 5000
    timeout = 2000

Can you enable more logs from your app? To see incoming requests, for example.

I can confirm this is the health check failing.

Hi, let me know if I can help too. I mean that app should work. It did, anyway, when I last deployed it.

I thought this was resolved so not sure what has since broken :thinking:

Hi, yes, it works.
But then I added a simple register:

fastify.register(healthcheck, {
  prefix: '/api'
});

And then the container restarts frequently.
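For reference, a plugin of the kind being registered might look like the hypothetical sketch below (the actual `healthcheck` module's route paths aren't shown in the thread). Note that with `{ prefix: '/api' }`, a route like this would be served at /api/health, not at /, which is where the fly.toml http check points.

```javascript
// Hypothetical sketch: the real `healthcheck` plugin's routes may differ.
// With { prefix: '/api' } this route would be served at /api/health.
async function healthcheck (fastify, opts) {
  fastify.get('/health', async () => ({ status: 'ok' }));
}

module.exports = healthcheck;
```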

This is how I’m printing requests:

fastify.addHook('onRequest', (request, reply, done) => {
  console.log("Request : ", request.body)
  done()
})

I would like to send you the entire request, but it looks too long. Are there any specific things you want to see?

Request method + URL would be good to see.

Do you have any way to see response status codes for the requests?

And further to @jerome’s answer: is your app using v4 code, but based on my v3 sample app? :thinking: I haven’t tried Fastify v4 yet, but I’d certainly suspect they’ve changed things in v4. So make sure those functions/routes are valid, as it sounds like adding one causes the app to crash (and hence the health check would indeed fail).

It seems like it always checks the base URL for the health check, so we need to provide a route for it that returns 200, don’t we?

I’m also using Fastify here on Fly. In general I just deploy without a Dockerfile and the Buildpack takes care of everything.

When starting Fastify you need to use the process.env.PORT and process.env.HOST env vars. This is almost standard practice when running Node.

await fastify.listen(PORT, HOST);
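Those variables can be read with local-development fallbacks, e.g. (a sketch; the default values here are assumptions):

```javascript
// Sketch: fall back to sensible defaults when the env vars are unset,
// e.g. when running locally outside Fly.
const PORT = Number(process.env.PORT || 8080);
const HOST = process.env.HOST || '0.0.0.0';
```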

If you want to use Alpine, I’m using this Dockerfile in production. You could try it, just to rule that out.

FROM node:18-alpine3.15

USER root

WORKDIR /usr/src/app

COPY package.json .
COPY package-lock.json .
RUN npm i --production

COPY . .

ENV NODE_ENV production
ENV PORT 8080
ENV HOST 0.0.0.0

CMD ["node", "index.js"]

Edit:

If you’re using Fastify v4 I think the listen method has changed to:

fastify.listen({port, host}, function (err, address) {
	if (err) {
		fastify.log.error(err);
		process.exit(1);
	}
});

If you want a different path for your health check, you can modify it in your fly.toml. Right now it’s set to path = "/".

If you don’t want to change it, you do need something to respond with a 200 status code at path /.
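For example, a sketch of an adjusted check; the /api/health path is an assumption, so match it to whatever route your healthcheck plugin actually registers:

```toml
  [[services.http_checks]]
    grace_period = "5s"    # give the app a bit more time to boot before checking
    interval = 5000
    method = "get"
    path = "/api/health"   # assumed route; use your plugin's actual path
    protocol = "http"
    timeout = 2000
```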
