Issues with deploy.release_command

I may be doing something wrong, but no matter what I put in fly.toml for deploy.release_command, the command's output never shows up in the logs.

Info:

fly 0.296
app name: summer-paper-8359

config:

app = "summer-paper-8359"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[env]

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

[deploy]
  release_command = "ls"

fly logs output:

2022-02-10T17:59:41Z app[f6c5f06a] iad [info]Starting init (commit: 0c50bff)...
2022-02-10T17:59:41Z app[f6c5f06a] iad [info]Preparing to run: `/bin/sh -lc ls` as root
2022-02-10T17:59:41Z app[f6c5f06a] iad [info]2022/02/10 17:59:41 listening on [fdaa:0:4367:a7b:21e0:f6c5:f06a:2]:22 (DNS: [fdaa::3]:53)
2022-02-10T17:59:42Z app[f6c5f06a] iad [info]Main child exited normally with code: 0
2022-02-10T17:59:42Z app[f6c5f06a] iad [info]Starting clean up.

Note that I put a simple ls command there, but no matter what I use, I get no output.

Am I missing something?

(Separately, I’ll be reporting an issue with fly ssh console --command <foo> not respecting the WORKDIR in the Dockerfile. But I don’t think the two are related, since ls is globally available.)

Thanks!

Can you post your Dockerfile?

Sure! It’s a two-stage build: the first stage fetches everything with nix, then everything gets copied into a clean image.

Note: running “ls” works just fine if I ssh into the VM.

FROM nixos/nix as builder

RUN mkdir /app
WORKDIR /app

COPY shell.nix shell.nix

RUN mkdir -p /output/store
RUN nix-env -f shell.nix -i -A buildInputs
RUN nix-env -f shell.nix -i -A dependencies --profile /output/profile
RUN cp -va $(nix-store -qR /output/profile) /output/store

COPY requirements.txt requirements.txt

RUN virtualenv .venv && .venv/bin/pip install -r requirements.txt && mkdir .venv/static

COPY package.json .
COPY yarn.lock .

RUN yarn

# RUN rm -rf node_modules/reactivated/*
# COPY node_modules/reactivated node_modules/reactivated

COPY .babelrc.json .
COPY manage.py .
COPY server server
# COPY static static
COPY client client
COPY tsconfig.json .

RUN .venv/bin/python manage.py generate_client_assets
RUN .venv/bin/python manage.py build
RUN .venv/bin/python manage.py collectstatic --no-input
RUN rm static/dist/*.map


FROM alpine

# The Nix postgresql package is very heavy and includes the full DB, so install just the client from apk.
RUN apk add postgresql-client

COPY --from=builder /output/store /nix/store
COPY --from=builder /output/profile/ /usr/local/

RUN mkdir /app
WORKDIR /app

ENV NODE_ENV production

COPY requirements.txt requirements.txt
RUN virtualenv .venv && .venv/bin/pip install -r requirements.txt && mkdir .venv/static

COPY manage.py .
COPY server server

RUN mkdir -p node_modules/.bin/
COPY --from=builder /app/node_modules/.bin/renderer.js node_modules/.bin/
COPY --from=builder /app/node_modules/.bin/renderer.js.map node_modules/.bin/
COPY --from=builder /app/static static

ENV PYTHONUNBUFFERED 1
ENV PATH="/app/.venv/bin:$PATH"
ENV ENVIRONMENT=production
RUN rm server/settings/__init__.py && echo 'export DJANGO_SETTINGS_MODULE=server.settings.$ENVIRONMENT' > /etc/profile
ENTRYPOINT ["/bin/sh", "-lc"]
# SSH commands are weird with fly for now, so we use this dirty script at the root level.
RUN echo "source /etc/profile; cd /app; python manage.py migrate" > /migrate.sh && chmod +x /migrate.sh

CMD ["gunicorn server.wsgi --forwarded-allow-ips='*' --bind 0.0.0.0:8080 --workers 1 --preload --timeout 90"]

@Joshua: any ideas?

Nothing stands out to me here, except perhaps that manually setting the ENTRYPOINT could be related.
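
For what it's worth, your log line shows the release command being wrapped by that entrypoint (/bin/sh -lc ls). If you want to rule it out, one untested idea is to drop the custom ENTRYPOINT and fold the login-shell wrapper into CMD instead, so the release command can run unwrapped:

# Untested sketch: remove the custom ENTRYPOINT so the release command
# runs on its own, and keep the login-shell behavior for the app via CMD.
# ENTRYPOINT ["/bin/sh", "-lc"]
CMD ["/bin/sh", "-lc", "gunicorn server.wsgi --forwarded-allow-ips='*' --bind 0.0.0.0:8080 --workers 1 --preload --timeout 90"]

The app container should behave the same (CMD still goes through a login shell, so the /etc/profile trick is preserved), but ls would no longer pass through sh -lc first.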