Error: failed to launch VM: invalid config.guest.memory_mb, minimum required 512 MiB

I wanted to test deploying Deno to Fly.io and ran into this minimum-required-memory error. Where is this limit configured? I’ve never encountered it with Node.js projects, let alone with a bare-bones project like this one. Is Fly.io detecting that Deno needs that much more memory than Node.js?

Creating green machines

Rolling back failed deployment
Error: failed to launch VM: invalid config.guest.memory_mb, minimum required 512 MiB

fly.toml


app = 'deno-test'
primary_region = 'nrt'
swap_size_mb = 512

[build]

[deploy]
  strategy = 'bluegreen'

[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = 'suspend'
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']

  [[http_service.checks]]
    interval = '30s'
    timeout = '5s'
    grace_period = '10s'
    method = 'GET'
    path = '/resource/healthcheck'

[[vm]]
  size = 'shared-cpu-1x'
  memory_mb = 256
  cpus = 2

Dockerfile

FROM denoland/deno:2.1.4 AS base

FROM base AS build
COPY . /app/
WORKDIR /app
RUN deno install
RUN deno task build

FROM base
COPY --from=build /app /app
WORKDIR /app
CMD ["deno", "task", "start"]

Hi… This is a multiple of the number of CPUs:

Minimum memory is 256 MB per shared CPU, or 2,048 MB per performance CPU.

Since cpus = 2 was specified in the [[vm]] stanza, the minimum came out to 2 × 256 MB = 512 MB.
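
For example, either of these [[vm]] stanzas should clear the check (a quick sketch, not tested; pick one or the other):

[[vm]]
  # Option 1: keep both shared CPUs and meet the 2 × 256 MB minimum
  size = 'shared-cpu-1x'
  memory_mb = 512
  cpus = 2

[[vm]]
  # Option 2: keep memory_mb = 256 by staying on a single shared CPU
  size = 'shared-cpu-1x'
  memory_mb = 256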

(I would like more flexibility in this as well, actually. In particular, a performance-class Machine with only 256MB would be handy in the age of CPU throttling, for things that just need to wake up once per day, do a bunch of SHA256 hashing within a super-locked-down context, and then go back to sleep.)

Hope this helps!


Thank you so much!
I came across the cpus property only recently and incorrectly assumed it would have the same effect as fly scale count 2. I gather it’s actually used to attach more CPUs to the same Machine, which makes much more sense.

Yeah, I was actually looking for ways to scale worker Machines based on queue load. As far as I know I have two options: use custom metrics and rely on fly scale, or use the Fly Machines API directly and manage the Machines manually.
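
Roughly what I had in mind for the second option, as a Deno-flavoured sketch (not tested; the app name, thresholds, and getQueueDepth are placeholders, and it assumes the public Machines API at api.machines.dev plus the fly_process_group metadata that fly deploy sets on Machines):

// Sketch: start stopped worker Machines when the queue backs up, and
// stop them again when it drains. Assumes FLY_API_TOKEN is set and the
// worker Machines carry fly_process_group = "worker" metadata.
const APP = "my-app"; // placeholder app name
const API = `https://api.machines.dev/v1/apps/${APP}/machines`;
const headers = {
  Authorization: `Bearer ${Deno.env.get("FLY_API_TOKEN")}`,
  "Content-Type": "application/json",
};

// Placeholder: swap in a real check against your queue backend.
async function getQueueDepth(): Promise<number> {
  return 0;
}

type Machine = {
  id: string;
  state: string;
  config?: { metadata?: Record<string, string> };
};

async function listWorkers(): Promise<Machine[]> {
  const res = await fetch(API, { headers });
  const machines: Machine[] = await res.json();
  // Only touch Machines in the 'worker' process group.
  return machines.filter(
    (m) => m.config?.metadata?.["fly_process_group"] === "worker",
  );
}

async function reconcile() {
  const depth = await getQueueDepth();
  const workers = await listWorkers();
  if (depth > 100) {
    // Queue is backed up: wake up any stopped workers.
    for (const m of workers.filter((m) => m.state === "stopped")) {
      await fetch(`${API}/${m.id}/start`, { method: "POST", headers });
    }
  } else if (depth === 0) {
    // Queue is empty: stop running workers to save money.
    for (const m of workers.filter((m) => m.state === "started")) {
      await fetch(`${API}/${m.id}/stop`, { method: "POST", headers });
    }
  }
}

await reconcile();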

I agree with you; more flexibility would be great.

[processes]
  app = 'npm run start'
  worker = 'npm run start:worker'

[[vm]]
  size = 'shared-cpu-1x'
  memory_mb = 512
  processes = ['app']

[[vm]]
  size = 'shared-cpu-1x'
  memory_mb = 256
  processes = ['worker']
This is what I use now.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.