Based on Docker stats, my frontend hits at most 90 MB at build, yet my machine with a 2048 MB memory limit crashes on it

Hey there!

I’m trying to deploy my Nuxt frontend on Fly.

I keep getting this very same message every time:

2025-01-12T22:35:05.470 app[7843ed4a34d028] cdg [info] [ 96.430391] Out of memory: Killed process 648 (node) total-vm:74445100kB, anon-rss:383400kB, file-rss:716kB, shmem-rss:0kB, UID:0 pgtables:1304kB oom_score_adj:0
2025-01-12T22:35:06.350 app[7843ed4a34d028] cdg [info] /bin/bash: line 1: 648 Killed nuxt build

Out of memory

The message says total VM: 74.4 GB (so I’m not sure what it means here; I guess it’s the total VM disk?)

Memory usage: 384 MB (above what I see on my own computer, though not by much, but still far below the 2048 MB limit; I might be misunderstanding something, which looks to be the case)
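
For scale, converting the figures in that log line (total-vm is the virtual address space the process had mapped, not disk, while anon-rss is what was actually resident when the OOM killer fired):

# rough unit conversion of the numbers in the OOM line above
echo "total-vm: $(( 74445100 / 1000 / 1000 )) GB"   # ~74 GB of mapped address space, mostly reserved rather than used
echo "anon-rss: $(( 383400 / 1000 )) MB"            # ~383 MB actually resident when the process was killed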

I tracked the memory usage on my Mac Studio, in Docker, with:

docker stats frontend --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"

At peak it never goes above roughly 90 MB.

Here is my Dockerfile:

#============ BASE IMAGE ============
FROM oven/bun:1

#============ WORKING DIRECTORY ============
WORKDIR /frontend

#============ NODE ENV ===========
RUN if [ "$NODE_ENV" = "production" ]; then \
    echo "\033[32m╔══════════════════════════════╗\033[0m" && \
    echo "\033[32m║      🚀 PRODUCTION 🚀       ║\033[0m" && \
    echo "\033[32m╚══════════════════════════════╝\033[0m"; \
    else \
    echo "\033[35m╔══════════════════════════════╗\033[0m" && \
    echo "\033[35m║      🎮 PLAYGROUND 🎮       ║\033[0m" && \
    echo "\033[35m╚══════════════════════════════╝\033[0m"; \
    fi

#============ SYSTEM PACKAGES ============
RUN apt-get update && apt-get install -y \
    python3 \
    make \
    g++ \
    curl \
    procps \
    sysstat \
    bc \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean \
    && echo 'while true; do mem=$(free -m | awk "/Mem:/ {printf \"%.1f%%\", \$3/\$2*100}"); cpu=$(top -bn1 | grep "Cpu(s)" | awk "{printf \"%.1f%%\", \$2}"); tput cup $(tput lines) $(tput cols); printf "\033[31m\033[2D MEM:$mem CPU:$cpu\033[0m"; sleep 1; done' > /usr/local/bin/monitor \
    && chmod +x /usr/local/bin/monitor

#============ APPLICATION SETUP ============
COPY package*.json bun.lockb ./
RUN bun install --frozen-lockfile
COPY . .

#============ ENVIRONMENT SETUP ============
ENV PYTHONUNBUFFERED=1
ENV HOST=${HOST_FRONTEND}
ARG NODE_ENV
ENV NODE_ENV=${NODE_ENV}

#============ NETWORK ============
EXPOSE ${PORT_FRONTEND}

#============ BUILD ============
RUN monitor & \
    if [ "$NODE_ENV" = "production" ]; then \
    echo "Memory usage before build:" && ps -o pid,rss,command ax | grep bun && \
    NODE_ENV=production bun run build && \
    echo "Memory usage after build:" && ps -o pid,rss,command ax | grep bun; \
    else \
    echo "Memory usage before build:" && ps -o pid,rss,command ax | grep bun && \
    bun run build && \
    echo "Memory usage after build:" && ps -o pid,rss,command ax | grep bun; \
    fi

#============ RUNTIME ============
CMD if [ "$NODE_ENV" = "production" ]; then \
    echo "Memory usage before start:" && ps -o pid,rss,command ax | grep bun && \
    echo "Starting Nuxt production server..." && \
    ls -la && \
    echo "Content of .output directory:" && \
    ls -la .output/ || echo ".output directory not found" && \
    echo "Content of .output/server directory:" && \
    ls -la .output/server/ || echo ".output/server directory not found" && \
    bun run start; \
    else \
    echo "Starting Nuxt development server..." && \
    NITRO_HOST=${NITRO_HOST} NITRO_PORT=${NITRO_PORT} NODE_ENV=${NODE_ENV} HOST=${HOST} PORT=${PORT_FRONTEND} bun run dev; \
    fi
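
A side note on the local measurement: in this Dockerfile the bun run build happens in a RUN step, i.e. during docker build, so docker stats on the running container doesn’t necessarily capture that peak. A rough way to reproduce it is to re-run the build inside a memory-capped container (a sketch; "frontend" here is a placeholder for whatever the image is tagged):

# re-run the Nuxt build inside a container capped at the same 2 GB the Fly machine has
docker run --rm --memory=2g --name frontend-build frontend bun run build
# in another terminal, watch it live
docker stats frontend-build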

Here is my fly.toml:

app = "***************"
name = "***************"
primary_region = "cdg"

[build]
  dockerfile = "Dockerfile"

[env]
  HOST = "0.0.0.0"
  HOST_FRONTEND = "0.0.0.0"
  NODE_ENV = "production"
  PORT_FRONTEND = "8000"
  NITRO_HOST = "0.0.0.0"
  NITRO_PORT = "8000"
  CORS_ORIGINS = "*"
  NUXT_PUBLIC_API_BASE = "***************"
  PRIVATE_API_URL = "***************"
  PUBLIC_API_URL = "***************"
  PYTHONUNBUFFERED = "1"

[http_service]                                    # 🔺 Configuration for the HTTP service
  internal_port = 8000                           # 🔺 Port the application listens on internally  
  force_https = true                             # 🔺 Forces all traffic to use HTTPS
  auto_stop_machines = true                      # 🔺 Automatically stops idle machines to save resources
  auto_start_machines = true                     # 🔺 Automatically starts machines when traffic arrives
  min_machines_running = 0                       # 🔺 Minimum number of machines to keep running when idle
  max_unavailable = 0                            # 🔺 Maximum number of instances that can be unavailable during updates
  processes = ["app"]                            # 🔺 List of processes to run

  [[http_service.checks]]                        # 🔺 Health check configuration
    interval = "60s"                             # 🔺 How often to run health checks
    timeout = "20s"                              # 🔺 Maximum time to wait for health check response
    grace_period = "300s"                        # 🔺 5min for build - Initial delay before starting health checks
    method = "GET"                               # 🔺 HTTP method to use for health checks
    path = "/api/health"                         # 🔺 Endpoint to check for health status
    protocol = "http"                            # 🔺 Protocol to use for health checks

  [http_service.concurrency]                     # 🔺 Concurrency settings
    type = "connections"                         # 🔺 Type of concurrency limit
    hard_limit = 500                             # 🔺 Maximum number of concurrent connections
    soft_limit = 400                             # 🔺 Soft limit for concurrent connections

[[vm]]                                          # 🔺 Virtual machine configuration
  cpu_kind = "shared"                           # 🔺 Type of CPU allocation
  cpus = 2                                      # 🔺 Number of CPU cores
  memory_mb = 2048                              # 🔺 Amount of memory in MB
  min_machines_running = 1                      # 🔺 Minimum number of VMs to keep running

[deploy]                                        # 🔺 Deployment configuration
  strategy = "rolling"                          # 🔺 Rolling deployment strategy for zero downtime
  release_command = "bun run build"             # 🔺 Command to run during deployment

[mounts]                                        # 🔺 Volume mount configuration
  source = "nuxt_build"                         # 🔺 Name of the volume to mount
  destination = "/frontend/.nuxt"               # 🔺 Where to mount the volume in the container

[[statics]]                                     # 🔺 Static file serving configuration
  guest_path = "/frontend/public"               # 🔺 Path to static files in container
  url_prefix = "/public"                        # 🔺 URL prefix for serving static files

This is my very first time trying to deploy an app. I’ve been asking myself a lot whether I shouldn’t just use a plain VPS, but Fly seems really nice and a good fit for what I’m doing, so I’d rather fix this instead.

Thanks a lot to anyone taking the time to read this.

So, why would I hit the memory limit if everything indicates that I am far below it?

I have tried multiple variants of my fly.toml configuration at this point and did multiple deploys; nothing seems to fix it, so I’m coming here to seek some help.

Your build process is taking all the memory; you need to configure it to limit its build resources. You’ll need to read the docs for your stack for that.
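
For a Nuxt build that ends up running on Node (which the "Killed process ... (node)" line suggests), one common knob is capping the Node heap via NODE_OPTIONS; a sketch, where 1536 is just an example value that leaves headroom under a 2048 MB machine:

# cap the heap of the Node process spawned by the Nuxt build; tune the value to your machine size
NODE_OPTIONS="--max-old-space-size=1536" bun run build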

Alright! For anyone who might find value in the solution:

Actually, I realized I could instead build the app directly on my own machine and deploy its output, and it worked like a charm.

Commands I used for my Nuxt app:

rm -rf .output
NODE_ENV=production bun run build
fly deploy
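
If you want to sanity-check the prebuilt output locally before deploying (assuming the default Nitro node-server preset, whose entry point is .output/server/index.mjs):

# run the prebuilt server locally, the same entry point used in production
node .output/server/index.mjs
# Bun can usually run the same file
bun .output/server/index.mjs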

My fly.toml:

[http_service]                                   # 🔺 Configuration for the HTTP service
  internal_port = 8000                           # 🔺 Port the application listens on internally  
  force_https = true                             # 🔺 Forces all traffic to use HTTPS
  auto_stop_machines = true                      # 🔺 Automatically stops idle machines to save resources
  auto_start_machines = true                     # 🔺 Automatically starts machines when traffic arrives
  min_machines_running = 0                       # 🔺 Minimum number of machines to keep running when idle
  max_unavailable = 0                            # 🔺 Maximum number of instances that can be unavailable during updates
  processes = ["app"]                            # 🔺 List of processes to run

  [[http_service.checks]]                        # 🔺 Health check configuration
    interval = "60s"                             # 🔺 How often to run health checks
    timeout = "20s"                              # 🔺 Maximum time to wait for health check response
    grace_period = "300s"                        # 🔺 5min for build - Initial delay before starting health checks
    method = "GET"                               # 🔺 HTTP method to use for health checks
    path = "/api/health"                         # 🔺 Endpoint to check for health status
    protocol = "http"                            # 🔺 Protocol to use for health checks

  [http_service.concurrency]                     # 🔺 Concurrency settings
    type = "connections"                         # 🔺 Type of concurrency limit
    hard_limit = 500                             # 🔺 Maximum number of concurrent connections
    soft_limit = 400                             # 🔺 Soft limit for concurrent connections

[[vm]]                                          # 🔺 Virtual machine configuration
  cpu_kind = "shared"                           # 🔺 Type of CPU allocation
  cpus = 2                                      # 🔺 Number of CPU cores
  memory_mb = 2048                              # 🔺 Amount of memory in MB
  min_machines_running = 1                      # 🔺 Minimum number of VMs to keep running
  swap_size_mb = 1536                           # 🔺 Add swap space to handle memory spikes

[deploy]                                        # 🔺 Deployment configuration
  strategy = "rolling"                          # 🔺 Rolling deployment strategy for zero downtime

[mounts]                                        # 🔺 Volume mount configuration
  source = "nuxt_build"                         # 🔺 Name of the volume to mount
  destination = "/frontend/.nuxt"               # 🔺 Where to mount the volume in the container

[[statics]]                                     # 🔺 Static file serving configuration
  guest_path = "/frontend/public"               # 🔺 Path to static files in container
  url_prefix = "/public"                        # 🔺 URL prefix for serving static files
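
After fly deploy, a couple of flyctl commands are handy for confirming the machine came up and the server is actually listening:

fly status   # machine state and region
fly logs     # tail the app logs; Nitro normally prints its listen address on startup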