I run a single app on the “Legacy Hobby Plan” in FRA. I occasionally get Fly.io emails saying the app “ran out of memory and crashed”. The most recent one arrived about 6 hours ago. Yet when I check the Grafana dashboard, the app consumes less than 200MB of RAM (AFAIK 256MB is the max for free instances).
Below is the 24h memory consumption. Is it odd, or just a coincidence, that around the time I got that email, memory dropped to around 150MB?
I wonder if the killing of this process caused the VM to exit, so it couldn’t send its latest memory consumption figures. Remember that the kernel needs some RAM too; there isn’t actually 256MB available for the Java process. (I would have thought this graph would report total usage, though, so I agree some information is missing here.)
Have you changed your Java max memory settings? For this VM size, I’d consider values on the order of -Xmx128M to -Xmx180M.
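In case it helps, here’s a minimal launch-command sketch along those lines. The flag values and the `app.jar` name are illustrative assumptions, not a Fly recommendation; tune them against your own measurements:

```shell
# Illustrative launch fragment for a 256MB Machine (values are guesses, not gospel):
#   -Xmx160m                  caps the Java heap well below the ~210MB actually usable
#   -Xss512k                  shrinks per-thread stacks (off-heap, but still counted)
#   -XX:MaxMetaspaceSize=64m  bounds class metadata, also off-heap
exec java -Xmx160m -Xss512k -XX:MaxMetaspaceSize=64m -jar app.jar
```

Keep in mind the heap cap alone isn’t the whole story: stacks, metaspace, and native buffers all sit outside -Xmx, which is why the heap has to be set well under the Machine size.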
Yeah, the memory metrics aren’t super-easy to interpret, in general. You can see a better breakdown by looking at Memory - Detailed in the per-instance panels, and /proc/meminfo within the Machine itself is the gold standard.
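To see what the kernel itself reports, something like this from inside the Machine (e.g. via `fly ssh console`) is a quick sanity check:

```shell
# Show the headline fields from /proc/meminfo -- MemTotal will already be
# noticeably less than the nominal 256MB once the kernel has reserved its
# own structures, and MemAvailable is the realistic headroom for the app.
grep -E '^(MemTotal|MemFree|MemAvailable|SwapTotal)' /proc/meminfo
```

Comparing MemAvailable there against what Grafana shows is usually enough to explain the gap.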
A 256MB Machine really has only ~210MB to work with, and as you said, that 210MB still has to accommodate the kernel’s data structures, PID 1, hallpass, etc.
It’s actually saying 2,263,508 KB, which is >2GB. Misreading that line is a very common error, as you’ll see if you look through the forum archives.
Also, just for future reference, the anon-rss field is the most useful one in the OOM reports.
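To make the distinction concrete, here’s how the two fields look in a kernel OOM line. The total-vm figure is the one from this thread; the anon-rss value and PID are made up for illustration:

```shell
# Hypothetical OOM-killer log line (total-vm taken from this thread,
# anon-rss/PID invented for the example):
line='Out of memory: Killed process 532 (java) total-vm:2263508kB, anon-rss:198052kB, file-rss:0kB, shmem-rss:0kB'

# total-vm is reserved virtual address space (huge for the JVM, mostly unmapped);
# anon-rss is the resident anonymous memory that actually filled the Machine.
echo "$line" | grep -oE '(total-vm|anon-rss):[0-9]+kB'
# -> total-vm:2263508kB
#    anon-rss:198052kB
```

So the ~2GB total-vm is harmless on its own; it’s the anon-rss figure approaching the Machine’s real budget that gets the process killed.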
This is typical behavior: 256 MB total, ~200 MB realistically available for apps. Anything higher than that will trigger the OOM killer. I presume the remaining ~50 MB goes to OS-internal structures and Fly’s control daemon.
To handle peak loads, enabling swap is generally a good idea. A traditional rule of thumb is swap = 2-4x RAM size.
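If I remember right, newer flyctl versions let you do this declaratively with a `swap_size_mb` entry in fly.toml (worth double-checking the current docs). Failing that, a boot-time sketch from the Machine’s entrypoint, assuming it runs as root:

```shell
# Boot-script fragment: create and enable a swap file at startup (needs root).
# 512M follows the 2x-RAM rule of thumb for a 256MB Machine.
fallocate -l 512M /swapfile
chmod 600 /swapfile   # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
```

Note that swap on a small VM only buys headroom for spikes; if the app’s steady-state working set exceeds RAM, it will thrash rather than crash.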