Running out of memory and crashing: how can I diagnose where the error is coming from?

First of all, I am new to backend and server work, so I apologize if this question is rather obvious…

I recently started to get emails about out-of-memory kills. The problem is, I am still building my website, so I am probably the only one who visits this deployed site (besides some bots, perhaps?).

Here are the messages I got, one per email:

Out of memory: Killed process 545 (node) total-vm:966264kB, anon-rss:159764kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4276kB oom_score_adj:0

Out of memory: Killed process 545 (node) total-vm:965316kB, anon-rss:159828kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4268kB oom_score_adj:0

Out of memory: Killed process 544 (node) total-vm:962488kB, anon-rss:158044kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4060kB oom_score_adj:0

Out of memory: Killed process 544 (node) total-vm:966048kB, anon-rss:159456kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4236kB oom_score_adj:0

Out of memory: Killed process 544 (node) total-vm:963616kB, anon-rss:160320kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4148kB oom_score_adj:0

I am on the free tier and had this Remix-based website up and running for a while, just for deployment testing purposes, but I only started receiving these emails after Mar 22, when I began developing more seriously. So I looked at my commits around that time and found four things that might have caused the problem:

  1. Server-sent events: I added some code to receive events from the server (see Using server-sent events - Web APIs | MDN). As far as I know, SSE is pretty lightweight, so I am skeptical (a sketch of what I mean is below this list).

  2. Supabase Realtime: I added some code to listen to their websocket (see Presence | Supabase Docs). Could this cause a memory issue? (Sketch below.)

  3. fast-geoip: I added some code to look up geo information for an IP address using fast-geoip (see GitHub - onramper/fast-geoip: A faster & low-memory replacement for geoip-lite, a node library that maps IPs to geographical information), whose README states: “This library tries to provide a solution for these use-cases by separating the database into chunks and building an indexing tree around them, so that IP lookups only have to read the parts of the database that are needed for the query at hand. This results in the first query taking around 9ms and subsequent ones that hit the disk cache taking 0.7 ms, while memory consumption is kept at around 0.7MB.” (Sketch below.)

  4. Gitpod: my repo is based on the Remix Blues Stack (see Configure Gitpod for one-click quickstart by jacobparis · Pull Request #58 · remix-run/blues-stack · GitHub), which installs the Fly CLI. I am not sure how this could cause a problem, but I am listing anything even slightly related to Fly or server usage…
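
For (1), the code is shaped roughly like the following. This is a minimal sketch, not my exact code (the route, payload, and timing are made up). From what I have read, the usual way SSE eats memory is a per-connection timer that is never cleared after the client disconnects:

```ts
// Hypothetical SSE endpoint (e.g. app/routes/events.ts) — a sketch of the pattern.
// If the interval is not cleared when the client disconnects, every visit
// (bots included) leaves a live timer and stream behind.
export async function loader({ request }: { request: Request }) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      const timer = setInterval(() => {
        controller.enqueue(encoder.encode("data: ping\n\n"));
      }, 1000);
      request.signal.addEventListener("abort", () => {
        clearInterval(timer); // without this, the closure is never freed
        controller.close();
      });
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```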
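
For (2), I followed the documented Presence pattern, roughly like this sketch (the channel name and env var names are placeholders). One thing I plan to check is whether a channel ever gets created more than once without a matching removeChannel:

```ts
// Hypothetical presence subscription — a sketch of the documented pattern.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,      // placeholder env var names
  process.env.SUPABASE_ANON_KEY!
);

const channel = supabase.channel("my-room"); // placeholder channel name

channel
  .on("presence", { event: "sync" }, () => {
    // presenceState() returns everyone currently tracked on the channel
    console.log("online users:", channel.presenceState());
  })
  .subscribe();

// If code like this runs per request (or per render), each call leaves a
// live websocket subscription behind unless it is paired with this cleanup:
export async function cleanup() {
  await supabase.removeChannel(channel);
}
```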
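
And (3) is just the lookup call from the fast-geoip README; a sketch with an example IP, which should match the ~0.7MB figure quoted above:

```ts
// fast-geoip lookup as documented in its README.
import geoip from "fast-geoip";

export async function geoFor(ip: string) {
  const info = await geoip.lookup(ip); // resolves to null for unknown IPs
  return info; // { country, region, city, ... } when found
}

// e.g. geoFor("8.8.8.8").then(console.log);
```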

Could anyone kindly point me in the right direction for diagnosing the problem, or offer any guidance on reading the log messages? I could remove these one at a time and test each case, but I would love to learn how to look at the logs, understand them, and fix the problem.

Thanks!

Hey! Memory issues can be kind of tough to pin down and depend on a lot of things.

Node in particular is pretty memory hungry, I believe. Memory could be inflating because of one of the things you bring up, or it could be that a code path you already had grows memory and is slow to free it (or never frees it). As for reading those log lines: anon-rss is the resident memory (actually in RAM) that the process held at the moment the kernel killed it, and it is around 160MB in every one of your messages, which is getting close to the ceiling once the rest of the system is counted against a free-tier VM's 256MB of RAM (if I remember the free-tier size right). total-vm is virtual address space and usually matters less.

A couple of things you could try, in no particular order, to narrow down or resolve your issue:

You could profile your app to get a feel for what’s causing the memory to spike. This has some good ideas: javascript - Memory leaks in Node.js - How to analyze allocation tree/roots? - Stack Overflow. Profiling can tell you a lot about what’s going on, but it might also wind up telling you that you just need more memory. If I were dealing with this, I’d do most of the profiling locally, assuming you develop locally at all.
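
If it helps, here’s a minimal, dependency-free first pass you could try before full heap profiling (the 30-second interval and the SIGUSR2 trigger are arbitrary choices of mine): log process.memoryUsage() to see when memory climbs, and dump heap snapshots to inspect in Chrome DevTools’ Memory tab.

```ts
// Minimal memory diagnostics using only Node built-ins.
import v8 from "node:v8";

const mb = (n: number) => `${(n / 1024 / 1024).toFixed(1)}MB`;

// Log memory every 30 seconds; correlate spikes with the requests in your logs.
setInterval(() => {
  const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
  console.log(
    `mem rss=${mb(rss)} heapUsed=${mb(heapUsed)} heapTotal=${mb(heapTotal)} external=${mb(external)}`
  );
}, 30_000);

// On SIGUSR2 (`kill -USR2 <pid>`), write a heap snapshot to the working
// directory; open the resulting .heapsnapshot file in DevTools > Memory.
process.on("SIGUSR2", () => {
  console.log("heap snapshot written:", v8.writeHeapSnapshot());
});
```

Taking two snapshots a few minutes apart and diffing them in DevTools usually points at whatever is accumulating.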

You could use something called “swap”, which lets your app’s VM use disk to get by on less RAM. This community post describes it pretty well: Swap memory - #3 by scottohara. We now use swap for some of our default Rails templates, as described in the docs here: Optimizing your deployment · Fly Docs. This isn’t a stellar option, but it might unblock you if you need a workaround now.

You could increase your VM size. Here are our docs on how to adjust memory: Scale V1 (Nomad) Apps · Fly Docs (e.g., `fly scale memory 512`). It’s worth noting the costs, as well as that each CPU size has a memory limit: Fly App Pricing · Fly Docs. This is probably your best solution; there’s a chance there are no “memory issues” at all and the RAM is simply too small.


I was rereading your notes: are you running Gitpod on the same app where you have the Blues Stack deployed? I would guess that’s the biggest memory hog; I recall that running Gitpod requires a fair bit of memory. I’m not sure the thing linked below is exactly what you’re running, but this doc says 8GB, or 16GB for a better experience: Installation requirements for Gitpod Self-Hosted


Thanks a lot! I appreciate your thoughts. :smile: I haven’t gotten around to the issue yet, as I have other tasks at hand, but you’ve provided a great starting point for debugging. I’ll definitely follow your suggestions.

