First of all, I am new to backend development and server administration, so I apologize if this question is rather obvious…
I recently started getting out-of-memory emails. The strange thing is that I am still building my website, so I am probably the only one who visits the deployed site (besides, perhaps, some bots).
Here are the messages I got, one per email:
Out of memory: Killed process 545 (node) total-vm:966264kB, anon-rss:159764kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4276kB oom_score_adj:0
Out of memory: Killed process 545 (node) total-vm:965316kB, anon-rss:159828kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4268kB oom_score_adj:0
Out of memory: Killed process 544 (node) total-vm:962488kB, anon-rss:158044kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4060kB oom_score_adj:0
Out of memory: Killed process 544 (node) total-vm:966048kB, anon-rss:159456kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4236kB oom_score_adj:0
Out of memory: Killed process 544 (node) total-vm:963616kB, anon-rss:160320kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4148kB oom_score_adj:0
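I tried to decode one of those lines myself. If I'm reading it right, anon-rss is the resident memory (actually in RAM) at the moment the kernel killed the process — please correct me if that's the wrong field to look at:

```javascript
// One of the kernel messages above, copied verbatim:
const line =
  "Out of memory: Killed process 545 (node) total-vm:966264kB, anon-rss:159764kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:4276kB oom_score_adj:0";

// Pull out anon-rss, the resident (in-RAM) memory at the time of the kill.
const rssKb = Number(line.match(/anon-rss:(\d+)kB/)[1]);
console.log(`node held ~${Math.round(rssKb / 1024)} MB of RAM when killed`);
// → node held ~156 MB of RAM when killed
```

So it looks like the node process was using roughly 156 MB when it was killed, which I assume is close to whatever the free-tier memory limit is.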
I am on the free tier and had this Remix-based website up and running for a while, just for deployment testing. I only started receiving these emails after Mar 22, when I began developing more seriously, so I looked at my commits around that time and found four things that might have caused the problem:
server-sent events: I added some code to receive events from the server (see Using server-sent events - Web APIs | MDN). As far as I know, SSE is pretty lightweight, so I am skeptical that this is the cause.
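In case it matters, the SSE endpoint is shaped roughly like this (a simplified sketch, not my exact code — the interval and event payload are made up). One thing I now wonder is whether connections that are never cleaned up could pile up:

```javascript
// Simplified sketch of an SSE endpoint, using the Response/ReadableStream
// globals available in Node 18+. Names here are illustrative, not my real code.
function sseHandler(request) {
  const stream = new ReadableStream({
    start(controller) {
      const timer = setInterval(() => {
        controller.enqueue("data: ping\n\n");
      }, 1000);
      // If this abort cleanup were missing, every dropped connection would
      // leak a running timer — the kind of slow leak that could explain
      // memory creeping up over days.
      request.signal.addEventListener("abort", () => {
        clearInterval(timer);
        controller.close();
      });
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```

My real code does (I think) clean up on abort like this, but I'd appreciate a sanity check that this is the right pattern.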
supabase realtime: I added some code to listen to their websocket (see Presence | Supabase Docs). Could a long-lived Realtime/Presence subscription cause a memory issue?
fast-geoip: I added some code to look up geographical information for an IP address using fast-geoip (see GitHub - onramper/fast-geoip: A faster & low-memory replacement for geoip-lite, a node library that maps IPs to geographical information), which states, “This library tries to provide a solution for these use-cases by separating the database into chunks and building an indexing tree around them, so that IP lookups only have to read the parts of the database that are needed for the query at hand. This results in the first query taking around 9ms and subsequent ones that hit the disk cache taking 0.7 ms, while memory consumption is kept at around 0.7MB.”
gitpod: my repo is based on the Remix blues stack (see Configure Gitpod for one-click quickstart by jacobparis · Pull Request #58 · remix-run/blues-stack · GitHub), which installs the fly CLI. I am not sure how this could cause a problem; I am just listing anything even slightly related to fly or server usage.
Could anyone kindly point me in the right direction to diagnose the problem, or offer any guidance on reading these log messages? I could remove the suspects one at a time and test each case, but I would love to learn how to read the logs, understand what they say, and fix the problem properly.
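In the meantime, I was thinking of watching the process from the inside with something like this (process.memoryUsage() is built into Node, so no extra dependency) — would that be a reasonable way to tell a slow leak apart from a sudden spike?

```javascript
// Sketch: periodically log the server's own memory, to see whether rss
// climbs steadily (suggesting a leak) or jumps under specific requests.
const toMb = (bytes) => Math.round(bytes / 1024 / 1024);

function logMemory() {
  const { rss, heapUsed, external } = process.memoryUsage();
  console.log(
    `rss=${toMb(rss)}MB heapUsed=${toMb(heapUsed)}MB external=${toMb(external)}MB`
  );
}

// Every 60s; unref() so this timer alone doesn't keep the process alive.
setInterval(logMemory, 60_000).unref();
```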