VM Killed Despite 90% Free Memory: Out-of-Memory Error

I am experiencing out-of-memory errors, and my virtual machine is getting killed even though roughly 90% of my memory appears to be available.

I have a dedicated-cpu-1x virtual machine with 2GB of memory. I set my JVM heap size to 1.75GB, but despite having sufficient memory, my virtual machine is getting killed. Here are the logs (refreshed every 5 seconds):

My java command:

ENTRYPOINT exec java -server \
                     -Xms1750m -Xmx1750m \
                     -XX:+UseStringDeduplication \
                     -jar \

According to the OOM killer’s log line, ~1.9GB of RSS was in use. That’s not something we control; it’s the kernel’s built-in OOM killer doing its thing, and it reports the actual resident memory of the killed process. Are you sure your calculations are correct?

I’d try fly ssh console into the instance and look at what’s using memory in the VM.

I don’t know enough about the JVM, but does it pre-allocate all the memory it might use? Is there any overhead?

I’d try fly ssh console into the instance and look at what’s using memory in the VM.

Where/how to check?

does it pre-allocate all the memory it might use?

Yes, basically: it limits the heap and does not allow it to be exceeded. If it is exceeded, a Java OutOfMemoryError is thrown. So I’m pretty sure about the memory calculation, which is based on the methods documented here: Runtime (Java Platform SE 7)

Here is my code (in Clojure, a JVM language):

(let [total-memory (.totalMemory (Runtime/getRuntime))
      free-memory (.freeMemory (Runtime/getRuntime))
      used-memory (- total-memory free-memory)
      memory-ratio (double (/ used-memory total-memory))]
  (format "Used memory: %s - Free memory: %s - Total memory: %s - Memory ratio: %s"
          used-memory free-memory total-memory memory-ratio))

Depends on the tools available in your Docker image, but something like top, htop, or free can tell you about current memory consumption and availability inside your VM.

The OOM killer does not care about the JVM’s internal accounting. It mostly cares about how much resident memory (RSS, Resident Set Size) each process has allocated, and the total across all processes.

On Fly, you’re running a whole virtualized server environment, not just your app’s binary/code. If your machine is capped at 2GB of memory and the JVM uses nearly that much, it doesn’t matter what’s reported from inside the JVM.
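One way to see the gap between the JVM’s own accounting and what the kernel counts is to compare the Runtime heap numbers (what the Clojure snippet above measures) with VmRSS from /proc/self/status. This is a Linux-only sketch; the class and method names here are mine, not from the thread:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RssVsHeap {
    /** Heap usage as the JVM itself sees it (what the Clojure snippet measures). */
    static long heapUsedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    /** Kernel-reported resident set size in kB (Linux only); -1 if unavailable. */
    static long vmRssKb() {
        Path status = Path.of("/proc/self/status");
        try {
            if (!Files.exists(status)) return -1;
            for (String line : Files.readAllLines(status)) {
                // The line looks like "VmRSS:    123456 kB"
                if (line.startsWith("VmRSS:")) {
                    return Long.parseLong(line.replaceAll("\\D+", ""));
                }
            }
        } catch (IOException e) {
            // fall through to the "unavailable" case
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("JVM heap used (bytes): " + heapUsedBytes());
        System.out.println("Kernel VmRSS (kB):     " + vmRssKb());
    }
}
```

On a typical run VmRSS comes out noticeably larger than the heap figure, because it also includes thread stacks, metaspace, JIT code caches, and other native allocations — the kernel’s number is the one the OOM killer acts on.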

Reading a bit about the JVM and the -Xms and -Xmx flags, I’ve come to understand that they only set the heap limits, and Java can use more memory than that. From the diagnostics guide:

Note that the JVM uses more memory than just the heap. For example Java methods, thread stacks and native handles are allocated in memory separate from the heap, as well as JVM internal data structures.

-Xms only sets the “starting” (initial) size of the heap. I don’t know whether it should be set to the same value as your -Xmx setting.
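The distinction can be observed from inside the JVM: Runtime.maxMemory() roughly corresponds to the -Xmx ceiling, while totalMemory() is the currently committed heap, which starts near -Xms and grows on demand. A minimal sketch (class and method names are mine):

```java
public class HeapSizing {
    /** Currently committed heap, in MiB; starts near -Xms and grows on demand. */
    static long committedMiB() {
        return Runtime.getRuntime().totalMemory() >> 20;
    }

    /** Heap ceiling, in MiB; roughly what -Xmx sets. */
    static long maxMiB() {
        return Runtime.getRuntime().maxMemory() >> 20;
    }

    public static void main(String[] args) {
        System.out.printf("committed: %d MiB, max: %d MiB%n",
                committedMiB(), maxMiB());
    }
}
```

Run with -Xms1750m -Xmx1750m both numbers start out roughly equal, i.e. the full heap is committed (and counts toward RSS) from the very start; with a smaller or unset -Xms the committed figure begins lower and grows only as the application actually needs it.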

I’d try setting -Xmx to something lower, like 1500m, and either unsetting -Xms or setting it lower too. I don’t know whether -Xms matters much here.
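Concretely, that suggestion applied to the ENTRYPOINT from the original post could look like the following (the 1500m figure is a starting point to leave headroom for non-heap JVM memory on a 2GB machine, not a rule; the jar name stays elided as in the original):

```dockerfile
ENTRYPOINT exec java -server \
                     -Xmx1500m \
                     -XX:+UseStringDeduplication \
                     -jar \
```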

Thanks for the info! Seems that my current HTTP server version has some bugs that lead to memory leaks…