Hi! Thanks for the response. Looking through our logs a bit more… could there be two things happening here?
Some of our apps legitimately use too much memory and get killed by the OOM killer. Understandable.
Other apps do not (as far as we know!). These small apps appear to shut down gracefully (as part of auto-stop initiated by Fly), AND THEN the OOM killer log line shows up (if we trust the log line ordering).
Is it likely that this is the sequence of events?
The small app runs fine, well under its memory limit.
Auto-stop is triggered (SIGINT, etc.).
The app spikes memory usage ON THE WAY DOWN (see the sketch below).
The OOM killer joins the party… and kills the app (which was already shutting down anyway).
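To make the hypothesis concrete, here's a purely hypothetical Python sketch (not our actual app): a shutdown handler that serializes in-memory state before exiting briefly holds a second copy of that state, which could push the process over its limit right as it's going down.

```python
import json
import signal
import sys

# Hypothetical in-memory state accumulated while the app was running.
cache = {f"key-{i}": "x" * 1024 for i in range(10_000)}

def handle_shutdown(signum, frame):
    # Serializing everything for a final flush creates a second,
    # string-sized copy of the data in memory -- a brief spike right
    # as the app is "on the way down".
    payload = json.dumps(cache)
    print(f"flushing {len(payload)} bytes of state before exit", file=sys.stderr)
    sys.exit(0)

signal.signal(signal.SIGINT, handle_shutdown)
signal.signal(signal.SIGTERM, handle_shutdown)

if __name__ == "__main__":
    signal.pause()  # wait for auto-stop to deliver SIGINT/SIGTERM
```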
I have seen multiple apps exhibit the behavior where the OOM email shows up exactly when auto-stop happens…
I’m wondering if there are any other possibilities… like “auto-stop” causing a downsizing of the VM memory on the fly… OOM then kicks in not because the app spikes memory, but because the VM’s memory capacity spikes DOWN. Seems far-fetched, but we’d like to rule it out.
In the meantime, we just gave all our apps more memory. This is more for understanding… any tidbit of info you can offer would be really helpful.
Auto-stop (or shutting down VMs) wouldn’t downsize the memory of the VM itself. However, apps may use more memory during shutdown. It really depends on how the app works, though.
I’d take a memory profile first to understand the apps’ memory usage.
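If it helps, here's a minimal sketch of what such a profile could look like, assuming a Python app (the same idea applies in any language): record peak resident memory and the top allocation sites around the shutdown path.

```python
import resource
import tracemalloc

tracemalloc.start()

# ... run the app's normal work and its shutdown path here ...

# Peak resident set size so far (ru_maxrss is reported in KiB on Linux).
peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kib} KiB")

# Top allocation sites tracked by tracemalloc since start().
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)
```

Comparing that output during normal operation versus during shutdown should show whether the app really does spike on the way down.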