Feature Request: RAM Metrics

I’m considering moving some services to Fly that have a tendency to be RAM-constrained. I’d love it if you guys could add some sort of memory graph or stats or something to the metrics page. Thanks!


These metrics are available, but we’ve been torn on how to present them. Would you want per VM metrics, or some kind of aggregate for a whole app, or something else do you think?

I would take a look at how Heroku does it: when you scale multiple dynos on the same app, they don't add any more metrics or lines to the existing graphs.

Metrics gathered for all dynos

The following metrics are gathered for all process types, and are averages of the metrics of the dynos of that process type for a given application:


I’d probably want to look at them per instance, though an aggregate graph could be useful. Maybe default to aggregate and later build a dropdown that allows me to poke around and make sure there isn’t something weird causing memory to spike on some subset of my instances?

When I started to scale dynos on Heroku, my instinct was to look at the usage per dyno, but after I thought about it more, it would be odd for one dyno to be leaking more memory than the others.

That being said, the more data the better, so a toggle between app-wide average memory usage and a graph with one memory line per VM would be nice as well.
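The two views described above are just different reductions of the same per-VM data. A minimal sketch (the sample numbers and VM IDs are made up for illustration):

```python
# Sketch: the same per-VM memory samples can feed either an app-wide
# average (one line) or a per-VM breakdown (one line per VM).
from statistics import mean

# Per-VM memory usage (MB) at one point in time; values are illustrative.
samples = {
    "vm-1": 58.2,
    "vm-2": 61.7,
    "vm-3": 130.0,  # an outlier the aggregate view would hide
}

# App-wide view: a single number, easy to graph as one line.
app_average = mean(samples.values())

# Per-VM view: flag instances well above the average, e.g. a leaker.
outliers = {vm: mb for vm, mb in samples.items() if mb > 1.5 * app_average}

print(round(app_average, 1))  # 83.3
print(outliers)               # {'vm-3': 130.0}
```

The aggregate smooths over exactly the "something weird on some subset of my instances" case, which is why a toggle between the two is appealing.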


Nice, that’s helpful. We just added a quick aggregate memory graph to the metrics page; it shows average total memory and average available memory.

We’re going to let you break this out into per-VM stats at some point, but I think this is pretty useful.


Great!! Thanks for the quick response on this here.

Now we’ve revealed a new problem… what should I make of the fact that my micro-1x instances, which are doing basically nothing, have only 2MB left!

For some reason this graph seems upside down. I say that because typically a graph like this has an upper limit based on the VM size, to give you an idea of where the ceiling is.

I would just rather view this data the inverse way: how much I am using, with the upper limit for my VM size drawn in.

Heroku below:


@dan.wetherald we just pushed a change to show “Used” instead of “Free” memory, it looks like this now (note that “mem_total” is the total available, like the dashed Heroku line):
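The flipped graph is just the complement of the old one. A one-line sketch of the arithmetic, using the 128MB / 2MB figures from this thread (only "mem_total" is a name actually mentioned above; "mem_available" is an assumed counterpart):

```python
# used = total - available: the new "Used" line is the old "Free" line
# subtracted from the VM's full allocation (the dashed line).
mem_total = 128.0     # MB, the micro-1x allocation
mem_available = 2.0   # MB, what the earlier "Free" graph plotted

mem_used = mem_total - mem_available
print(mem_used)  # 126.0
```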


Well those only have 128MB of RAM! What are you running in them? It wouldn’t surprise me if, like, Node fills that up doing the most basic of things.

I’m just running NGINX to proxy requests. My only concern here is that they freeze up somehow. If I could have some assurances that stuck instances get rebooted, then I think we’re good.

If you feel like sending a URL we can set up a health check on, we can monitor one of your apps and diagnose whether it gets into a state it can’t recover from.
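For reference, an HTTP health check of this kind can also be declared in the app’s fly.toml. This is only a sketch: the port, path, and timings below are placeholders, not values from this thread, and you should check the current fly.toml reference for the exact schema:

```toml
# Sketch of an HTTP health check in fly.toml (values are placeholders).
[[services]]
  internal_port = 8080
  protocol = "tcp"

  [[services.http_checks]]
    interval = 10000   # ms between checks
    timeout  = 2000    # ms before a check counts as failed
    method   = "get"
    path     = "/healthz"
```

A check like this gives the platform a signal it can act on when an instance stops responding, which addresses the "assurances that stuck instances get rebooted" concern.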

So it turns out that my NGINX servers have room to spare, but some of my other ones do not. https://nikola-sharder.fly.dev is one that’s a little close to the line.