Scaling Ephemeral Disk

One of my 256MB shared-cpu-1x instances looks like it has 7.8GB of ephemeral space, which is a lot, but I think I may need more. I know I can attach a persistent volume, but then I don't think I can use autoscaling.

Is it possible to specify the ephemeral disk size for instances?


It’s not! We actually don’t intend for the ephemeral disk space to be written to. It’s writable just because so much stuff breaks without it, but we’d make it readonly if we could.

You can use autoscaling with persistent volumes, though, so that’s worth a try. We’ll have a better way to autoscale with disks in a few months.

Right on. I thought autoscaling wouldn’t work since the instances are pinned to volumes. Thanks, Kurt!

Yeah, it's not intuitive. You can think of autoscaling with volumes as just turning machines off and on; they'll come back with the same IPs when they boot. The biggest problem is that you can't really control which instances go away on scale-down, since it's a big Nomad evaluation lottery.

We currently use the ephemeral disk for a couple of nice purposes, mostly caching content that isn't performance-critical enough to need to live in memory. We also use a small swapfile, which lets us make more efficient use of the low-memory instances (and makes a GC'd language like Go far more tenable on the 256MB instances). Just pointing out that there are real, beneficial uses of the disk, not just accidental ones. The current disk size is more than sufficient for our needs, but we wouldn't want to see this capacity disappear.
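For anyone who wants to try the same thing, here's a minimal sketch of that kind of swap setup in Go, run before the app starts. The path, size, and helper name are just placeholders, and it assumes the process runs as root with the usual util-linux tools (fallocate, mkswap, swapon) available in the image:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// setupSwap provisions a small swapfile on the ephemeral disk before the
// app starts. Path and size are placeholders; it needs root and the usual
// util-linux tools (fallocate, mkswap, swapon) in the image.
func setupSwap(path string, sizeMB int) error {
	if _, err := os.Stat(path); err == nil {
		return nil // already set up earlier in this boot
	}
	steps := [][]string{
		{"fallocate", "-l", fmt.Sprintf("%dM", sizeMB), path},
		{"chmod", "600", path},
		{"mkswap", path},
		{"swapon", path},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v (%s)", args, err, out)
		}
	}
	return nil
}

func main() {
	// Treat swap as best-effort: the app still runs without it, just with
	// less memory headroom on a 256MB instance.
	if err := setupSwap("/swapfile", 256); err != nil {
		log.Printf("swap setup skipped: %v", err)
	}
	// ... start the real application here ...
}
```

Since the swapfile lives on the ephemeral rootfs, it disappears on a fresh boot and simply gets recreated by this setup step.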

Oh yeah! We're totally fine with that. The problem is mostly that it's surprising UX for people who don't know how it works: if a VM crashes, it comes back with a fresh rootfs from the image.

Ephemeral disks on AWS retain files between restarts, for example, so that's the behavior people tend to expect.

This is all to say we probably won’t make it so you can run larger rootfs mounts, but you’re totally fine writing to it for scratch space.
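As a purely illustrative example of that kind of scratch-space use, here's a tiny file-backed cache in Go. The directory and function names are made up; the key point is to treat every read as a potential miss, since the disk comes back empty after a crash or redeploy:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"path/filepath"
)

// cacheDir is an illustrative scratch directory; /tmp lives on the
// ephemeral rootfs, so everything here vanishes on a fresh boot.
const cacheDir = "/tmp/scratch-cache"

// cachePath hashes the key so arbitrary strings map to safe filenames.
func cachePath(key string) string {
	sum := sha256.Sum256([]byte(key))
	return filepath.Join(cacheDir, hex.EncodeToString(sum[:]))
}

// get returns cached bytes, or ok=false on a miss. A fresh rootfs after a
// crash or redeploy just means everything misses once.
func get(key string) ([]byte, bool) {
	b, err := os.ReadFile(cachePath(key))
	return b, err == nil
}

// put is best-effort: a failed write only costs a future cache miss.
func put(key string, val []byte) {
	if err := os.MkdirAll(cacheDir, 0o755); err != nil {
		return
	}
	_ = os.WriteFile(cachePath(key), val, 0o644)
}

func main() {
	put("expensive-result", []byte("rendered content"))
	if v, ok := get("expensive-result"); ok {
		fmt.Printf("cache hit: %s\n", v)
	}
}
```

Writes are best-effort on purpose: losing the cache only costs a recomputation, which is exactly the kind of data that belongs on ephemeral disk.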
