Hello! I’m not sure if I’m misunderstanding something, or if the docs are only half migrated to V2.
The storage page says that a volume is a slice of disk on a physical host. That implies a given volume lives on exactly one host, and any machine must boot on that host to use it. But the section on scaling an app with volumes talks about using “unattached volumes” “in the machine’s region” first before creating new ones. That phrasing sounds like some kind of network-attached storage, yet the storage page is very clear that volumes are not network-attached.
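For context, here’s roughly how I’ve been poking at this (a sketch, assuming I’m reading the flyctl docs right; `my-app` and the volume ID are placeholders, not real names):

```shell
# Assumption: "fly volumes list" shows each volume's region and which
# machine, if any, it is attached to -- an unattached volume would show
# no attached machine.
fly volumes list -a my-app

# Assumption: "fly volumes show" gives details for a single volume,
# including where it is placed.
fly volumes show vol_xxxxxxxx -a my-app
```

If the list shows a volume with no attached machine, that’s what I’m calling an “unattached volume” below.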
Does an “unattached volume” mean there’s a host out there, in that region, with my volume on its disk, but none of my machines currently running on it? And if I scale up in the future, the platform would place the new machine on that host so it can use the volume? Does that then lead to issues like this one, where a volume exists on a host that has no room to run any new machines, so when it goes to use that volume the machine can’t boot, but the platform won’t create a new volume on another host, and the whole operation fails?
If my guess is right, how does this work during failures or host shutdowns? If a host holding one of my volumes goes down, I assume that volume is considered gone. With my desired scale set to 1 or higher, I assume something would notice I have fewer machines than desired, provision a new machine with a fresh (empty) volume on another host, and boot it there, all automatically without me doing anything. Or is it generally on me to notice that a host has disappeared (and a volume with it), and to do something manual to bring the app back up on a new host?