It looks like it stems from the LiteFS data directory being mounted on the root file system rather than on the volume mount described in this post.
LiteFS acts as a passthrough file system, so it reports the underlying file system’s information. Since your data directory was on the root file system, it’s reporting the 7.8GB from /dev/vda, which is the ephemeral root file system.
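For reference, here’s a minimal sketch of the relevant litefs.yml settings, assuming your Fly volume is mounted at /data (the paths here are placeholders, not your actual config):

```yaml
# litefs.yml (sketch): keep LiteFS's internal data directory on the volume,
# not on a path under the ephemeral root file system.
fuse:
  # Where your application opens the database (the FUSE mount).
  dir: "/litefs"
data:
  # Where LiteFS stores its underlying data files; this should point at the
  # volume mount (e.g. the volume mounted at /data in fly.toml).
  dir: "/data/litefs"
```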
Because it’s a passthrough file system, LiteFS needs room for the underlying 100GB database file, plus additional space for the temporary transaction files it ships between replicas, and replicas need space to hold snapshots when they’re delivered from the primary. I’d guess you’ll need roughly an extra ~20% on top of the original database file size for it to work well.
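As a rough worked example of that guideline (the volume name and region below are just illustrative), a 100GB database would call for a volume on the order of 120GB:

```bash
# Illustrative only: 100GB database + ~20% headroom ≈ 120GB volume.
fly volumes create litefs --region ord --size 120
```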
As for LiteFS Cloud, we didn’t set a maximum database size initially, but after running it for a while we’re probably going to set one. So far all the databases running there have been 10GB or less, so we’ll likely set the cap around that size.
That’s not to say we won’t increase the maximum at some point, but right now we want to make sure we minimize disruption to other tenants on the system.
I don’t remember it exactly because I reset it quickly, but it was something like “cluster not found” or “could not find cluster”, something along those lines. It appeared right after the node became primary again.