No problem!
What seems to have happened here is a WAL overflow. The write-ahead logs only went back about 12 hours, but they were eating up 8GB of disk space, which means the disk was too small for your workload.
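If you want to keep an eye on this yourself, here's a minimal sketch (assuming Python with psycopg2, Postgres 10+, and a role that's allowed to call pg_ls_waldir(); the DSN and the wal_disk_usage name are just placeholders) that reports how much disk the WAL is currently using:

```
import psycopg2

def wal_disk_usage(dsn: str) -> int:
    """Total bytes currently used by files in pg_wal (Postgres 10+)."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # pg_ls_waldir() lists each file in the WAL directory with its size
            cur.execute("SELECT coalesce(sum(size), 0) FROM pg_ls_waldir();")
            return int(cur.fetchone()[0])

if __name__ == "__main__":
    used = wal_disk_usage("postgresql://postgres@localhost/postgres")  # placeholder DSN
    print(f"WAL is using {used / 1024**2:.0f} MiB")
```

Something like that on a cron, alerting when WAL usage gets anywhere near your volume size, would have caught this before the disk filled.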
This is technically something you could have prevented had you known to watch for it, but it's also poor UX on our part. When we ship volumes that auto-expand, that'll fix this. But we're also thinking about other ways to help people prevent this problem in the future.
I think your DB was prone to this because it has a high ratio of writes to total data size. If that ratio sticks, you should probably plan on keeping your disks 10-20x bigger than your DB size (so roughly a 5-10GB volume for a 500MB database).
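If you want to eyeball that ratio, here's a rough sketch under the same assumptions as above (the write_ratio_per_hour helper is made up for illustration): it samples how much WAL you generate in a minute, extrapolates to an hourly rate, and divides by your total database size:

```
import time
import psycopg2

def write_ratio_per_hour(dsn: str, sample_s: int = 60) -> float:
    """Hourly WAL bytes generated, divided by total database size."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Note the current WAL position, wait, then measure how far it moved
        cur.execute("SELECT pg_current_wal_lsn();")
        start_lsn = cur.fetchone()[0]
        time.sleep(sample_s)  # let some normal traffic happen
        cur.execute(
            "SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), %s::pg_lsn),"
            " (SELECT sum(pg_database_size(oid)) FROM pg_database);",
            (start_lsn,),
        )
        wal_bytes, db_bytes = cur.fetchone()
        return float(wal_bytes) * (3600 / sample_s) / float(db_bytes)

if __name__ == "__main__":
    ratio = write_ratio_per_hour("postgresql://postgres@localhost/postgres")
    print(f"You generate roughly {ratio:.1f}x your data size in WAL per hour")
```

A ratio near or above 1 means you're rewriting your whole dataset every hour, which is exactly the write-heavy pattern that blows past a small disk.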