I have an app that processes files and uploads them to a bucket. After processing, the files are deleted from the OS temp folder.
Since the app doesn't have a volume attached, I assumed these files would end up in memory. But when I look at the metrics, it just shows the usual Node memory usage; there's no change when files are downloaded to the filesystem.
So I'm guessing these files actually land on the SSD? And if so, how much space can I use?
```
# output from df -H
Filesystem  Size    Used    Available  Use%  Mounted on
devtmpfs    100.6M  0       100.6M     0%    /dev
/dev/vda    7.8G    531.8M  6.9G       7%    /
shm         110.6M  0       110.6M     0%    /dev/shm
tmpfs       110.6M  0       110.6M     0%    /sys/fs/cgroup
```
NVMe, yes (:
/dev/vda (mounted as the rootfs /) is the VM's ephemeral disk (see above), into which the app's Docker image has to be inflated. df -H tells me I've got around 7 GB free out of the total 8 GB.
You are reading this correctly. You are not breaking any rules. The only caveat is that the amount of available space isn’t something that is advertised or guaranteed forever. If you want a guarantee, use a volume, otherwise, enjoy it while it lasts!
Presumably another caveat is that it's shared? i.e. if Pier's 'machine' and mine are allocated to the same …machine, and he decides to use ~8 TB to prove a point, then suddenly I can't use any more than I already have?