I have a SQLite database and wanted to use it with LiteFS and LiteFS Cloud.
The docs suggested that I need a persistent volume for this to work, so I mounted a 100GB volume to the directory that houses all my data (including the SQLite db files).
Now, when I run df -h, I see another, much smaller, volume mounted to my sqlite directory.
I had the impression this volume is only virtual and that LiteFS uses it to keep track of writes, but somehow my original volume doesn’t fill up, only the LiteFS one.
It looks like it stems from the LiteFS data directory being mounted on the root file system rather than the volume mount in this post.
LiteFS acts as a passthrough file system so it reports the underlying file system’s information. Since you were mounted on the root file system, it’s reporting the 7.8GB from /dev/vda which is the ephemeral root file system.
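The fix is to point LiteFS’s data directory at the persistent volume instead of the root disk. As a sketch in litefs.yml (the paths here are assumptions; adjust them to your app):

```yaml
# litefs.yml (sketch; paths are assumptions)
fuse:
  # Where the application opens the database (the FUSE passthrough mount)
  dir: "/litefs"
data:
  # Must live on the persistent volume, not the ephemeral root disk
  dir: "/var/lib/litefs"
```

With `data.dir` on the volume, `df -h` on the FUSE mount will report the volume’s size rather than the root file system’s.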
Ah, this makes total sense, thanks! That’s why the litefs and vda numbers are the same. I’ll change the mounting point of my volume and check again.
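For reference, a minimal sketch of the corresponding volume mount in fly.toml (the volume name and destination path are assumptions):

```toml
# fly.toml (sketch; volume name and path are assumptions)
[mounts]
  source = "litefs_data"
  destination = "/var/lib/litefs"
```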
How large is your SQLite database?
Right now, just under 100GB, but growing.
I saw I could extend volumes, so I thought I would start with 100GB and extend as I go.
LiteFS acts as a passthrough file system, so it holds the underlying 100GB database file, but it also uses additional space for the temporary transaction files it ships between replicas, and replicas need space to hold snapshots when they’re delivered from the primary. I would guess you’d need an extra ~20% on top of the original database file size for it to work well.
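As a rough sizing sketch (the ~20% headroom figure is the guess from above, not a hard rule):

```shell
# Volume sizing sketch: database size plus ~20% headroom for
# transaction files and replica snapshots.
db_gb=100
headroom_gb=$(( db_gb * 20 / 100 ))
vol_gb=$(( db_gb + headroom_gb ))
echo "${vol_gb}GB"   # prints 120GB
```

So a 100GB database would suggest starting around a 120GB volume and extending as the database grows.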
As for LiteFS Cloud, we didn’t set a maximum database size initially but after running it for a bit we’re probably going to set one. So far all the databases running on there have been 10GB or less so we’ll likely set the cap around there.
That’s not to say we won’t increase the maximum size at some point but right now we want to make sure we minimize disruption to any other tenants on the system.
Is this different from deleting the secret via the Web UI? Because I did it that way and my machine got stuck in an endless loop trying to find its cluster.
I don’t remember it 100%, because I reset it quickly, but it was something like “cluster not found” or “could not find cluster”, or something along those lines. It appeared right after the node became primary again.
Sorry, I double checked and it looks like the web UI just stages the secret changes. You’ll need to do a fly deploy to remove them from your machines.
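In other words, after deleting the secret in the web UI, the change only reaches the machines once you redeploy:

```shell
# After staging the secret removal in the web UI,
# run a deploy so the machines actually pick up the change.
fly deploy
```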
You can use it together with Litestream if you’re just running a single static LiteFS primary. We have plans to add something similar to Litestream directly into LiteFS (#18) in the future.
If you don’t need read replicas then you can just use Litestream on its own.
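A minimal Litestream config for that single-database case might look like this sketch (the database path and bucket URL are assumptions; substitute your own):

```yaml
# litestream.yml (sketch; path and bucket are assumptions)
dbs:
  - path: /litefs/db
    replicas:
      - url: s3://my-backup-bucket/db
```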