Why Does LiteFS Use an Extra Volume?

I have a SQLite database and wanted to use it with LiteFS and LiteFS Cloud.

The docs suggested that I need a persistent volume for this to work, so I mounted a 100GB volume to the directory that houses all my data, including the SQLite db files.
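For context, the mount in fly.toml looks roughly like this (the volume name "data" here stands in for my actual volume name):

[mounts]
  source = "data"
  destination = "/app/data"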

Now, when I run df -h, I see another, much smaller, volume mounted to my sqlite directory.

Filesystem                Size      Used Available Use% Mounted on
devtmpfs                  1.9G         0      1.9G   0% /dev
/dev/vda                  7.8G    557.4M      6.8G   7% /
shm                       1.9G         0      1.9G   0% /dev/shm
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/vdb                 98.4G      4.8G     89.6G   5% /app/data
litefs                    7.8G    557.4M      6.8G   7% /app/data/sqlite

I was under the impression that this volume is only virtual and that LiteFS uses it to keep track of writes, but somehow my original volume doesn’t get filled up, only the LiteFS one does.

I need 100GB, so what’s the solution here?

It looks like it stems from the LiteFS data directory being mounted on the root file system rather than the volume mount in this post.

LiteFS acts as a passthrough file system, so it reports the underlying file system’s information. Since your LiteFS data directory was on the root file system, it’s reporting the 7.8GB from /dev/vda, which is the ephemeral root file system.
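To keep everything on the volume, the LiteFS data directory needs to live under the volume mount. A minimal litefs.yml sketch, assuming the paths from the df output and log messages in this thread:

fuse:
  # the mount point where the application accesses the SQLite database
  dir: "/app/data/sqlite"

data:
  # LiteFS's internal storage; putting this on the mounted volume (instead of
  # the root file system) is what makes the passthrough report the volume's size
  dir: "/app/data/litefs"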

How large is your SQLite database?

It looks like it stems from the LiteFS data directory being mounted on the root file system rather than the volume mount in this post.

Ah, this makes total sense, thanks! That’s why the litefs and vda numbers are the same. I’ll change the mount point of my volume and check again.

How large is your SQLite database?

Right now, just under 100GB, but growing.

I saw that I could extend volumes, so I thought I’d start with 100GB and extend as I go.
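From the docs, extending later should be a one-liner with flyctl (the volume ID and target size here are placeholders):

$ fly volumes extend vol_xxxxxxxxxxxx -s 150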

LiteFS acts as a passthrough file system, so it stores the underlying 100GB database file, but it also uses additional space for the temporary transaction files it ships between replicas, and replicas need space to hold snapshots when they’re delivered from the primary. I would guess you’d need to add an extra ~20% on top of the original database file size for it to work well.

As for LiteFS Cloud, we didn’t set a maximum database size initially but after running it for a bit we’re probably going to set one. So far all the databases running on there have been 10GB or less so we’ll likely set the cap around there.

That’s not to say we won’t increase the maximum size at some point but right now we want to make sure we minimize disruption to any other tenants on the system.


I would guess you’d need to add an extra ~20% on top of the original database file size for it to work well.

Thanks for the heads up! I’ll probably go for 150GB and see when it blows up. :rofl:

So far all the databases running on there have been 10GB or less so we’ll likely set the cap around there.

Ah, sad to hear that. Whelp, then I have to think of an alternative backup plan :smiley:

If you’re running a static LiteFS primary then Litestream should work as a streaming backup option.
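For a single static primary, a minimal litestream.yml could look roughly like this (the S3 bucket is a placeholder; the database path assumes the FUSE mount shown in the df output above):

dbs:
  - path: /app/data/sqlite/core.db
    replicas:
      - url: s3://my-backup-bucket/core.db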


Thanks, I’ll check it out.

How do I safely remove the LiteFS Cloud cluster from my machines?

If you unset the LITEFS_CLOUD_TOKEN secret and restart your machines, it’ll disconnect from LiteFS Cloud:

$ fly secrets unset LITEFS_CLOUD_TOKEN

Is this different from deleting the secret via the web UI? I ask because I did it that way and my machine got stuck in an endless loop of trying to find its cluster.

It should work the same. What’s the error you’re seeing when it’s trying to find its cluster?


I don’t remember it 100%, because I reset it quickly, but it was something like “cluster not found” or “could not find cluster”, or something along those lines, right after the node became primary again.

I got the error message:

level=INFO msg="backup stream failed, retrying: fetch position map: backup client error (404): cluster not found: ..."
level=INFO msg="begin streaming backup" full-sync-interval=10s
level=INFO msg="exiting streaming backup"

When I add it back I get:

level=INFO msg="backup stream failed, retrying: backup stream error (\"core.db\"): open ltx file: open /app/data/litefs/dbs/core.db/ltx/0000000000004db5-0000000000004db5.ltx: too many open files"
level=INFO msg="begin streaming backup" full-sync-interval=10s
level=INFO msg="exiting streaming backup"

Sorry, I double-checked and it looks like the web UI just stages the secret changes. You’ll need to do a fly deploy to remove them from your machines.
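So if the secret was removed in the web UI, applying it from the CLI looks roughly like:

$ fly secrets list   # check which secrets are currently set
$ fly deploy         # roll out the change so the machines restart without LITEFS_CLOUD_TOKEN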


Do I use this together with or instead of LiteFS?

You can use it together with Litestream if you’re just running a single static LiteFS primary. We have plans to add something similar to Litestream directly into LiteFS (#18) in the future.

If you don’t need read replicas then you can just use Litestream on its own.

