We’ve just released a new version of LiteFS! If you haven’t seen LiteFS before, it’s a distributed file system for automatically replicating SQLite databases around the world. It’s great for read-heavy applications that need low latency.
This release focused on making LiteFS-based applications easier to run by making LiteFS more transparent to the application.
Improving request redirection
Previously, your application needed to handle write request redirection and track consistency between requests so users could read their writes. That was a pain.
LiteFS now ships with a thin, built-in proxy layer that automatically reroutes write requests to the primary node and checks the replication position on replica nodes before serving reads. The proxy uses the request’s HTTP method and a client cookie to route correctly. You can find more details in the LiteFS proxy guide in the documentation.
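For reference, a minimal proxy section in litefs.yml looks roughly like this (the addresses and database name below are placeholders, not defaults; see the proxy guide for the full option list):

```yml
proxy:
  # Public address the proxy listens on; clients connect here.
  addr: ":8080"
  # Local address your application listens on; the proxy forwards to it.
  target: "localhost:8081"
  # Database whose replication position is tracked for read-your-writes.
  db: "db"
```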
Improving database migrations
Another frustration we heard was that it was tough to run migration scripts on deployment. The LiteFS primary node can change dynamically, and only the primary node can execute write transactions, so it was tricky to find the right node to run migrations on.
We’ve introduced two features to help with this. First, you can now have nodes automatically promote themselves to become the primary when they start up. This makes it easy to run your migrations on the current node since you’ll know it’s the primary.
The other option is called write forwarding. This allows a replica node to temporarily borrow the write lock from the primary node, issue writes locally, and then forward those writes back to the primary.
You can find more details on our LiteFS database migration guide.
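A common pattern in deploy scripts is to detect the primary via the .primary file that LiteFS exposes in its mount directory on replica nodes. A rough sketch (the mount path and migration command are hypothetical):

```shell
# is_primary DIR: LiteFS writes a ".primary" file (naming the current primary)
# into the FUSE mount directory on replica nodes only, so its absence means
# this node currently holds the primary lease.
is_primary() {
  [ ! -f "$1/.primary" ]
}

# Hypothetical usage in a deploy script:
if is_primary /litefs; then
  echo "running migrations"   # e.g. exec your migration command here
fi
```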
We’ve also added a few additional improvements:
- Transaction files are now compressed with LZ4.
- FUSE mounts can now be mounted by non-root users.
- Our FUSE library has been upgraded. Huge shoutout to Tv for his work on this.
- Added armv6 & armv7 release builds.
You can find a full list of features & bug fixes on the LiteFS v0.4.0 release page.
Hmmm…I tried this, but ran into an error while trying to have USER nobody in the Dockerfile. Normally, I would say that this is missing some sys capabilities, but this is Firecracker, not Docker. Or that it needs some udev rules, but there is no udev in the container. Does litefs still need to run as root, with the child process then changing users?
2023-04-16T23:06:58Z app[e2865551b73148] ewr [info]wal-sync: short wal file exists on "main.sqlite", skipping sync with ltx
2023-04-16T23:06:59Z app[e2865551b73148] ewr [info]mount helper error: fusermount3: failed to open /dev/fuse: Permission denied
CMD ["/app/bin/run"] # sets up litefs env and runs mount
sh-5.1# cat /etc/fuse.conf |head -n10
# The file /etc/fuse.conf allows for the following parameters:
# user_allow_other - Using the allow_other mount option works fine as root, in
# order to have it work as user you need user_allow_other in /etc/fuse.conf as
# well. (This option allows users to use the allow_other option.) You need
# allow_other if you want users other than the owner to access a mounted fuse.
# This option must appear on a line by itself. There is no value, just the
# presence of the option.
sh-5.1# cat /etc/litefs.yml |grep fuse -A5
# Required. This is the mount directory that applications will
# use to access their SQLite databases.
# Enable mounting of the file system by non-root users.
# You must enable the 'user_allow_other' option in /etc/fuse.conf as well.
# The debug flag enables debug logging of all FUSE API calls.
# This will produce a lot of logging. Not for general use.
@tj1 The issue originally came up about running as a different user in Docker, and that required the --privileged flag. Inside Firecracker, it’s a little more complicated.
The "failed to open /dev/fuse: Permission denied" error occurs because the user you’re running as doesn’t have permission to open /dev/fuse. If you’re on an OS where /dev/fuse is owned by a fuse group, then you’ll just need to add your user to that group.
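A quick way to check membership before reaching for usermod (the app user name here is hypothetical, and usermod needs root plus a fresh login to take effect):

```shell
# in_group USER GROUP: succeed if USER is already a member of GROUP.
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# Hypothetical usage: grant the app user access to /dev/fuse via the group.
if ! in_group app fuse; then
  echo "need: usermod -aG fuse app"   # run as root, then log in again
fi
```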
I got it running on Alpine, but Alpine doesn’t have a fuse group and the /dev/fuse device wasn’t available until the container started up. So I had to drop the USER command from the Dockerfile, and in the script called by CMD I changed the ownership:
chmod g+rw /dev/fuse
chgrp fuse /dev/fuse
Also, the volume mount ends up being owned by root, so I had to recursively change ownership, which isn’t ideal:
chown -R litefs /var/lib/litefs
And finally, I ran litefs mount as my litefs user:
exec su - litefs -s /bin/bash -c "/usr/local/bin/litefs mount"