Understanding LiteFS for a "rarely up" architecture

I want to make sure I understand how LiteFS would handle the following architecture. Our use case is supporting low-usage BaaS APIs. Ideally, each client would have five VMs: two in the primary region and three in secondary regions. A Node.js app would run on each, along with a NATS.io instance and multiple SQLite databases (1-15 per app). All writes would go to one of the VMs in the primary region, and reads would be distributed to the backup in the primary region and the three secondary regions depending on the request location.

Because these apps are low usage, we want to shut down all VMs when not in use. We are OK with the cold-start delay, even on the primary. What I am trying to understand is how feasible this approach is with LiteFS. There is a possibility that one of the VMs in a secondary region has been down for multiple days while the primary has been chugging along. From the docs, it sounds as though this is fine, though there will be a period of time before the secondary catches up. Depending on how long that takes, we may well reroute requests to another secondary until it is caught up. Maybe we also light up all the secondaries after, say, six hours of downtime so they refresh themselves periodically and the data lag is not so bad.

What are the potential pitfalls to this approach?

Also, is there any lag with LiteFS when starting a Firecracker VM? Or can I expect it to normally fire up within 300-500ms even with LiteFS?


This approach works fine for the secondaries, although you’ll probably want to keep your two primary nodes up at all times. If your primaries shut down for a while, one could lag far behind the other and then be promoted to primary. Another option is to stream data into LiteFS Cloud so that your primary always has the latest data to pull from when it comes up.

LiteFS nodes retain transaction change sets for a certain period of time (10m by default) so that nodes that disconnect can catch up without requiring a full snapshot download. If you’re going to have nodes that are offline for much longer, then you’ll want to increase the data.retention field in the litefs.yml config. You can find more info in the docs.
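For reference, a longer retention might look like this in litefs.yml. This is a minimal sketch: data.retention is the setting discussed above, while the directory path and the "72h" value are placeholder examples you’d tune to your own downtime window and disk budget.

```yaml
# litefs.yml (fragment, illustrative values)
data:
  dir: "/var/lib/litefs"   # example path where LiteFS keeps its internal data
  retention: "72h"         # keep change sets for 3 days instead of the 10m default
```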

As for lag, LiteFS generally starts up very quickly. It simply needs to make a call to Consul to find the current primary and then connect to it and stream changes down. Latency is largely determined by how many writes you get to your database and how long it takes for the replica to download the changes. How large is your database and how many writes per second (or per minute or per hour) does it receive?


That makes sense. Is there a way to keep just one primary and prevent the secondaries from taking over? If the primary goes down and comes back up relatively quickly, I’d be OK with the delay - basically going fully "serverless" on everything. The databases would range from 1-10 GB, and writes could be measured in hours: maybe 50 over an hour, concentrated in a couple of 3-5 minute spans.

As long as your secondaries have lease.candidate set to false then they won’t try to become the primary. They’ll stay as read-only replicas.
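In litefs.yml on a secondary, that could look like the following sketch. Only lease.candidate is the setting under discussion; the Consul lease type is an assumption based on the Consul lookup mentioned earlier in this thread.

```yaml
# litefs.yml on a secondary (fragment, illustrative)
lease:
  type: "consul"     # assumes the Consul-based lease discussed above
  candidate: false   # never run for primary; stay a read-only replica
```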

It sounds like you could bump up your data.retention relatively high (e.g. "24h") and be ok. The main downside to higher retention is increased disk space for the files. The change sets are compressed though so that typically helps quite a bit. You’ll just need to test it out for your use case and see how large the retained files are and then adjust your volume sizes accordingly.

If I were to pack hundreds of databases into one VM, is there a fixed size for the single replication stream from the FUSE mount? Any suggested limits? The FAQ seems to indicate 100 writes per second. I assume that is for the entire FUSE dir, not for each database?

There aren’t any limits for the stream except that each database will require some RAM and additional file descriptors.

You should see higher than 100 writes per second if you’re using multiple databases. The main limitation is that writes are serialized for a single database and there’s FUSE call latency; however, multiple FUSE calls can occur simultaneously across different databases independently.


What would the impact be if databases are opened and closed per web request? A test we ran indicates about 15ms of overhead per request with this approach, with 100 simultaneous clients opening 100 different databases, which is more than acceptable for our purposes. The overhead is the same whether we use LiteFS or not, which makes sense: my assumption is that FUSE only adds cost on writes, so it shouldn’t matter whether we open and close the databases per request or open them once and keep them in memory. Is this assumption correct, or am I missing an important point?

Ultimately, our approach will be to leave the most-used databases open in memory and close the ones not used recently, but we figured we’d start with the more extreme approach first to figure out the pros and cons.
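As a sketch of that hybrid approach, here is a minimal LRU-style cache of open database handles. It is library-agnostic: openDb and closeDb are hypothetical callbacks for whatever SQLite driver you use (e.g. constructing and closing a better-sqlite3 Database), and the eviction limit is an assumption you would tune.

```javascript
// Minimal sketch: keep the most recently used database handles open,
// close the least recently used one when the limit is exceeded.
// openDb/closeDb are injected so any SQLite driver can be plugged in.
class DbCache {
  constructor(maxOpen, openDb, closeDb) {
    this.maxOpen = maxOpen;
    this.openDb = openDb;
    this.closeDb = closeDb;
    this.handles = new Map(); // Map insertion order doubles as LRU order
  }

  get(path) {
    let db = this.handles.get(path);
    if (db) {
      // cache hit: re-insert to mark as most recently used
      this.handles.delete(path);
      this.handles.set(path, db);
      return db;
    }
    db = this.openDb(path);
    this.handles.set(path, db);
    if (this.handles.size > this.maxOpen) {
      // evict the least recently used handle (first Map entry)
      const [oldPath, oldDb] = this.handles.entries().next().value;
      this.handles.delete(oldPath);
      this.closeDb(oldDb);
    }
    return db;
  }
}
```

A request handler would then call `cache.get(dbPathForTenant(req))` instead of opening a database directly, so hot databases stay open while cold ones are closed automatically.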

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.