Sqlite/liteFS with "app servers" and a "worker server"

Hi all,

I’m working on a SvelteKit app that we expect to have many, many more DB writes than reads. We’re using SQLite, since (IMO) this seems like a pretty good use case for it.

For now we’re only worrying about deploying in one region, since we only expect Australians to use it.

So far development has been good. I’ve set up a single app with a single volume; the DB lives on that volume at the moment. Everything is fine (although not that scalable?). However, I now need to host a component of our app that runs periodically (say, every 15 minutes). I’d like to have a server that stays up and runs cron (or similar) to do this.

This is where the problem occurs: these instances can’t share a volume (for good reason, as I’ve discovered reading this forum).

It seems like LiteFS is the solution to this problem, but I have a couple of questions.

  1. It seems that LiteFS can’t handle multiple instances of the DB being written to. Is that correct?
  2. In the example Go app this is solved by sending all write requests to the “primary” node, as decided by Consul. Only a small subset of my users (admins) can write data, so my plan is to forward all of them to the primary node. Does this make sense? That way they won’t have any data consistency issues either.
  3. Can I have my “worker” app join the same Consul network as my “app” app? The alternative, it seems, would be to have the “worker” start in the background of the current primary node. That seems problematic if the primary node changes, though (I suppose you’d have to poll constantly to check?).
  4. Is this even the best way to have my “worker” access the database? I guess unless I had some internal API, I’d have to have it always be the primary (which brings us back to the previous idea?).
  5. How would I decide where migrations should run?

Sorry for the muddled thoughts, I’d really appreciate some feedback on how others have gone about achieving similar things.

hey Harrison, good questions. Hopefully I can clear some of it up.

SQLite has a single-writer/many-readers model so it actually excels at the opposite use case: many more reads than writes.

Yes, although this stems from SQLite’s single-writer model more than from LiteFS itself. If your app can handle eventual consistency, then something like Mycelial might work better.

Yes, that makes sense.
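For the forwarding plan, one rough sketch (worth checking against the LiteFS docs for your version): on replica nodes LiteFS writes a `.primary` file into its mount directory containing the primary’s hostname, and the file is absent on the primary itself. On Fly you can then answer a write request with a `fly-replay` header so the proxy re-runs it on the primary. The mount path and helper names below are my own assumptions, not LiteFS APIs:

```typescript
// Sketch: decide whether a request must be replayed on the primary.
import { readFileSync } from 'node:fs';

const WRITE_METHODS = new Set(['POST', 'PUT', 'PATCH', 'DELETE']);

// Returns the primary's hostname, or null if this node is the primary.
// "/litefs" is an assumed mount path.
function readPrimary(mountDir = '/litefs'): string | null {
  try {
    return readFileSync(`${mountDir}/.primary`, 'utf-8').trim();
  } catch {
    return null; // no .primary file: this node is the primary
  }
}

// Returns a `fly-replay` header value, or null to handle the request locally.
function replayTarget(method: string, primary: string | null): string | null {
  if (primary === null) return null;           // we are the primary
  if (!WRITE_METHODS.has(method)) return null; // reads can run anywhere
  return `instance=${primary}`;
}
```

In a SvelteKit `handle` hook you’d call `replayTarget(event.request.method, readPrimary())` and, when it returns a value, respond with that header instead of resolving the request.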

The current version of LiteFS (v0.2.0) requires all writes to go to a single primary node which makes “worker” apps difficult to implement. The next version (v0.3.0) should have “write forwarding” so that non-primary nodes can execute writes locally and automatically forward them to the primary. Write forwarding will be slower than executing writes directly on the primary but it’ll make it easier to use and it’ll better allow for “worker” app use cases.
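Until write forwarding lands, one workaround for the “worker” question is to run the periodic job on every node but only act when the local node happens to be the primary, using that same `.primary` file check. A minimal sketch, where the mount path and `tick()` body are assumptions:

```typescript
// Sketch: a periodic job that only does work on the current primary,
// so it can run on every node without coordination.
import { existsSync } from 'node:fs';

// On replicas LiteFS creates "<mount>/.primary"; on the primary it does not.
function isPrimary(mountDir = '/litefs'): boolean {
  return !existsSync(`${mountDir}/.primary`);
}

async function tick(): Promise<void> {
  if (!isPrimary()) return; // replica this round: skip, but keep polling
  // ... run the 15-minute job against the local SQLite database ...
}

function startWorker(intervalMs = 15 * 60 * 1000) {
  // Whichever node is primary when the timer fires does the work,
  // so a primary change just moves the job automatically.
  return setInterval(tick, intervalMs);
}
```

This sidesteps the “what if the primary changes?” problem, at the cost of re-checking on every tick.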

This is another issue that’s helped by write forwarding, since you’ll be able to run migrations from any node. Migrations are tricky in distributed apps in general, though, since they need to work with multiple versions of the app (multiple versions run at once during a rolling deploy).
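One way to reason about it under the current primary-only-writes model is a small boot-time decision: the primary applies pending migrations, while replicas wait for the migrated schema to arrive via replication instead of crashing. A sketch, where the version numbers stand in for something like SQLite’s `PRAGMA user_version` and all names are hypothetical:

```typescript
// Sketch: what each node should do at boot when the schema may be behind.
type BootAction = 'migrate' | 'wait' | 'start';

function bootAction(isPrimary: boolean, dbVersion: number, appVersion: number): BootAction {
  if (dbVersion >= appVersion) return 'start'; // schema already current
  // Schema is behind: only the primary may write, so only it migrates.
  // Replicas wait (and re-check) for the schema to replicate over.
  return isPrimary ? 'migrate' : 'wait';
}
```

A replica looping on `'wait'` rather than exiting keeps a rolling deploy from failing just because the new instance isn’t the primary yet.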

Let me know if you have any other questions or suggestions for LiteFS. It’s still in its early stages but we want to make it as easy to use as possible.


If you need multi-writer SQLite with high read/write concurrency and strong consistency, mvsqlite may be a suitable choice. The downside is that you need to build and operate a FoundationDB cluster.

Disclaimer: I’m the author of mvsqlite :slight_smile:


Welp, the perils of early adoption… So it’s quite hard to migrate if you have multiple instances and you’re doing rolling deployments, even assuming you have compatible tables. Since each new instance won’t be the primary at first, it can’t migrate the database and can’t start running, so the deployment fails. Write forwarding will make things a whole lot easier :slight_smile:
