LiteFS and Rails with global and local databases

Hi :wave:

I’m looking for some help/advice/feedback on how to achieve my desired architecture. I’m working on a Rails application that would serve local communities, so a lot of the data would be region-specific. Because of this, I would like to have two databases (as far as my app is concerned): one global and one local. In reality I would have 1 + N databases, where N is the number of regions the app is deployed in.
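
For concreteness, here is roughly the shape of config/database.yml I have in mind on the Rails side, using Rails’ multi-database support (the paths and connection names are just placeholders):

production:
  primary:
    adapter: sqlite3
    database: /path/to/global/production.sqlite3
  regional:
    adapter: sqlite3
    database: /path/to/regional/production.sqlite3
    migrations_paths: db/regional_migrate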

I think I have a handle on how to set all this up on the Rails side, but I’m new to SQLite and LiteFS, and I’m unsure exactly what is needed there to get what I want. For instance: can I do all of this with one LiteFS Cloud cluster, or would I need one for each database? One region is going to be the primary on the global database, but each region should have a primary lease on its local database. So I guess I’m unclear on whether the primary lease is per LiteFS cluster or per database within the cluster.

The other question I had was about how multiple machines in one region would operate. It seems to be considered best practice to have at least two machines for high-availability reasons. In the case where both machines are up, is it true that only one would be able to write to the database, and the other would be in a read-only role? Sorry, I’m sure this is so basic, but my only experience has been with single-region deployments with a single primary Postgres database, so the concept of my app being read-only is a new one for me.

Thanks for your time,
Will

I have a Rails app with similar characteristics. Actually, I have one database per event, and events are associated with regions (currently ~30 events across 5 regions). While I’m watching and preparing for LiteFS, I’m not using it… yet. My databases are all SQLite3. I’m also not deploying multiple machines in a single region, but the databases are copied to all other regions when activity goes idle, so redirecting all traffic to another region is a matter of updating a config file and redeploying.

So I’ll state up front that the downsides of my current approach are momentary downtime every time I deploy a new version, and that manual intervention is required to reconfigure regions in case of outages.

I’ve written up my approach here: Around the World With SQLite3 and Rsync · Fly. The index page for events is not password protected, but each individual event requires a password to access: https://smooth.fly.dev/.

If you have questions on how to adapt this approach to meet your needs, feel free to ask them here.


Thank you, Sam! I have read your post, and I found it very helpful! I would not have gotten as far as I have without it.

I’m not sure that I want to go down the route of rsyncing databases around, just because it’s getting further away from my wheelhouse. I was hoping LiteFS would be the more “turnkey” version of that setup, which may or may not be the case.

I will keep tinkering, and I’ll take a closer look at the source code of your shared app to see if I can gain any further insights. Thanks again :pray:

Hi Will! :wave:

You’ll need a global LiteFS cluster, plus one cluster for each of your regions. We’ve had people ask about each database having its own primary, so it’s something we’re considering.

A cluster is identified by the "key" used in the Consul config. You’ll also need separate directories & ports so the two litefs processes don’t overlap. Your two config files could look something like this:

Global Configuration

fuse:
  dir: "/litefs/global"

data:
  dir: "/var/lib/litefs/global"

http:
  addr: ":20202"

lease:
  type: "consul"
  advertise-url: "http://${HOSTNAME}:20202"
  consul:
    key: "${FLY_APP_NAME}/global"

Regional Configuration

fuse:
  dir: "/litefs/regional"

data:
  dir: "/var/lib/litefs/regional"

http:
  addr: ":20203"

lease:
  type: "consul"
  advertise-url: "http://${HOSTNAME}:20203"
  consul:
    key: "${FLY_APP_NAME}/${FLY_REGION}"

Yes, that’s correct. Whichever candidate node has acquired the lease from Consul will be the primary. If it fails, another candidate node will pick up the lease and become the primary instead. LiteFS has a built-in HTTP proxy for automatically redirecting requests to the primary as needed; however, that won’t work here since you have two separate clusters on each machine.

The FUSE mount provides information about which node is primary via the .primary file. There are docs on that here.
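
For example, from Rails you could look up the regional primary with something like this (the mount path matches the regional config above; note that .primary only exists on replica nodes):

# Returns the hostname of the current regional primary, or nil if this
# node holds the primary lease itself.
def regional_primary_hostname
  File.read("/litefs/regional/.primary").strip
rescue Errno::ENOENT
  nil # no .primary file means this node is the primary
end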


Thanks Ben, this is very helpful!

I know that I can respond with a Fly-Replay header to replay a request in a different region. In this case, though, I would want to replay in the same region, just on a different host. Is there a way to specify (ideally based on the contents of the .primary file) which host the request is replayed on?


I also have one other question I forgot about earlier: getting the rails console to work in this environment. In my fly.toml file I have:

console_command = "litefs run -- /rails/bin/rails console"

This seems to be working, because I was able to crudely inspect the file system from my Rails console and see that the mounted /litefs directory was there and contained the database files. However, when I try to query the database, I get a response saying that no tables are defined. So either those files are not syncing over properly, or my app is not running migrations when it deploys.

Just as an experiment, I decided to run the migrations from the console with

ActiveRecord::MigrationContext.new("db/migrate").migrate

After running that, I was able to query my table as expected. I then exited the console (destroying the ephemeral machine it was running on) and started a new console to see if I could query the table, and I once again got an error that no tables were defined. This leads me to believe that something isn’t syncing properly, but I’m not sure how to troubleshoot further.

I can see the database on my LiteFS Cloud dashboard, and I can export it, but the exported file won’t open in my database explorer program.


I’m not sure if this is because the database is completely empty, or because it actually is encrypted.

You can use fly-replay to replay a request to a specific instance. We do it here in the proxy server: https://github.com/superfly/litefs/blob/main/http/proxy_server.go#L232-L233

The contents of the .primary file will be the lease.hostname field from the lease config, which defaults to the value returned by hostname(1) on the current primary node. On Fly, the hostname is the machine instance ID, which you can use in fly-replay:

fly-replay: instance=$MACHINEID
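
So a rough sketch of a Rails filter that forwards writes to the regional primary could look like this (untested, names are illustrative):

class ApplicationController < ActionController::Base
  before_action :forward_writes_to_primary

  private

  def forward_writes_to_primary
    return if request.get? || request.head? # reads can be served locally
    primary = File.read("/litefs/regional/.primary").strip rescue nil
    return if primary.nil? # this node holds the lease, handle the write here
    response.headers["fly-replay"] = "instance=#{primary}"
    head :conflict # Fly's proxy intercepts the header and replays the request
  end
end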

The litefs run command is used for one-off promotion & locking features against an already-running LiteFS mount. You’ll want to use litefs mount to start the LiteFS FUSE mount & server to get everything working.
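
On the migrations question: one option is to run them from the LiteFS config’s exec section so they only execute on a node that can become primary, then start the app afterwards. Roughly (the commands are examples; adjust to your setup):

exec:
  - cmd: "/rails/bin/rails db:migrate"
    if-candidate: true
  - cmd: "/rails/bin/rails server"

Note that a one-off console machine will only see synced data if LiteFS is actually mounted there.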
