Can I share a LiteFS SQLite DB across separate apps?

Hello,

I am using Kent Dodds’s Epic Stack, so my litefs.yml looks like this: https://github.com/epicweb-dev/epic-stack/blob/main/other/litefs.yml

In this setup, each of my apps, production and staging, has its own sqlite.db file and hence its own data.

Is it possible to point both my production and staging apps at the same database file, so they reuse the same data? I know LiteFS can be distributed within the same app by scaling a primary instance up to multiple replicas; my question is about entirely separate apps (potentially created under different Fly accounts). Another use case would be using production data in my local environment.

With Postgres or Mongo I would just reuse the same connection string or database URL on each app. What is the equivalent here?

I read this thread: Sqlite/liteFS with "app servers" and a "worker server"

I’m still not clear, though, on whether that thread is up to date or how to go about implementing it. What is the unique DB identifier each app needs to point to?

Thanks!

Here is an answer to your question.

If you want to do the same as you do with Postgres or Mongo, and solve the different Fly accounts issue, you can use Query (I’m the author).

Query is a Rust server for your remote SQLite databases and a CLI to manage them.

To set up a staging environment, you can create a branch from the production database and work with it. Essentially, a branch is a copy of the database: it lets you use the same data as production and modify it without affecting your production environment, so any changes can be safely tested and refined before being applied to the live environment.

Another benefit of Query is that you can use Query Studio to explore and manipulate your remote SQLite databases.

I apologize for the self-promotion :wink:

Hey @gcv, thanks a lot for the answer. No worries about the plug; I need something just like Query Studio! I’ll check it out as soon as I get this working.

I attempted to reuse the same key and FLY_CONSUL_URL as per Ben’s suggestion, but I’m now getting this error: cannot connect, "consul" lease already initialized with different ID: LFSC829A040769612D30

Here are my Fly logs:

bos [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount` as root
bos [info] INFO [fly api proxy] listening at /.fly/api
bos [info]2023/10/30 11:23:58 listening on [fdaa:0:56a6:a7b:ec:e3ea:6ad6:2]:22 (DNS: [fdaa::3]:53)
bos [info]config file read from /etc/litefs.yml
bos [info]LiteFS v0.5.0, commit=39b247aa6b5b9cce970cccd61d0024c6f32aa732
bos [info]level=INFO msg="host environment detected" type=fly.io
bos [info]level=INFO msg="no backup client configured, skipping"
bos [info]level=INFO msg="Using Consul to determine primary"
bos [info]level=INFO msg="initializing consul: key=litefs/remilia-platform-0aa9-v2 url=https://:a38b2643-48e3-ef03-c8eb-54e06346320a@consul-iad-8.fly-shared.net/remilia-platform-0aa9-yexkqwo03kn1m38d/ hostname=e2865517fe5078 advertise-url=http://e2865517fe5078.vm.remilia-platform-0aa9.internal:20202"
bos [info]level=INFO msg="wal-sync: no wal file exists on \"cache.db\", skipping sync with ltx"
bos [info]level=INFO msg="wal-sync: no wal file exists on \"sqlite.db\", skipping sync with ltx"
bos [info]level=INFO msg="LiteFS mounted to: /litefs/data"
bos [info]level=INFO msg="http server listening on: http://localhost:20202"
bos [info]level=INFO msg="waiting to connect to cluster"
bos [info]level=INFO msg="cannot connect, \"consul\" lease already initialized with different ID: LFSC829A040769612D30"
bos [info]level=INFO msg="cannot connect, \"consul\" lease already initialized with different ID: LFSC829A040769612D30"
bos [info]level=INFO msg="cannot connect, \"consul\" lease already initialized with different ID: LFSC829A040769612D30"
bos [info]level=INFO msg="cannot connect, \"consul\" lease already initialized with different ID: LFSC829A040769612D30"
bos [info]level=INFO msg="cannot connect, \"consul\" lease already initialized with different ID: LFSC829A040769612D30"

This is the lease section of my litefs.yml (the whole thing):

lease:
  type: "consul"
  # candidate: ${FLY_REGION == PRIMARY_REGION}
  candidate: false
  promote: true
  # advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
  advertise-url: "http://3d8d99e4b3d268.vm.remilia-platform-0aa9.internal:20202"

  consul:
    # url: "${FLY_CONSUL_URL}"
    url: "https://:a38b2643-48e3-ef03-c8eb-54e06346320a@consul-iad-8.fly-shared.net/remilia-platform-0aa9-yexkqwo03kn1m38d/"
    # key: "litefs/${FLY_APP_NAME}-v2" # added '-v2'
    key: "litefs/remilia-platform-0aa9-v2"

As you can see, I commented out the env vars and hard-coded the url and key to match my production app.
For context, my two apps are:

  1. remilia-platform-0aa9 (prod)
  2. remilia-platform-0aa9-staging (staging)

I got the consul.url above by looking at the env on my currently running prod app.

@benbjohnson any pointers would be appreciated sir :pray:

All you need to do is share the same key and the FLY_CONSUL_URL. The other changes are unnecessary.
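
For anyone following along, here is a minimal sketch of what the staging app’s lease section could look like under that suggestion. It assumes the Epic Stack defaults shown above and the production app name from this thread; the only deliberate change from the default is the hard-coded key, plus FLY_CONSUL_URL on staging holding the production app’s Consul URL:

lease:
  type: "consul"
  # Default Epic Stack value; setting this to false instead would keep the
  # staging app as a read-only replica that never becomes primary.
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    # FLY_CONSUL_URL on the staging app must hold the production app's
    # value so both apps talk to the same Consul cluster.
    url: "${FLY_CONSUL_URL}"
    # Hard-coded to the production app's key (rather than ${FLY_APP_NAME})
    # so both apps share the same lease key; the config above adds a "-v2" suffix.
    key: "litefs/remilia-platform-0aa9"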

The problem is getting the FLY_CONSUL_URL. When you use fly consul attach, it is set as a secret. How did you get it?

I ran the fly consul attach command back when I created the app.

I was able to read it by SSH’ing into my prod app and running:

fly ssh console --app remilia-platform-0aa9

root@3d8d99e4b3d268:/myapp# env | grep -i consul
FLY_CONSUL_URL=https://:a38b2643-48e3-ef03-c8eb-54e06346320a@consul-iad-8.fly-shared.net/remilia-platform-0aa9-yexkqwo03kn1m38d/
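
In case it helps anyone else, one way to reuse that value on the staging app is to set it there as a secret. This is a sketch assuming the app names from this thread, with <token> standing in for the token value shown above; fly secrets set will restart the staging machines:

# Point the staging app at the production Consul cluster, so that
# ${FLY_CONSUL_URL} in its litefs.yml resolves to the same URL as prod.
fly secrets set \
  FLY_CONSUL_URL="https://:<token>@consul-iad-8.fly-shared.net/remilia-platform-0aa9-yexkqwo03kn1m38d/" \
  --app remilia-platform-0aa9-staging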

Restoring the candidate and advertise-url fields to their original values didn’t seem to make a difference; I’m still getting the same “lease already initialized” error. :thinking:

That’s true :+1:

Have you attempted to detach Consul and attach it again?
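
Something along these lines, in case it’s useful (a sketch against the staging app name from this thread; as far as I recall, fly consul detach removes the app’s FLY_CONSUL_URL secret and fly consul attach provisions a fresh one):

# Detach the staging app from its current Consul cluster...
fly consul detach --app remilia-platform-0aa9-staging
# ...then attach again to provision a fresh FLY_CONSUL_URL secret.
fly consul attach --app remilia-platform-0aa9-staging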

Good idea. I tried that, but it didn’t seem to make any difference.

@benbjohnson, sorry to bug you, but I was hoping you had some insight into the “lease already initialized” error above.
