Confusion on LiteFS and multiple DBs and Clusters and NocoDB

I have some confusion about how LiteFS works and how to set the following up. Maybe others will have a different approach. I know this is complicated, and I am trying to lay out all the questions before I start work. Sorry for the big write-up.

The general thing I am trying to solve for:

Have NocoDB set up as a back-of-house admin panel, and have a website set up that can only read from that data. Likewise, NocoDB can import the website's dataset as read-only. If anything needs to be replicated, I can use webhooks and route traffic to the correct primary and app using the .local.

The idea here is that I don't want NocoDB to serve traffic to the website. I am OK waiting for replication to occur before the data is available to the sites. And the website should only write to its own database, because it's user-generated content; I want to expose it for review in NocoDB, but not allow editing as of right now.

My thought for setting this up -

I want to run NocoDB as a separate app, and only it will be the primary and control writes to the databases it owns. NocoDB suggests keeping its meta database separate from the data created by NocoDB users.

litefs.nocodb.yaml

fuse:
  dir: "/usr/app/data/"
data:
  dir: "/var/lib/litefs-nocodb"

Would this be one LiteFS Cloud instance and one volume?

And in that volume, would there be two databases?

Will LiteFS and LiteFS Cloud reach both databases and sync them?

Will one Consul lease handle both DBs?
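My reading of the LiteFS config docs is that one Consul lease covers a whole cluster (so both databases in one mount), and that two separate clusters would each need a distinct Consul key. A sketch of what I mean (the key names are my own guesses, not anything documented):

```yaml
# litefs.nocodb.yaml — lease section sketch; key names are illustrative
lease:
  type: "consul"
  candidate: true
  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/nocodb-cluster"

# litefs.web.yaml would use a distinct key, e.g. "litefs/web-cluster"
```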

For the web app, I will be using Go. I plan to set up another app with a separate LiteFS cluster and a different location for the data. The idea is that using different data directories lets me run two LiteFS clusters side by side.

litefs.web.yaml

fuse:
  dir: "/litefs/dbs"
data:
  dir: "/var/lib/litefs-web"

Do I need to mount it as a separate volume, or can it be a folder within a shared volume?

According to the Machines API docs, only one volume per machine is supported. Does that apply when not using the API directly?
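For what it's worth, fly.toml does let you scope mounts to process groups, which might be the cleaner way to give each process its own volume. A sketch (volume names here are made up):

```toml
# sketch: per-process volume mounts in fly.toml; volume names are illustrative
[[mounts]]
  source = "litefs_nocodb"
  destination = "/var/lib/litefs-nocodb"
  processes = ["nocodb"]

[[mounts]]
  source = "litefs_web"
  destination = "/var/lib/litefs-web"
  processes = ["web"]
```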

The part I then struggle with is how to share the clusters across the other apps.

Would each deploy run two litefs mount commands?

For example, in a process group in fly.toml:

[processes]
  web = "litefs mount -config /path/to/litefs.web.yaml; litefs mount -config /path/to/litefs.nocodb.yaml"
  nocodb = "litefs mount -config /path/to/litefs.nocodb.yaml; litefs mount -config /path/to/litefs.web.yaml"

Does that work? Is this the best approach?
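One thing I noticed while writing this: chained with `;`, the second litefs mount would presumably never start, since the first one blocks in the foreground. A wrapper script that backgrounds the replica mount might work instead (just a sketch; paths as above):

```sh
#!/bin/sh
# Sketch: background the replica-only mount, keep the candidate mount
# in the foreground so the machine's main process tracks it.
litefs mount -config /path/to/litefs.nocodb.yaml &
exec litefs mount -config /path/to/litefs.web.yaml
```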

Another item that is hurting my head: if my PRIMARY_REGION is set to ord, how do I ensure that only machines in the nocodb process group can become primary when running the litefs.nocodb.yaml config, and only machines in the web process group can become primary when running the litefs.web.yaml config?

I don't see process-specific environment variables in fly.toml that could identify a machine as a candidate. So candidate selection would be: is in the primary region, and is allowed to be primary.

Finally, nocodb and web should each only run in their own app. So what is the correct way to use the LiteFS config's exec, so I can get replication onto the other server (nocodb → web), but only run the actual application (nocodb) on the primary of the nocodb process group? The inverse for the web process group as well.

exec:
  - cmd: "run nocodb"
    if-candidate: true

Would this hang forever if the machine was not a candidate? Would the machine shut down eventually?

NocoDB will be limited to 1 machine, because it does not have a read-only mode; even with migrations disabled it still tries to write.

Eventually, I think I would remove the external IPs and use a Fly tunnel to access the admin panel itself, for extra security.
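A sketch of what I mean by that lockdown, releasing the public IPs and reaching the app over WireGuard (the app name "nocodb-admin" is a placeholder):

```sh
# Sketch: make the admin app private; "nocodb-admin" is a placeholder name.
fly ips list -a nocodb-admin        # see what addresses are allocated
fly ips release <address> -a nocodb-admin
fly wireguard create                # then reach nocodb-admin.internal over the tunnel
```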

This is why I ruled out running both processes in the same VM and proxying the connection to a different port with a rewrite in the web server.
