LiteFS Cloud with PocketBase

Has anybody used LiteFS Cloud with PocketBase?

Why am I asking?

I deployed my PocketBase app in the ams region, but I want to reduce latency for users closer to North America or Asia, so I plan to clone the app to those regions.

As it stands, a PocketBase app needs a persistent volume. That volume can be cloned to other regions, but the clones won't be replicated (kept in sync).

I know that PocketBase isn't designed for high availability, but this article might help me solve the problem; I haven't tried it yet.

Here is my Dockerfile:

FROM alpine:latest

ARG PB_VERSION=0.16.7

RUN apk add --no-cache \
    unzip \
    openssh

# Download and unzip PocketBase
ADD https://github.com/pocketbase/pocketbase/releases/download/v${PB_VERSION}/pocketbase_${PB_VERSION}_linux_amd64.zip /tmp/pb.zip
RUN unzip /tmp/pb.zip -d /pb/

# Install required packages
RUN apk add --no-cache \
    ca-certificates \
    fuse3 \
    sqlite

COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
COPY litefs.yml /litefs/litefs.yml

# EXPOSE 8080

ENTRYPOINT litefs mount
# Start PocketBase
# ENTRYPOINT ["litefs", "-addr", ":8081", "-dsn", "/litefs/db"]
# CMD ["/pb/pocketbase", "serve", "--http=0.0.0.0:8080"]
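For the `consul` lease in litefs.yml below to work, the app needs the `FLY_CONSUL_URL` secret, which Fly.io sets when you attach its managed Consul cluster. A sketch of the deploy steps I'd expect, assuming the `fly` CLI and that the app has already been created:

```shell
# Attach Fly's managed Consul cluster; this sets FLY_CONSUL_URL
# as a secret on the app (used by the lease.consul section).
fly consul attach

# Build and deploy the image from the Dockerfile above.
fly deploy
```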


litefs.yml:

# The fuse section describes settings for the FUSE file system. This file system
# is used as a thin layer between the SQLite client in your application and the
# storage on disk. It intercepts disk writes to determine transaction boundaries
# so that those transactions can be saved and shipped to replicas.
fuse:
  dir: "/litefs"

# The data section describes settings for the internal LiteFS storage. We'll
# mount a volume to the data directory so it can be persisted across restarts.
# However, this data should not be accessed directly by the user application.
data:
  dir: "/var/lib/litefs"

# This flag ensures that LiteFS continues to run if there is an issue on startup.
# It makes it easy to ssh in and debug any issues you might be having rather
# than continually restarting on initialization failure.
exit-on-error: false

# This section defines settings for the optional HTTP proxy.
# This proxy can handle primary forwarding & replica consistency
# for applications that use a single SQLite database.
proxy:
  addr: ":8080"
  target: "localhost:8081"
  db: "db"
  passthrough:
    - "*.ico"
    - "*.png"

# This section defines a list of commands to run after LiteFS has connected
# and sync'd with the cluster. You can run multiple commands but LiteFS expects
# the last command to be long-running (e.g. an application server). When the
# last command exits, LiteFS is shut down.
exec:
  - cmd: "/pb/pocketbase serve --http=0.0.0.0:8080"

# The lease section specifies how the cluster will be managed. We're using the
# "consul" lease type so that our application can dynamically change the primary.
#
# These environment variables will be available in your Fly.io application.
lease:
  type: "consul"
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true

  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/${FLY_APP_NAME}"

fly.toml:

# fly.toml app configuration file generated for mili-lifets-pocketbase on 2023-07-05T21:40:03+02:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#

app = "mili-lifets-pocketbase"
primary_region = "ams"

[[mounts]]
  source = "litefs"
  destination = "/var/lib/litefs"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0

Now I get: ERROR: config file not found
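If I read the LiteFS docs right, `litefs mount` searches for its config in the current directory and at /etc/litefs.yml by default, but my Dockerfile copies it to /litefs/litefs.yml, which is also the FUSE mount directory (so the mount would shadow the file anyway). A sketch of the fix, assuming the default search path:

```dockerfile
# Copy the config to a path LiteFS searches by default,
# outside the FUSE mount directory.
COPY litefs.yml /etc/litefs.yml
```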

The PocketBase author says:

Keep in mind that while you can use litefs or marmot to replicate the database, PocketBase wasn’t designed with multiple servers in mind and some things may not work as expected even if you sync the db files across the servers - custom event hooks, realtime subscriptions and anything else that relies on the core.App instance since technically you’ll have multiple separate applications and there is no builtin mechanism at the moment that will keep them in sync.
