How do you properly configure a LiteFS deployment?

I’m trying to follow the tutorial and the example includes some things that I’m not certain are necessary for LiteFS to be deployed properly. Can I get some help understanding each piece that’s required? So far, here’s what I’ve gathered:

  1. /etc/litefs.yml
  2. experimental.enable_consul = true set in fly.toml

I noticed in the litefs.yml in the example it has this line:

# The path to where the underlying volume mount is.
data-dir: "/mnt/data"

I’m not sure what that is, or whether I need to configure something like that in my app other than ensuring that directory is created in my Dockerfile.

I also noticed that the example Dockerfile adds FUSE and SQLite, but I’m not sure whether those are required (my non-LiteFS app runs fine without them currently).

Also, the example Dockerfile lists ENTRYPOINT "litefs". Is this required? It seems the getting-started guide leaves a lot for people to dig out of the example to understand how things are supposed to be configured. This is especially challenging when trying to deploy a server written in a different language :grimacing:

EDIT: I also noticed that the base image for the Docker image is:

# Fetch the LiteFS binary using a multi-stage build.
FROM flyio/litefs:pr-109 AS litefs

Currently mine is set to FROM node:18-bullseye-slim as base. I’m a Docker noob and at this point I have no idea how to get LiteFS running with Node. It seems like the example needs LiteFS to be the primary process, with my app running as a subprocess. I’m not sure how to accomplish this.

EDIT: jk, my dockerfile does add sqlite, and I just discovered that fuse is used for logging.

My primary question remains: “How do I get LiteFS installed into my docker container if I also need Node.js in there to install dependencies and run my app?”

Building upon the litefs-example:

# Fetch the LiteFS binary using a multi-stage build.
FROM flyio/litefs:pr-109 AS litefs

# Our final Docker image stage starts here.
FROM node:alpine

COPY . .

# Install node deps, if needed
# RUN npm install

COPY --from=litefs /usr/local/bin/litefs /usr/local/bin/litefs

# Setup our environment to include FUSE & SQLite.
RUN apk add bash curl fuse sqlite

# Ensure the mount & data dirs exist, as required by LiteFS.
RUN mkdir -p /data /mnt/data

# Run LiteFS as the entrypoint so it can execute our app as a subprocess.
ENTRYPOINT ["litefs"]

CMD ["npm", "run"]
# or: CMD ["node", "/path/to/index.js"]

From a cursory reading of the code, it looks like the litefs binary can be used as an entrypoint, as in, it can exec sub-commands. In our case, that sub-command is either npm start or node /path/to/index.js.

In Docker, when an entrypoint is left undefined, I believe the container simply runs CMD (a shell-form CMD gets wrapped in /bin/sh -c). When both are defined, the entrypoint is in charge and the CMD values are appended to it as arguments (see the Docker docs on how CMD and ENTRYPOINT interact).

Good questions. I’ll try to clear them up here.

The litefs.yml file is required for configuration; however, enable_consul is only required if you use Consul leases in LiteFS and don’t want to run your own Consul instance. You can use static leases and avoid this setting.
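
For reference, that’s the setting mentioned at the top of the thread; it lives in fly.toml and is only needed for the Consul-lease setup:

# fly.toml (only needed when using Consul leases)
[experimental]
  enable_consul = true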

fuse is added for fusermount, but perhaps your base image already includes it. sqlite is added just so you can use the sqlite3 command line, although a Node app may link to it as well. LiteFS itself doesn’t have a SQLite dependency.

litefs is the entrypoint so that it can start up before your application. The exec field in litefs.yml is what it will execute as a subprocess. It basically works as a simple process supervisor.
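
As a rough sketch of what that config might look like (the data-dir value comes from the example above; the mount directory and exec command here are placeholders, and key names can vary between LiteFS versions):

# litefs.yml (sketch; key names can vary by LiteFS version)
# Where the FUSE filesystem is mounted for your app to read/write
# (assumed to be /data, matching the mkdir in the Dockerfile above).
mount-dir: "/data"
# Where LiteFS keeps the underlying data it replicates.
data-dir: "/mnt/data"
# The command LiteFS runs as a subprocess once the mount is ready.
exec: "node /path/to/index.js"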

The alternative is to run litefs as a background job, but your app will need to handle waiting for litefs to get set up in that case. We’re also working on making litefs run as a sidecar so the setup is less confusing.
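
A rough sketch of that background-job approach could look something like the script below; the config path, mount directory, and readiness check are placeholders, not something we ship:

#!/bin/sh
set -e
# Start LiteFS in the background.
litefs -config /etc/litefs.yml &
# Crude wait: block until the FUSE mount (assumed to be /data) shows up.
until mountpoint -q /data; do
  sleep 1
done
# Then run the app as the main process.
exec node /path/to/index.js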

That totally makes sense. We’re working on making more of this transparent so you don’t have to configure as much, but improving the docs would help in the short term. All your feedback has been a ton of help tbh.

This is actually just to pull in the litefs binary from another Docker image; it’s not actually the base. By specifying AS litefs we can reference that stage later in our Dockerfile to copy the binary into our final Docker image.
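
Concretely, these are the two lines from the example above that work together:

FROM flyio/litefs:pr-109 AS litefs
# ...later, in the final stage...
COPY --from=litefs /usr/local/bin/litefs /usr/local/bin/litefs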

Thank you for the explanation. I’m sorry for my lack of knowledge here. Thanks for your patience.

I think I understand what needs to be done. If you’re interested in taking a look at what I’ve got so far, please do. This is building now: “whelp, litefs here we go” (kentcdodds/kentcdodds.com@2a16104 on GitHub). It’s all in the dev branch: kentcdodds/kentcdodds.com at dev on GitHub.

Thanks, @kentcdodds. I’ll take a look. I also popped onto your YouTube stream and your Discord if you need a hand debugging.

Why not make the litefs binary support the CMD Dockerfile directive (if it doesn’t already)? I imagine the argv from CMD is passed into litefs’s main as-is?

Yeah, that’s a good idea. I added an issue to track it: Support `exec` command from CLI arguments · Issue #133 · superfly/litefs · GitHub

I implemented the exec flags on the CLI in this PR. It requires using a double dash and then listing the args afterward. I went with this approach instead of just using the full args list since litefs may allow commands in the future.

e.g.

# Execute "myapp -addr :3000" as a child process of LiteFS
$ litefs -config /path/to/litefs.yml -- myapp -addr :3000

So does this mean it’s better now to use CMD in the Dockerfile than exec in litefs.yml? Which would you recommend?

Both will work the same; however, specifying the subprocess command in the Dockerfile can make it easier to see what LiteFS is executing without having to dig into additional config files.

Another benefit of the CLI arg list is that it allows you to override the command if you’re trying to debug an issue and temporarily don’t want to start your app, or you want to temporarily change the args to your app.

As for CMD vs ENTRYPOINT, there’s not much difference. ENTRYPOINT is not typically overridable with Docker; however, Fly.io constructs a Firecracker VM out of the Docker layers, so we actually let you override either the entrypoint or the cmd in your fly.toml.
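
For example, something along these lines in fly.toml (double-check the exact key names against the Fly.io docs for your version):

# fly.toml (sketch; verify key names against the Fly.io docs)
[experimental]
  entrypoint = ["litefs"]
  cmd = ["npm", "start"]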

There are some rather subtle differences (:

Is this a correct way to use entrypoint and cmd for litefs?

ENTRYPOINT ["litefs", "--"]
# or: ENTRYPOINT ["litefs", "-config", "/path/to/litefs.yml", "--"]
CMD ["npm", "run"]
# or: CMD ["myapp", "-addr", ":3000"]

Yes, that will work to have it execute litefs -- npm start on startup. That’ll let you change the CMD args if you’re running Docker locally.
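
For example, something like this keeps the litefs entrypoint but swaps in a different child command (the node path is just a placeholder):

# ENTRYPOINT ["litefs", "--"] still applies; only CMD is overridden
$ docker run myimage node /path/to/index.js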

If you don’t need to change the child process invocation, then I’d just use one or the other since it’s a bit clearer:

ENTRYPOINT ["litefs", "--", "npm", "run"]
# or
CMD ["litefs", "--", "npm", "run"]

The CMD version might even be preferred since that means you can override it and not run litefs (e.g. if you’re running locally for dev):

$ docker run myimage npm start