I’m trying to follow the tutorial, and the example includes some things that I’m not certain are necessary for LiteFS to be deployed properly. Can I get some help identifying each piece that’s required? So far, here’s what I’ve gathered:
/etc/litefs.yml
experimental.enable_consul = true set in fly.toml
I noticed in the litefs.yml in the example it has this line:
# The path to where the underlying volume mount is.
data-dir: "/mnt/data"
I’m not sure what that is, or whether I need to configure anything like that in my app beyond ensuring that directory is created in my Dockerfile.
I also noticed that the example Dockerfile adds FUSE and SQLite, but I’m not sure whether those are required (my non-LiteFS app currently runs fine without them).
Also, the example Dockerfile lists ENTRYPOINT "litefs". Is this required? It seems the getting-started guide leaves a lot for people to dig out of the example to understand how things are supposed to be configured. This is especially challenging when trying to deploy a server in a different language.
EDIT: I also noticed that the base image for the Docker image is:
# Fetch the LiteFS binary using a multi-stage build.
FROM flyio/litefs:pr-109 AS litefs
Currently mine is set to FROM node:18-bullseye-slim as base. I’m a Docker noob, and at this point I have no idea how to get LiteFS running with Node. It seems like the example needs LiteFS to be the primary process, with my app running as a subprocess. I’m not sure how to accomplish this.
EDIT: jk, my Dockerfile does add SQLite, and I just discovered that FUSE is used for logging.
My primary question remains: “How do I get LiteFS installed into my docker container if I also need Node.js in there to install dependencies and run my app?”
# Fetch the LiteFS binary using a multi-stage build.
FROM flyio/litefs:pr-109 AS litefs
# Our final Docker image stage starts here.
FROM node:alpine
COPY . .
# Install node deps, if needed
# RUN npm install
COPY --from=litefs /usr/local/bin/litefs /usr/local/bin/litefs
# Setup our environment to include FUSE & SQLite.
RUN apk add bash curl fuse sqlite
# Ensure mount & data dirs exists as req by LiteFS.
RUN mkdir -p /data /mnt/data
# Run LiteFS as the entrypoint so it can execute "litefs-example" as a subprocess.
ENTRYPOINT ["litefs"]
CMD ["npm", "run"]
# or: CMD ["node", "/path/to/index.js"]
Per a cursory reading of the code, it looks like the litefs binary can be an entrypoint (code); that is, it can exec sub-commands. In our case, that sub-command is either npm start or node /path/to/index.js.
In Docker, when an ENTRYPOINT is left undefined, the CMD is run on its own (shell-form commands are wrapped in /bin/sh -c). When both are defined, the ENTRYPOINT is in charge and the CMD is passed to it as arguments (see the Dockerfile reference).
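For example, a minimal illustration of that Docker behavior (index.js is a placeholder):
# With both defined, Docker concatenates them:
ENTRYPOINT ["litefs"]
CMD ["node", "index.js"]
# ...so the container effectively starts: litefs node index.js
Whether litefs accepts those trailing args directly or needs the double-dash form mentioned further down depends on the CLI version.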
The litefs.yml file is required for configuration; however, enable_consul is only required if you use Consul leases in LiteFS and don’t want to run your own Consul instance. You can use static leases and avoid that configuration entirely.
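For reference, here’s a rough static-lease sketch; the field names follow the lease section in the current LiteFS docs and may differ for the version in this example, and the hostname/URL are placeholders:
lease:
  type: "static"
  # With a static lease, the candidate node is always the primary.
  candidate: true
  hostname: "my-primary"
  advertise-url: "http://my-primary:20202"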
fuse is added for fusermount, although your base image may already include it. sqlite is added just so you can use the sqlite3 command-line tool, although a Node app may link to it as well. LiteFS itself doesn’t have a SQLite dependency.
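If your base image is Debian-based like node:18-bullseye-slim, the rough equivalent of the apk line in the Dockerfile above would be something along these lines (package names assumed from the Debian repos):
RUN apt-get update && apt-get install -y fuse sqlite3 && rm -rf /var/lib/apt/lists/*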
litefs is the entrypoint so that it can start up before your application. The exec field in litefs.yml is what it will execute as a subprocess. It basically works as a simple process supervisor.
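A minimal litefs.yml sketch along those lines, assuming the top-level mount-dir/data-dir/exec fields shown in the example config (paths and command are placeholders):
# FUSE mount where your app opens its SQLite database.
mount-dir: "/data"
# The path to where the underlying volume mount is.
data-dir: "/mnt/data"
# Command LiteFS runs as a subprocess once the mount is ready.
exec: "npm start"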
The alternative is to run litefs as a background job, but then your app needs to handle waiting for litefs to finish setting up. We’re also working on making litefs run as a sidecar so the setup is less confusing.
That totally makes sense. We’re working on making more of this transparent so you don’t have to configure as much but improving docs would help in the short term. All your feedback has been a ton of help tbh.
This is actually just to pull in the litefs binary from another Docker image; it’s not the base of your final image. By specifying "AS litefs" we can reference that stage later in the Dockerfile to copy the binary into our final Docker image:
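# Copy the litefs binary from the "litefs" stage into the final image.
COPY --from=litefs /usr/local/bin/litefs /usr/local/bin/litefs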
Why not make the litefs binary support the CMD Dockerfile directive (if it doesn’t already)? I imagine the argv from CMD is passed into litefs’s main as-is?
I implemented the exec flags on the CLI in this PR. It requires using a double dash and then listing the args afterward. I went with this approach instead of just using the full args list since litefs may allow commands in the future.
e.g.
# Execute "myapp -addr :3000" as a child process of LiteFS
$ litefs -config /path/to/lite.yml -- myapp -addr :3000
Both will work the same; however, specifying the subprocess command in the Dockerfile can make it easier to see what LiteFS is executing without having to dig into additional config files.
Another benefit of the CLI arg list is that it allows you to override it when debugging an issue, for example if you temporarily don’t want to start your app or you want to change its args.
As for CMD vs ENTRYPOINT, there’s not much difference. ENTRYPOINT is not typically overridden with Docker; however, Fly.io constructs a Firecracker VM out of the Docker layers, so we actually let you override either the entrypoint or the cmd in fly.toml.
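For example, a rough fly.toml sketch of that override (assuming the [experimental] section accepts entrypoint/cmd lists; the command itself is a placeholder):
[experimental]
  enable_consul = true
  # Override either one here instead of in the Dockerfile:
  # entrypoint = ["litefs"]
  cmd = ["npm", "start"]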