Running Reproducible Rust: A Fly and Nix Love Story

Today I want to accomplish a few things: not just share how to leverage a new and effective way to deploy the type of containers you might run on Fly, but (gently) introduce nix in this context in an immediately useful and approachable way.

By the end of this tutorial, we’ll have set up a nix development environment, used it to build our application, and taken it for a spin on Fly. More importantly, we’ll do so in a way that is as reproducible as you can reasonably get with modern software - nix and its inherent properties will ensure that your build process will kick out a reliably consistent Docker container. You might even know enough nix to take things further for your own projects.

If you've stumbled across nix in some other context in the past and found it too opaque, I want to reassure you by saying that this tutorial tries to keep the nix difficulty to a minimum. We're primarily interested in nix here to deliver results and not necessarily wax poetic about the virtues of declarative, functional package management.

A Quick Detour to “Why?”

I alluded to why you might bother with this whole enterprise in my second paragraph. Maybe you need more convincing to read these 4,000 words?

Let me describe a scenario: you’ve written an application in your language of choice. As any reasonable person would, you packaged your application in the most generally-applicable form - a Docker container. Through years of writing them, we’ve grown accustomed to what a Dockerfile is supposed to represent to us: a set of repeatable, “declarative” steps to create a deployment artifact insulated from its heterogeneous surroundings. Your development cycle may hand you a binary, but you build that binary in a container with well-defined packages against libraries you ship with it to ensure that your application doesn’t fall apart down the road when someone updates a shared library underneath its feet. A container with the world inside it is the price we pay for our software development sins.

Fast forward a few years (maybe even just a few months). Does your build process still hold up? Many times, it does. Other times, the tag you’ve pinned in your FROM stanza may point at a more up-to-date container image that has subtly different behaviors. My (least) favorite case is the “broken mirror” scenario: in the course of executing arbitrary apt-get commands, you may fail to fetch repository metadata from now-defunct distribution mirrors. Now you can’t build your deployment artifact at all! Was it all in vain? Will my therapist cover devops trauma?

As much as we tell ourselves that a Dockerfile is a tome of unchanging instructions, the reality is that a Dockerfile is often a pamphlet that gets a little smudged over time and becomes unusable down the road. The examples I’ve cited here are real (and sort of annoying), but they’re not the only quibbles. You may have your own. The point is, when the time comes to reassemble your software - whether for deploying or building or testing - what you really want is a reliable foundation that changes on your terms and when you want it to change.

The NixOS home page does a much better job than I can explaining how nix solves these kinds of problems. What I can add is that nix solves these problems generally, and that although nix has a (mostly accurate) reputation for its steep learning curve, there’s a reason why people continue to pay the learning tax to obtain its promises. Will you write some foreign-looking language in .nix files? Yep. But you’ll gain an asset in your toolbelt that isn’t just another trick in your dotfiles, but a whole new way to solve a host of different problems.

Also, you’ll probably never have to deal with obscure openssl shared library problems again if you build with nix. That’s reason enough for me!

Prerequisites

  • One of the boons when using nix is that the only real prerequisite is to install nix per the documentation. We’ll get the bulk of what we need from nix itself. You can even get flyctl from nix!
    • You probably need to follow these instructions to get a flake-enabled nix operational. The documentation notes that you need an unstable or latest version of nix that understands flakes, but as of this writing, the latest stable nix satisfies that requirement. You likely only need to follow the steps that instruct you to add some lines to ~/.config/nix/nix.conf (see the snippet just after this list). Once nix help flake works, you should be good to go. If the “NixOS” section of the documentation applies to you: hail, comrade, you know what to do.
  • We’re deploying container images, so you should have a functioning Docker installation. I’d love to sandbox this as well, but Docker is a little too low-level, so having it up and ready is something you’ll have to set up on your host operating system of choice.
  • If you really want to supercharge your nix usage, you might want to install and use direnv. It eliminates a number of manual steps and is a quality-of-life improvement that is usually worth the effort.
  • You probably want a Fly account if you’d like to follow this tutorial to its conclusion (deploying your container image).
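
For reference, the flake-enabling change mentioned above is only a line or two of configuration. This is a sketch of what ~/.config/nix/nix.conf typically ends up containing - defer to the official instructions if they’ve changed since this was written:

experimental-features = nix-command flakes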

A Brave New Repository

We’ll start at absolute zero. Remember, at this point, I assume that you have a functional, flake-enabled nix present in your shell.

mkdir flynix
cd flynix
git init

The git initialization is typically just good hygiene, but in the case of a nix flake it’s actually a requirement. Your flake won’t be able to “see” files unless git knows about them, so this part is important (and you’ll run into confusing errors if you forget this step).

A clean slate; an unblemished directory inode completely unaware of our plans. Whereas you might start down the path of installing your language compiler or package manager of choice to start fetching dependencies and building, we’re going to bootstrap what we call a devshell in which we can declaratively manage and use the bits we’ll need using nix. A devshell is just a fancy term for hopping into a shell environment that nix has set up for us, such as adding the right programs we ask for to $PATH.

Here, then, is what you should place in a file called flake.nix. There’s a lot here, but you don’t need to hand-edit any of it, and we’ll walk through a few parts to make it less onerous. If it makes you go glassy-eyed, skip reading it and trust that it works; I’d rather you see the conclusion of the exercise rather than parse it all out line-by-line.

{
  # Credit for starting base for this flake:
  # https://www.srid.ca/rust-nix
  #
  # You can call your flake project anything you like.
  description = "My Fly project";

  # Each input is a (potential) flake dependency that nix will fetch and
  # make available for use as an argument later.
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs?rev=6d8215281b2f87a5af9ed7425a26ac575da0438f";
    utils.url = "github:numtide/flake-utils?rev=bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4";
    rust-overlay.url = "github:oxalica/rust-overlay?rev=85dcf1a4e4897db4420f2c0a3eaf7bb4693914bc";
    devshell.url = "github:numtide/devshell?rev=696acc29668b644df1740b69e1601119bf6da83b";
    crate2nix = {
      url = "github:kolloch/crate2nix?rev=d9854e53b5f17dc2a7fb5e8c7a32bb2299bd0a0a";
      flake = false;
    };
  };

  # As promised, each "input" comes back here as an argument.
  outputs = { self, nixpkgs, utils, rust-overlay, devshell, crate2nix, ... }:
    # "flynix" is another arbitrary name you can choose.
    let name = "flynix"; in
    # This is a minor tool that helps us define everything else that follows
    # once for each of the various "systems" that Nix can build for, like
    # Linux, OS X, etc.
    utils.lib.eachDefaultSystem (system:
      let
        # "pkgs" will be the nixpkgs repository we call later on, but with
        # some "overlays" put on top.
        pkgs = import nixpkgs {
          inherit system;
          overlays = [
            rust-overlay.overlay devshell.overlay
            (self: super: {
              rustc = self.rust-bin.stable.latest.default;
              cargo = self.rust-bin.stable.latest.default;
            })
          ];
        };
        # This just pulls in a function we need.
        inherit (import "${crate2nix}/tools.nix" { inherit pkgs; })
          generatedCargoNix;

        # From here on out, "project" is a value we can use to reference
        # our rust cargo project.
        project = import
          (generatedCargoNix {
            inherit name;
            src = ./.;
          }) { inherit pkgs; };
      in
        rec {
          # "packages" is an attribute that we can define for our
          # "output" to pull out our built rust package.
          packages.${name} = project.rootCrate.build;
          defaultPackage = packages.${name};
          # ...and "container" if we want to ask for the Docker image instead.
          packages.container = pkgs.dockerTools.buildImage {
            inherit name;
            tag = packages.${name}.version;
            created = "now";
            contents = packages.${name};
            config.Cmd = [ "${packages.${name}}/bin/flynix" ];
          };
          # "apps" sort of behave like "packages", but can be invoked with
          # "nix run"
          apps.${name} = utils.lib.mkApp {
            inherit name;
            drv = packages.${name};
          };
          defaultApp = apps.${name};

          # This is what nix points at when we run "nix develop"
          devShell = pkgs.devshell.mkShell {
            imports = [ (pkgs.devshell.importTOML ./devshell.toml) ];
            env = [
              {
                name = "RUST_SRC_PATH";
                value = "${pkgs.rust.packages.stable.rustPlatform.rustLibSrc}";
              }
            ];
          };
        }
    );
}

We’re already dealing with a great deal! We’ll learn just enough to understand it without going overboard.

First of all, using flake.nix is a relatively new pattern in nix-land but well worth the venture into bleeding-edge territory (at least in my opinion). In the old days, your “repository” of known nix packages was defined somewhere on your system. With a “flake”, your inputs and outputs are clearly defined and self-contained. In concise terms, this is a nix “program” (technically, an “attribute set”) that has a description of what it does, inputs defining where its dependencies come from, and outputs that do stuff with those inputs. That’s it!

Note that the ?rev=<hash> form in the inputs stanza is actually pinning those dependencies to a hard revision, so the versions of these dependencies that you’ll get are exactly the same as mine. It ensures that this tutorial will work for a long time, but if you want to ensure that you’re using the latest and greatest, you’re free to delete everything after the ? and let nix fetch the latest revisions for those repositories (although there’s less of a guarantee that everything will work as expected).
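
As an aside, once nix has generated a flake.lock for you, re-resolving any inputs that aren’t hard-pinned to their latest revisions is a single command:

nix flake update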

The nix language itself is the next novelty. Although the analogy isn’t completely accurate, you can sort of think of the nix language as a beefed-up variant of json. Whereas you might write this in json:

{
    "key": "value",
    "array": [1, 2, 3]
}

A similar nix file may read:

{
  key = "value";
  array = [1 2 3];
}

Similar, but not identical. Additionally, nix files can define functions, and those look really different. Here’s a function that accepts an argument called pkgs and returns an attribute set pointing foobar at the attribute bash that pkgs has:

{ pkgs }: { foobar = pkgs.bash; }
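
If you’re curious how a function like that actually gets used, here’s a throwaway snippet (nothing this tutorial needs) that defines one and immediately calls it with an attribute set:

let
  greet = { name }: { message = "Hello, " + name; };
in
  greet { name = "Fly"; }

That expression evaluates to { message = "Hello, Fly"; }; the same shape - a function from an attribute set to another attribute set - is exactly what flake.nix is built from.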

Anyway. We won’t have to write novel nix code for this tutorial, so this diatribe is mostly meant to make nix itself less alien.

With your flake.nix file present, create the actual devshell.toml file that our flake.nix references. With this configuration file, we won’t actually need to muck around in flake.nix, but rather make changes and updates to the less complicated devshell.toml file.

[devshell]
packages = [
  "rustc", "cargo", "cargo-edit", "flyctl"
]

The next part is key: the nix flake system is only aware of files that git knows about. Before proceeding, add the files we’ve been working with to the git index. (nix just needs to know the files exist; we don’t need to re-stage every subsequent change to the files we’ll be updating and working with)

git add flake.nix devshell.toml

At this point, you can enter your new devshell using nix develop. Cool! Please note that if you do this now, nix will fetch and start assembling your devshell, which may take some time. After all, it’s constructing a fully-defined chain of dependencies from glibc on up to ensure that your programs behave as expected. On my system this retrieves about 250MiB of data and takes a few minutes. You can exit the red-prompt devshell the normal way you’d leave a shell (that is, exit or Ctrl-d).
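
The whole round trip looks something like this:

nix develop   # the first run fetches and builds the devshell; later runs are much faster
exit          # or Ctrl-d to drop back to your normal shell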

Remember when I mentioned direnv? It actually natively understands how to run within a nix-based environment. While you can use nix develop to hop in and out of your sandboxed nix environment, you can save some steps (and no longer need to remember invoking nix develop) when you enter this repository. All it takes is a very short .envrc file in your repository directory:

use flake
watch_file devshell.toml

Note: We add watch_file to prompt re-running nix develop since direnv doesn’t know that devshell.toml is fed into the flake. Otherwise we’d need to touch flake.nix to offer direnv a hint to reload the environment if we added packages to devshell.toml.

Because direnv is cautious, tell it that you trust the .envrc file you just made.

direnv allow

If you previously ran nix develop, you should quickly enter the devshell. If you’re starting with direnv, you’ll have to wait while nix builds the environment for you. Now whenever you enter this flynix directory, your shell will automatically enter your tightly controlled sandbox environment. As a bonus, direnv will also make you feel more at home by keeping your native shell prompt rather than overriding it with the big, bold, red devshell prompt.

One note: in order to “pin” this flake in perpetuity, you should also commit your flake.lock file. Do so now to save your progress in git and, while you’re at it, ignore the transitory .direnv directory if you’re using direnv.

echo .direnv > .gitignore
git add flake.lock .gitignore
git commit -m 'Initial commit'

Assembling Our Tools

Per the contents of our devshell.toml file, we have sandboxed versions of flyctl and Rust tools available. If you try which flyctl or which cargo, you’ll see nix paths where these executables are stored. Let’s get started!

I’m asking you to sally forth into unknown nix territory, so it’s only fair that I do the same. We’ll walk through deploying a Rust application (a language I have little experience with), which fits well since there isn’t explicit documentation about deploying Rust apps in the Fly docs (at least, as of the time of this writing).

Let’s get right into it. Within your flynix directory, initiate a new Rust project with your sandboxed tools.

cargo init

cargo will set up our Cargo.toml and a simple main.rs to get started. Next, let’s ask cargo to generate its lock file so that we’re ready to build against pinned dependencies.

cargo update

Don’t forget to stage these files; otherwise nix flake won’t know about them.

git add Cargo.toml Cargo.lock src .gitignore

Pardon The Interruption

I need to take a quick detour here to talk about build tooling.

In a typical situation, this is where you can start invoking build tools to compile, run, and use your program. That’s well and good, but we’re primarily focused on handing off these responsibilities to nix so that it can wrap its arms around dependencies, the build output, everything - so that we can use those outputs for other things as well.

That means that you’re more than welcome to cargo run at this point - and you’ll probably get the boilerplate output of Hello, world! - but we’ll use different commands when assembling the bits and pieces for our deployed artifact.

Why? Even though cargo will helpfully drop compiled executables into target/, there’s the possibility that you might have sneaky dependencies on shared libraries on your system that become the root of “works on my machine” problems if you ship this executable somewhere else. It’s why a lot of us use Dockerfiles to eventually wrap up executables - because the build environment is (sort of) totally defined and we can then take that big blob of container layers and run them somewhere.

However, per the start of this guide, we have some complaints about using Dockerfiles! That’s where asking nix to be our build system, rather than vanilla cargo or docker build, comes in useful.

Note: Interestingly, you may even see weird errors running cargo run if my environment differs from yours. In my case, my NixOS system is pretty threadbare (read: well-sandboxed), so cargo fails with errors about missing a linker. Technically you (or I) could solve this by adding "gcc" to the list of packages in devshell.toml if you really need to (a sketch follows), but we’re moving forward with the plan of letting nix ... commands do the dirty work.
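
If you did want plain cargo run to work in a bare environment like mine, the tweak would look something like this - gcc here is just to give cargo a linker:

[devshell]
packages = [
  "rustc", "cargo", "cargo-edit", "flyctl", "gcc"
]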

Regularly Scheduled Programming Building

With all that said, if we want to see what nix thinks about answering the question “please build this?”, we can give it a go with nix run.

Without going into overly-painful detail, nix will rely on its typical, hermetically-sealed build system for most dependencies, but the line

project = import
  (generatedCargoNix {
    inherit name;
    src = ./.;
  }) { inherit pkgs; };

…in our flake.nix is what slurps up our Cargo.toml and Cargo.lock. Once nix knows what dependencies we want, it can piece together the rust compiler and drop in the dependencies it needs to build our project. Aside from the typical packages that nix ships with, one of our overlays is what appends all of our Rust libraries that cargo may ask for into our nix environment. Suddenly, aside from just the typical packages like flyctl and cargo, nix knows about rust cargo packages as well.

If you’re curious, that’s what this line is doing:

pkgs = import nixpkgs {
  inherit system;
  overlays = [
    rust-overlay.overlay devshell.overlay
    (self: super: {
      rustc = self.rust-bin.stable.latest.default;
      cargo = self.rust-bin.stable.latest.default;
    })
  ];
};

Phew! Anyway, after a nix run, you should see this:

Hello, world!

Cool! Although it’s not much different than cargo run right now, remember that you can ship this repository over to someone else - on OS X, or Ubuntu, or Fedora - and they can get the same results, regardless of what software or libraries they have on their system. Nix’s sandbox has provided all of it.

Hello World Demos Are Boring, Please Move On To The Good Stuff

Next, we’ll make our program more interesting and get it up on Fly.

First, we’re going to install some dependencies. We’ll be writing a simple web application using the poem Rust library.

cargo add poem
cargo add tokio --features full
cargo update

Remember that we’re interacting with native Rust tooling right now, but our flake.nix will consume our cargo files, so our list of dependencies and build system are in sync.
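
If you peek at Cargo.toml afterwards, the dependency section should read something like this (the exact version strings will be whatever cargo add resolved when you ran it):

[dependencies]
poem = "1"
tokio = { version = "1", features = ["full"] }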

Next, replace the old main.rs with this new one. It’s a simple web service that’ll echo back part of the URI path.

use poem::{
    get, handler, listener::TcpListener, web::Path, EndpointExt, Route, Server,
};

#[handler]
fn hello(Path(name): Path<String>) -> String {
    format!("Hello, {}!", name)
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    std::env::set_var("RUST_LOG", "poem=debug");

    let app = Route::new().at("/hello/:name", get(hello));
    Server::new(TcpListener::bind("0.0.0.0:4000"))
        .name("hello-world")
        .run(app)
        .await
}

Try asking nix to run your program again:

nix run

And you won’t see much (after the build completes), but try sending port :4000 some requests.

$ curl http://localhost:4000/hello/kurt
Hello, kurt!

Cool! We now have a REST-ish Rust project that we can build with nix.

Deployment

Remember this line in flake.nix?

packages.container = pkgs.dockerTools.buildImage {
  inherit name;
  tag = packages.${name}.version;
  created = "now";
  contents = packages.${name};
  config.Cmd = [ "${packages.${name}}/bin/flynix" ];
};

One of the “targets” that we can build with our flake is what we’ve called container. The function pkgs.dockerTools.buildImage accepts a few attributes that tell nix how to build a container, and we feed it executables via the contents attribute. That means that when we build this container output, nix will assemble a container image with packages.${name} inside of it - our flynix app - and perform the Dockerfile equivalent of a CMD pointing at the flynix binary for us as well.

Try it now!

nix build '.#container'

Note: This is flake syntax - the # is asking nix to build a particular application or package from the flake’s list of outputs. So, you can think of this command as asking, “please build the container output from the flake.nix in this directory”. The argument is within single quotes because some shells (like my zsh) interpret an unquoted # as the start of a comment, which makes the argument disappear. (bash only treats # that way at the beginning of a word, so .#container survives unquoted there, but the quotes never hurt.)
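
By the way, if you ever want to see the full menu of outputs a flake defines, or build a specific one by name, nix can oblige:

nix flake show          # list every output this flake exposes
nix build '.#flynix'    # build just the flynix package output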

The build command will run for a little while, then exit. What happened? Try the following. result is the symlink that nix leaves behind, pointing at whatever you asked it to spit out. (for example, if you run a plain nix build, you’ll end up with result again, but this time with your cargo artifact in ./result/bin)

file result

It’s a link to a Docker image file! We can ask docker to load it:

docker load < result

And we’ll see this:

Loaded image: flynix:0.1.0

You can now run it and watch it work the same way:

docker run --rm -it -p 4000:4000 flynix:0.1.0

Send requests to port :4000 (like http://localhost:4000/hello/cthulhu) as before and watch it work. We have a working container that nix built for us!

Take It For a Fly

You might have read the builders section of the Fly docs and wondered: the instructions indicate the need for buildpack support or a Dockerfile in the working directory. Can we deploy this image that nix has built?

Fortunately, this works. All that’s necessary is for us to point image at a container image that exists locally, and flyctl is happy to pick it up and deploy it. This offers a clean hand-off point between our build process and the deploy process. Here’s what a potential fly.toml file might look like. Assuming that you’ve set up your Fly account and walked through this tutorial, you can drop this content into fly.toml in the working directory (give your app a unique name so there isn’t a name collision with someone else following this tutorial):

app = "<something unique>"

[build]
image = "flynix:0.1.0"

[[services]]
internal_port = 4000
protocol = "tcp"

[[services.ports]]
handlers = ["http", "proxy_proto"]
port = 80

…create the application (you can remove it once you’re done)…

flyctl create <something unique>

…and deploy.

flyctl deploy

Congratulations! You should be able to run something like the following in order to see the application serve requests.

curl http://<something unique>.fly.dev/hello/beefcake

Hello, beefcake!

If you were to take this application further, you’d write more code, rebuild the container image, reload it into docker, and deploy the new image tag. As you can tell, those are a few different steps, but they can be tied together with a small script or some simple make targets if you’d like to make your life easier (a sketch follows).
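
For example, a tiny (hypothetical) shell helper covering that loop might look like this - the name and layout are entirely up to you:

#!/usr/bin/env sh
# redeploy.sh: rebuild the image with nix, load it into docker, then ship it to Fly.
# If you bump the crate version, remember to update the image tag in fly.toml too.
set -e

nix build '.#container'
docker load < result
flyctl deploy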

And that’s it! Aside from the inherent reproducibility that I won’t shut up about, the other interesting trait that you might note is that the decompressed container image file is actually pretty small. Chalk it up to nix knowing the entire graph of dependencies leading up to your built artifact - because nix knows what to include (and what to leave out) in order to produce a functional container, it wraps up only what you need. This makes the image small, but when I say bare-bones, I mean bare-bones - not even coreutils (ls) or bash is present in the built container. You can easily rectify this by adding the packages you want to the contents of packages.container, such as:

contents = [ pkgs.bash pkgs.coreutils packages.${name} ];

Coda

If this build method is appealing to you, there are a few things that you should bear in mind:

  • If nothing else, you can use the combination of direnv / flake.nix / devshell.toml to easily manage sandboxed tools. Just add more packages (that you can find with commands like nix search nixpkgs cargo-edit) to your devshell.toml, and they’ll appear in your shell. Here, I’ll even provide a bare-bones devshell.toml-usable flake.nix.

    {
      description = "My project";
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
      inputs.flake-utils.url = "github:numtide/flake-utils";
      inputs.devshell.url = "github:numtide/devshell";
    
      outputs = { self, nixpkgs, flake-utils, devshell }:
        flake-utils.lib.eachDefaultSystem (system: let
          pkgs = import nixpkgs {
            inherit system;
            config = { allowUnfree = true; };
            overlays = [ devshell.overlay ];
          };
        in {
          devShell = pkgs.devshell.mkShell {
            imports = [ (pkgs.devshell.importTOML ./devshell.toml) ];
          };
        });
    }
    
  • If you’re working in a different language, there’s a wide array of projects analogous to crate2nix that know how to consume your language-native project/dependency files and feed them into a nix build - Go, for example.

  • The combination of direnv and a sandboxed nix environment is so generally useful that many editors actually have plugins or features to support sandboxed editing of files that have adjacent .envrc, flake.nix, or shell.nix files. This is particularly useful if, for example, you’re in an LSP-enabled editor and need the right language server in your $PATH - just add it to the list of packages in devshell.toml and it’ll show up.
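
    To make that concrete, here’s a sketch of a devshell.toml that also pulls in rust-analyzer (that’s its nixpkgs package name) so an LSP-enabled editor can find it:

    [devshell]
    packages = [
      "rustc", "cargo", "cargo-edit", "flyctl", "rust-analyzer"
    ]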

Conclusion

I hope this was helpful! Because this long, drawn-out tutorial is on the Fly forums, we’re in a particularly well-suited place for questions and answers, if you’d like to pose any directly. I’d love to answer them.

Finally, cheers to @kurt for asking for community content as it relates to deploying on Fly, and Graham Christensen for starting the conversation on Twitter.

I just wanted to let you know, I’ve now read half of this amazing post. :smiley:

Fly could let folks submit “built with Fly” posts with Thomas as the lead editor. Though, I’d imagine Thomas isn’t really that scalable to handle the barrage, but hey we have got to start somewhere

Thanks for this excellent guide! I have found one really cool addition to this deep in the archives of this forum: Deploy an OCI image? - #3 by nagisa - uploading the image with skopeo lets you push the container image to fly without importing it into a local docker (or podman) - without even running a container runtime at all. It’s a great time saver (takes around 20s to import my images on github workers), and feels a lot cleaner.
