Using Containers with Flyctl

Previously: Docker without Docker, now with containers. At the time, container support was limited to the Machines API and you needed to request access. We quietly lifted that requirement, and as of flyctl Release v0.3.113 (superfly/flyctl on GitHub), we now have fly machine run and fly deploy support.

There is a lot to explain, so we created a rate limiter demo that walks you through what you need to know. If you have any questions, feel free to ask them here.

Below is the current contents of the README:


Container Demo

The following will demonstrate running two containers on a single machine:
a simple echo app and nginx configured as a rate limiter. Nginx requires a configuration file and depends on the echo app being healthy. The echo app has a health check defined: running wget to verify that the server is up.

Each container has, at a minimum, a name. The name lets you select a container in commands like fly ssh console, and is used to express dependencies. In this demo the names used are nginx and echo.

Every container but one requires an image. Images can come from registries like Docker Hub, quay.io, or gcr.io. If you are using fly machine run or fly deploy, you can designate one container to use the image that you build for your application.

The image is a starting point. You can override any or all of the following: cmd, env, exec, files, secrets, or user. This demo adds an nginx configuration file to the nginx base image from Docker Hub.

The demos below show you how to do all of this with the Machines API, fly machine run, and fly launch, and demonstrate running both an existing echo server and one that you provide.

Step 0 - Setup

These instructions should work on Linux, macOS, and Windows WSL2.

  • Verify that you have curl and flyctl installed, and can log into your fly.io account.

  • Create an app, a shared IPv4 address, a dedicated IPv6 address, a token, and set the fly API hostname.

    export APPNAME=container-demo-$(uuidgen | cut -d '-' -f 5 | tr A-Z a-z)
    fly apps create --name $APPNAME
    fly ips allocate-v4 --shared --app $APPNAME
    fly ips allocate-v6 --app $APPNAME
    export FLY_API_TOKEN=$(fly tokens create deploy --expiry 24h --app $APPNAME)
    export FLY_API_HOSTNAME=https://api.machines.dev
    
  • Destroy all machines in the app. There won’t be any at this point, but you will want to run this between every step:

    fly machines list --app $APPNAME -q | xargs -n 1 fly machine destroy -f --app $APPNAME
    
  • Optional, but recommended: try running the ealen/echo-server on your own machine using Docker (if you don’t already have Docker installed, you can find the instructions here):

    docker run -p 8080:80 ealen/echo-server
    

    Visit http://localhost:8080/ in your browser. You will see some JSON.
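A quick aside on the cleanup pipeline in the setup above: it runs one fly machine destroy per machine id. The xargs -n 1 behavior it relies on can be seen with a harmless stand-in:

```shell
# Each input line becomes a separate command invocation, mirroring
# how each machine id gets its own `fly machine destroy`.
printf 'm1\nm2\nm3\n' | xargs -n 1 echo destroying
```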

Demo 1 - Machine API

In this demo we are going to run the pre-canned echo-server from the previous step on a Fly.io Machine. Without modifying that server, we are also going to run nginx configured as a rate limiter. We are going to configure our guest Machine and set up our HTTP services.

The JSON we will be sending is contained in api-config.json. It contains the definition of a machine.
We will be focusing mostly on the definition of a container.

We see two containers defined: nginx and echo. nginx depends on echo, and echo has a health check defined that determines whether or not the container is ready to accept requests. This is important: it prevents requests from being routed to the new Machine, once it is started (or restarted), before it is ready to accept them.
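As a rough sketch only (the field names here are illustrative assumptions; api-config.json in the demo repository is the authoritative source), the containers portion of that machine definition has approximately this shape:

```json
{
  "containers": [
    {
      "name": "echo",
      "image": "ealen/echo-server",
      "healthchecks": ["…the wget-based check…"]
    },
    {
      "name": "nginx",
      "image": "nginx",
      "depends_on": ["…echo, once healthy…"],
      "files": ["…nginx.conf, supplied as a base64 raw_value…"]
    }
  ]
}
```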

Note the raw_value contained in that file. It is the base64-encoded contents of nginx.conf. You can produce this value yourself by running:

base64 -i nginx.conf
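To see the round trip for yourself, with a stand-in file rather than the demo’s nginx.conf, encoding and then decoding recovers the original contents (this sketch assumes a GNU-style base64, where -d decodes; older macOS spells it -D):

```shell
# Create a stand-in config file (the real demo encodes nginx.conf from the repo).
printf 'limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;\n' > /tmp/demo.conf

# Encode it; a string like this is what goes into raw_value.
RAW_VALUE=$(base64 < /tmp/demo.conf)

# Decoding yields the original file contents.
echo "$RAW_VALUE" | base64 -d
```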

We can use curl to send the request:

curl -i -X POST \
  -H "Authorization: Bearer ${FLY_API_TOKEN}" -H "Content-Type: application/json" \
  "${FLY_API_HOSTNAME}/v1/apps/${APPNAME}/machines" \
  -d "$(cat api-config.json)"

… and you are done! That was quick.

Visit your application by running the following command:

fly apps open --app $APPNAME

Again, you will see JSON. Refresh rapidly and you will quickly see “503 Service Temporarily Unavailable”. Congratulations, you have successfully run both the echo app and nginx configured as a rate limiter on a Fly.io Machine.

You can ssh into either container using fly ssh console:

fly ssh console --container nginx
fly ssh console --container echo

This demo used curl. Any application written in any language that can send HTTP POST requests can be used instead.

When done, delete your machine using the command in the setup section.

Demo 2(A) - fly machine run with precanned app

This time the JSON configuration is a bit simpler: cli-config.json. That’s because we can load the contents of the nginx.conf file directly, and we configure our guest Machine and services from the command line:

flyctl machine run --machine-config cli-config.json \
  --app $APPNAME --autostart=true --autostop=stop \
  --port 80:8080/tcp:http --port 443:8080/tcp:http:tls \
  --vm-cpu-kind shared --vm-cpus 1 --vm-memory 256

Once again, visit your application by running the following command:

fly apps open --app $APPNAME

You may find the fly machine run command to be useful for casual experimentation and/or scripting.

When done, delete your machine using the command in the setup section.

Demo 2(B) - fly machine run with custom app

You typically won’t be running apps that are prepackaged as Docker images and published to Docker Hub. To demonstrate running your own app, server.js contains a small JavaScript application that performs a similar function. We also have a Dockerfile that runs this application.

We can use the exact same configuration from the previous step and replace the image in the echo container with the one produced by building this app by passing two additional parameters to the fly machine run command:

flyctl machine run --machine-config cli-config.json \
  --dockerfile Dockerfile --container echo \
  --autostart=true --autostop=stop \
  --port 80:8080/tcp:http --port 443:8080/tcp:http:tls \
  --vm-cpu-kind shared --vm-cpus 1 --vm-memory 256

The additional parameters are --dockerfile and --container. The Dockerfile is used to build an image, which is pushed to a repository, and this image replaces the image defined in the echo container. The default is to look for a container named “app” first and, if none is found, to use the first one.

Note that while we have been destroying machines and running new ones, we could instead opt to update an existing one:

fly machine list -q | xargs fly machine update --yes --dockerfile Dockerfile --container echo

When done, you can delete everything running the following command:

fly apps destroy $APPNAME

Demo 3 - fly launch

In this demo we are going to launch our bun server as a new application, then add the rate limiter. To make it easier to trigger the rate limiter later, we are going to opt out of running in a high availability configuration:

fly launch --ha=false

At this point, we have a fly.toml that configures an http_service and our VM. We can visit the app, but there is no rate limiter yet.

We can add our desired machine configuration to the fly.toml by adding the following two lines above the [build] section:

machine_config = 'cli-config.json'
container = 'echo'

Once again, you will not normally need to specify the container: fly deploy will first look for a container named app, and if none is found it will select the first container. In this case we want the image we build to replace the image in the second container, named echo.

We make one further change: we set the internal port to 8080 so that traffic will be routed to the HTTP server:

  internal_port = 8080
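Putting the pieces together, the relevant parts of the resulting fly.toml look roughly like this (the app name and the generated sections are placeholders, not the demo’s actual values):

```toml
app = 'container-demo'              # placeholder; yours will differ

machine_config = 'cli-config.json'
container = 'echo'

[build]
  # as generated by fly launch

[http_service]
  internal_port = 8080
  # remaining settings as generated by fly launch
```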

Once this change is made, we run fly deploy. If we visit the app now we can quickly trigger the “503 Service Temporarily Unavailable” message.

fly deploy may be more convenient than fly machine run when you are starting out, and can update multiple identically configured machines with one command.

In the above we are using cli-config.json as a separate file. You can also embed it directly into your fly.toml using triple quotes. Just make sure that the first character in the string is {:

machine_config = '''{
  "containers": [
    …
  ]
}'''

While these demos progressed from using the Machines API directly to fly launch, a more common progression is the other direction: you start out simple, and as your needs change and you want to take greater advantage of what Fly.io has to offer, you move to the interface best suited to your needs.


This is excellent news!

I think this can be reduced to -d @api-config.json, as a small suggestion. (The at-sign is magic syntax in this context.)


Aside: Perhaps the community forum would benefit from a #containers or a #pilot tag, so people can find all these, :thought_balloon:


Readme updated: use curl magic syntax, as suggested by mayailurus (fly-apps/rate-limiter-demo@dc91802 on GitHub).


Using machines as pods for containers is a killer-feature. Thank you!

Are machines running containers supposed to work with the suspend feature? Currently, when I suspend a machine which has 3 containers running, it hangs in the starting state after waking up.

Will [env] variables defined in the fly.toml be available to the internal containers, or should I redefine them in the containers array?

I have a scenario where some env vars could be shared between different containers.

Supposed to? Definitely. Do they? No.

I’ve reproduced and reported this problem. Thanks!


Good question.

My mental model at the moment is that fly.toml describes one container in the machine (i.e., the one you specify, or the one named app if it is found, or the first one if not), and that information from arguments to fly machine run and the contents of fly.toml make their way into that one specific container.

But that’s not implemented yet; and I could be convinced that there is a better approach.

Testing with a couple of containers, it would be nice if logs were prefixed with the container name.

For example:

app: starting service
nginx: listening on port :8080
...

Also, it would be nice to have messages from pilot with a pilot: prefix too, like we have for hallpass: and api_proxy:.

I’m commenting here since I didn’t find the pilot repository.


It could be something similar to secrets.

If I understood it right, secrets are defined for a Fly App, and currently in the containers: [] you have to specify which secrets will be used and the env name each will use inside the container.

For me, we could have a simpler API where you define which ENV vars from the Fly App this container will use, keeping the same names. Maybe an optional name could be defined so we can change it.

Containers should not inherit any environment variables from the App or from other containers implicitly; only explicit ones, and always from the App.

If I want an env variable only for a container, I will define it there.

I can already do this using secrets, but I’ll be abusing secrets for data that doesn’t need to be a secret and the syntax is too verbose.

Can you sketch out what you would like to see?

For context, flyctl is built on the machines API, so what it ultimately needs to send is a MachineConfig. How it constructs this structure is completely open. Changing what is in fly.toml is no problem. If possible, I’d like the definition of a container to be self-contained and as similar as possible to what is sent. So I would prefer something in fly.toml that says “this environment variable goes with that container” over something in the machine config that says “this environment variable comes from that value in fly.toml”.

Adding this config into fly.toml, I think we could follow the existing [processes] idea, where we define the container names and their config.

An example for dealing with reusable env

...
[containers]

env =  {
 GLOBAL_ENV = 'value',
 OVERRIDABLE_ENV = 'global_value'
}

[containers.nginx]
image = 'nginx:latest'

env = {
  NGINX_ENV = 'nginx'
}

[containers.echo]
image = 'traefik/whoami:latest'

env = {
  OVERRIDABLE_ENV = 'echo_value'
}

So:

  • nginx will have: GLOBAL_ENV=value, OVERRIDABLE_ENV=global_value, NGINX_ENV=nginx
  • echo will have: GLOBAL_ENV=value, OVERRIDABLE_ENV=echo_value

Not sure if this is supported by TOML, or if this could be used for every containers.* field or just for env.

The closest we can get to that structure with TOML would be something like this:

[containers.env]
GLOBAL_ENV = "value"
OVERRIDABLE_ENV = "global_value"

[containers.nginx]
image = "nginx:latest"

[containers.nginx.env]
NGINX_ENV = "nginx"

[containers.echo]
image = "traefik/whoami:latest"

[containers.echo.env]
OVERRIDABLE_ENV = "echo_value"

So the first problem is that env looks like a container name, so it would probably need to be something like [containers.__global__.env]. Or perhaps we just use the existing [env] section. And for those who don’t care for TOML, we support JSON and YAML too.

While none of this is hard, once implemented it will be something we will need to support indefinitely. For this reason, I’d like to understand why, for example, you would want to be able to override your nginx image from within your TOML. Why wouldn’t you simply update your machine_config?

The idea was to use fly.toml only, without the need of a machine_config at all.

I don’t think it would be good to reference something from inside a JSON machine_config to a fly.toml thing, it starts to create unnecessary complexity, IMO.

I prefer everything inside fly.toml.


I’d also like to be able to configure containers with fly.toml.
To my understanding, currently I need to update the machine config using the Machines API if I want to run two containers on one Fly Machine. But if I do so and then fly deploy my app running only this single machine, the config using containers is gone, right? Because the machine is reset to whatever is in fly.toml. Is that right?
I’d like to run my app in a container built from the local Dockerfile and have a sidecar from an image on Docker Hub. And it would be awesome if I could continue to update my app with fly deploy to achieve that.

You can use fly deploy with containers if you put the machine config directly into your fly.toml.


I can’t use /.fly/oidc_token from a container.

[info]Error: operation error KMS: GetPublicKey, get identity: get credentials: failed to refresh cached credentials, failed to retrieve jwt from provide source, unable to read file at /.fly/oidc_token: open /.fly/oidc_token: no such file or directory

On fly.toml, I defined:

[env]
  AWS_ROLE_ARN = 'arn:aws:iam::__REDACTED__:role/my-role'

When connected to the main container via fly ssh console, I can see other AWS_ variables that were defined by the Fly stack (I assume).

# env | grep AWS_
AWS_ROLE_ARN=arn:aws:iam::__REDACTED__:role/my-role
AWS_ROLE_SESSION_NAME=fly-containers-test@6e822357a75d08
AWS_WEB_IDENTITY_TOKEN_FILE=/.fly/oidc_token
# ls -lhga /.fly/oidc_token
ls: cannot access '/.fly/oidc_token': No such file or directory

Should I do anything special here?

Tell me if I should open a new thread or keep using this one to report my findings.

So, I used to run multiple processes using s6-rc, faking it as PID 1.

And to collect processes metrics, I just needed to run telegraf as a service alongside the other services I needed to run in the same Fly Machine.

Now that I’m using containers for multiple processes, I need to collect metrics, but this metrics collection relies on /proc, which is namespaced by cgroups and the OCI runtime.

How can I access the Fly Machine’s /proc instead of the container’s /proc?

When using this in a docker compose scenario, I can do this:

  volumes:
   - /sys:/rootfs/sys:ro
   - /proc:/rootfs/proc:ro

Is there anything similar for Fly Machine containers?

expanding on the metrics topic:

I think you could run cAdvisor to expose metrics by default when using containers within a Fly Machine.

I need to ask a follow-up question here. If I want my machine(s) to have two containers, one from an image registry and the other built from the local Dockerfile, how do I specify the image for the container that is built from the Dockerfile?

If you specify container = in your fly.toml, the image in that container will be replaced with the one you build.

If you don’t specify container = in your fly.toml, and you have a container named app, the image in that container will be replaced.

If you don’t specify either, the image in the first container will be replaced.
