Multi-Container Machine Mounts

Is it possible to expose a volume mounted to a machine to a container when using multi-container machines? I took a look through the API spec and there doesn’t appear to be anything.

Yes. I’m not sure why it isn’t showing up in the docs, but you can use the mounts key in the container entry: fly-go/machine_types.go at main · superfly/fly-go · GitHub

Using the following config:

{
  "containers": [
    {
      "env": {
        "PORT": "80",
        "TINI_SUBREAPER": "1"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/health",
            "port": 80,
            "scheme": "http"
          },
          "kind": "readiness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798",
      "name": "actual",
      "mounts": [
        {
          "volume": "actual_data_test2",
          "name": "actual_data_test2",
          "path": "/data"
        }
      ]
    },
    {
      "depends_on": [
        {
          "condition": "healthy",
          "name": "actual"
        }
      ],
      "env": {
        "ACTUAL_SERVER_URL": "http://localhost:80"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/api-docs/",
            "port": 5007,
            "scheme": "http"
          },
          "kind": "liveness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/jhonderson/actual-http-api:25.9.0@sha256:af818827f202f67b1b5141ccad588ae3fcefc19a25f17401dd47e5e58f67cc4d",
      "name": "http-api",
      "secrets": [
        {
          "name": "server_password",
          "env_var": "ACTUAL_SERVER_PASSWORD"
        },
        {
          "name": "http_api_key",
          "env_var": "API_KEY"
        }
      ],
      "mounts": [
        {
          "volume": "http_api_data",
          "name": "http_api_data",
          "path": "/data"
        }
      ]
    }
  ],
  "guest": {
    "cpus": 1,
    "cpu_kind": "shared",
    "memory_mb": 256
  },
  "services": [
    {
      "autostart": true,
      "autostop": "stop",
      "checks": [
        {
          "interval": "10s",
          "grace_period": "10s",
          "method": "get",
          "path": "/health",
          "protocol": "http",
          "timeout": "2s",
          "type": "http"
        }
      ],
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 80,
      "ports": [
        {
          "port": 80,
          "handlers": ["http"],
          "force_https": true
        },
        {
          "port": 443,
          "handlers": ["tls", "http"],
          "tls_options": {
            "alpn": ["http/1.1"],
            "versions": ["TLSv1.2", "TLSv1.3"]
          }
        }
      ],
      "protocol": "tcp"
    },
    {
      "autostart": true,
      "autostop": "stop",
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 5007,
      "ports": [
        {
          "port": 5007,
          "handlers": ["tls", "http"],
          "tls_options": {
            "alpn": ["http/1.1"],
            "versions": ["TLSv1.2", "TLSv1.3"]
          }
        }
      ],
      "protocol": "tcp"
    }
  ],
  "volumes": [
    {
      "name": "actual_data_test2"
    },
    {
      "name": "http_api_data",
      "temp_dir": {
        "storage_type": "disk",
        "size_mb": 1024
      }
    }
  ]
}

I receive the following error:

❯ fly machine create --machine-config cli-config.json docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798 --region yyz
Searching for image 'docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798' remotely...
image found: img_8rlxp25jyeknp3jq
Image: docker-hub-mirror.fly.io/actualbudget/actual-server:25.9.0-alpine@sha256:63f8314f54f599e7798019629f3e09772e6fd4557427d398c630f894ae64ff11
Image size: 49 MB

Error: could not launch machine: failed to launch VM: container has mount actual_data_test2 that is not specified as a volume in machine configuration (Request ID: 01K54PJAQXMHTHKERY9FERRJTE-yyz)

However, if I add the temp_dir parameters to the actual_data_test2 volume instead, the data on that volume becomes tied to the lifecycle of the Fly Machine rather than to the underlying physical host.

@dwsr can you try specifying the volume id instead of the name?

Any updates? Did you manage to make it work? I am also interested in sharing machine directories or volumes between containers.

Specifying the volume ID instead of the name changes the error but still does not work. Given the following config:

{
  "containers": [
    {
      "env": {
        "PORT": "80",
        "TINI_SUBREAPER": "1"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/health",
            "port": 80,
            "scheme": "http"
          },
          "kind": "readiness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798",
      "name": "actual",
      "mounts": [
        {
          "volume": "vol_re81qgedyq0373dr",
          "name": "vol_re81qgedyq0373dr",
          "path": "/data"
        }
      ]
    },
    {
      "depends_on": [
        {
          "condition": "healthy",
          "name": "actual"
        }
      ],
      "env": {
        "ACTUAL_SERVER_URL": "http://localhost:80"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/api-docs/",
            "port": 5007,
            "scheme": "http"
          },
          "kind": "liveness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/jhonderson/actual-http-api:25.9.0@sha256:af818827f202f67b1b5141ccad588ae3fcefc19a25f17401dd47e5e58f67cc4d",
      "name": "http-api",
      "secrets": [
        {
          "name": "server_password",
          "env_var": "ACTUAL_SERVER_PASSWORD"
        },
        {
          "name": "http_api_key",
          "env_var": "API_KEY"
        }
      ],
      "mounts": [
        {
          "volume": "http_api_data",
          "name": "http_api_data",
          "path": "/data"
        }
      ]
    }
  ],
  "guest": {
    "cpus": 1,
    "cpu_kind": "shared",
    "memory_mb": 256
  },
  "services": [
    {
      "autostart": true,
      "autostop": "stop",
      "checks": [
        {
          "interval": "10s",
          "grace_period": "10s",
          "method": "get",
          "path": "/health",
          "protocol": "http",
          "timeout": "2s",
          "type": "http"
        }
      ],
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 80,
      "ports": [
        {
          "port": 80,
          "handlers": ["http"],
          "force_https": true
        },
        {
          "port": 443,
          "handlers": ["tls", "http"],
          "tls_options": {
            "alpn": ["http/1.1"],
            "versions": ["TLSv1.2", "TLSv1.3"]
          }
        }
      ],
      "protocol": "tcp"
    },
    {
      "autostart": true,
      "autostop": "stop",
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 5007,
      "ports": [
        {
          "port": 5007,
          "handlers": ["tls", "http"],
          "tls_options": {
            "alpn": ["http/1.1"],
            "versions": ["TLSv1.2", "TLSv1.3"]
          }
        }
      ],
      "protocol": "tcp"
    }
  ],
  "volumes": [
    {
      "name": "vol_re81qgedyq0373dr"
    },
    {
      "name": "http_api_data",
      "temp_dir": {
        "storage_type": "disk",
        "size_mb": 1024
      }
    }
  ]
}

I get this error:

❯ deploy
Secrets have been staged, but not set on VMs. Deploy or update machines in this app for the secrets to take effect.
Secrets have been staged, but not set on VMs. Deploy or update machines in this app for the secrets to take effect.
==> Verifying app config
Validating fly.yaml
✓ Configuration is valid
--> Verified app config
==> Building image
Searching for image 'docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798' remotely...
image found: img_8rlxp25jyeknp3jq

Watch your deployment at https://fly.io/apps/mc-actual-test/monitoring

This deployment will:
 * create 1 "app" machine

No machines in group app, launching a new machine

-------
 ✖ Failed: error creating a new machine: failed to launch VM: container has mount vol_re81qgedyq0373dr that is not specified as a volume in machine configuration

I think the OP is just looking to mount volumes in a multi-container setup, but it looks like what you want is different: sharing volumes between containers is not possible, at least not natively. This was covered elsewhere on this forum recently: volumes are mapped to a slice of an NVMe drive on the host and are not exposed via NFS.

However, you can set up something like Tigris and then map it to a FUSE-style mount. Or I am sure you could create a container that exposes an NFS listener, which effectively becomes a network share backed by one volume (though you’d need to account for host failure in your application design).

Correct, I’m looking to mount at least 1 Fly volume to a Multi-Container machine and so far no luck.

It would be great if Fly.io offered a managed solution for this, even if they had to use buckets and POSIX or something.

Yeah, thanks for pointing that out. I think my question was misleading and not clear enough. What I mainly want to do is share a directory on the host machine and use it in two (or more) containers at the same time. Since the containers are running on the same (virtual) machine, that should be possible, right? Does anyone happen to know how to configure that, or whether it’s possible at all?

If that’s not possible, the next best thing would be a shared volume, which is not yet supported, and the next best thing after that would indeed be sharing a Tigris bucket, as you already pointed out.

I mean, if it’s not possible to share a host machine’s directory (no matter whether it’s a mounted volume or a local machine directory), then I question how valid the different use cases pointed out here (e.g. Storage and Sync Layers) are. Isn’t the only remaining way to provide decoupled functionality in a “Sidecar” then to expose an http/tcp service and access it on localhost?

Through more thorough investigation, I believe Fly volumes can currently only be mounted into a single container on a machine. Even though the mounts key exists in the schema for each container definition, the underlying platform doesn’t support attaching the same volume across multiple containers inside a multi-container machine. The only workarounds I can think of are to:

  1. Run the processes inside the same container so they share the mount.
  2. Run a sidecar container that has the mount and expose its data over localhost to the other containers.
  3. Split them across multiple machines, each with its own attached volume.
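To illustrate option 2, here is a hypothetical sketch (the container names, images, port, and volume ID are all placeholders, and the container-level mounts placement follows the schema used elsewhere in this thread, so the same caveats about volume mounting apply):

```json
{
  "containers": [
    {
      "name": "app",
      "image": "myorg/my-app:latest",
      "env": {
        "DATA_URL": "http://localhost:80"
      }
    },
    {
      "name": "data-sidecar",
      "image": "nginx:alpine",
      "mounts": [
        {
          "name": "appdata",
          "volume": "vol_xxxxxxxxxxxx",
          "path": "/usr/share/nginx/html"
        }
      ]
    }
  ]
}
```

Because containers in one Machine share a network namespace, the app container could read the sidecar’s data over localhost without any service definition.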

Hi kaelynH,

Thanks for letting us know it’s possible, but can you confirm that it’s possible to do this in a multi-container setup and what an example configuration file might look like? As posted above, I am only attempting to mount a Fly volume into one of the two containers and yet I’m unable to do so using either the volume name or the volume ID.

Yep, that was my understanding of your requirement.

Do we know that containers spun up in “Fly Compose” will all land on the same host? I don’t think that limitation would make much sense in a Fly context; it would be better for containers to be pinned at the region level (as machines are) and then bin-packed to the most appropriate host that has space.

Earlier in this thread I had assumed it was not possible, as machines definitely have that limitation. I think @kaelynH has confirmed above that containers have this limitation too.

From their documentation (Multi-container Machines · Fly Docs):

Fly Machines support running multiple containers per virtual machine using the containers array.

and

Security Considerations

Containers in a Machine share the same kernel and VM, but are isolated at the process and filesystem level. Failures in one container won’t directly crash others, but they don’t provide the same level of isolation as across VMs.

Also, “one container per machine” is basically what Fly apps already are, so IMO it wouldn’t make sense to have another, less capable and more complicated way of deploying to Fly.

But the more I read and experiment, the more I believe the current features of the “Sidecars” capability are quite limited in usefulness. I also feel that the existing documentation does not accurately reflect the reality of this feature. For instance, how is the example of using LiteFS in a sidecar helpful if I can’t share that path at the filesystem level on the host machine with my actual app?

Oh, super; I didn’t spot that documentation had landed for this feature. Moreover, I apologise; I think you’re right. It sounds like machines (Firecracker VMs) are the “holder” for a Docker Compose-style set of containers, and if so, they’d be guaranteed to run on the same host.

(If my reading of that wording is correct, then your machine memory defines the total amount of memory available to all containers within this config, and of course there would need to be some overhead for the VM kernel to run.)

Given the advice from @kaelynH, I’d reiterate my earlier suggestion: have one container, attach a volume to it, install an NFS server in that container, and then mount that NFS export in the other containers using FUSE. To be fair, you could do this with traditional machines too, though I’d expect it to be faster between same-host containers.
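To make that NFS idea concrete, here is an untested illustration: the /data path, export options, and client mount command are assumptions, and the consuming containers would need an NFS client and sufficient privileges, which is where a FUSE-based client might come in.

```text
# /etc/exports inside the container that owns the volume
/data 127.0.0.1(rw,sync,no_subtree_check,insecure)

# inside each consuming container (requires an NFS client such as nfs-utils):
#   mount -t nfs -o nolock 127.0.0.1:/data /mnt/data
```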

@kaelynH bump on this

Thanks for letting us know it’s possible, but can you confirm that it’s possible to do this in a multi-container setup and what an example configuration file might look like? As posted above, I am only attempting to mount a Fly volume into one of the two containers and yet I’m unable to do so using either the volume name or the volume ID.

I am able to mount a volume to a machine with containers using this config:

{
	"region": "yyz",
	"config": {
		"image": "alpine:latest",
		"guest": {
			"cpu_kind": "shared",
			"cpus": 1,
			"memory_mb": 256
		},
		"containers": [
			{
				"name": "first",
				"image": "alpine:latest",
				"cmd": ["/bin/sleep", "inf"],
				"mounts": [
					{
						"name": "shared",
						"path": "/cache"
					}
				]
			},
			{
				"name": "second",
				"image": "alpine:latest",
				"cmd": ["/bin/sleep", "inf"],
				"mounts": [
					{
						"name": "shared",
						"path": "/cache"
					}
				]
			}
		],
		"volumes": [
			{
				"name": "shared",
				"temp_dir": {
					"size_mb": 1024
				}
			}
		],
		"mounts": [
			{
				"name": "myvolume",
				"volume": "vol_vgjlo9y0py5d251v",
				"path": "/persistent"
			}
		]
	}
}

However, this mounts the volume into all containers, which is probably not intended. I’ll investigate further (it might be a bug or a missing feature).

Thanks, that works for me too, although I am not sure why :smiley:. It is required to specify “volumes”, but it’s actually not required to specify an actual persistent volume. I could use this machine_config, for instance:


{
  "containers": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "files": [
        {
          "guest_path": "/etc/nginx/conf.d/default.conf",
          "local_path": "nginx.conf"
        }
      ],
      "depends_on": [
        {
          "name": "myapp",
          "condition": "healthy"
        }
      ],
      "mounts": [
        {
          "name": "foo",
          "path": "/foo"
        }
      ]
    },
    {
      "name": "myapp",
      "image": "ealen/echo-server",
      "healthchecks": [
        {
          "exec": {
            "command": [
              "true"
            ]
          }
        }
      ],
      "mounts": [
        {
          "name": "foo",
          "path": "/foo"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "foo"
    }
  ]
}

Running mount and df in the myapp container shows that it’s mounting the “Root FS” volume.

Can anyone direct me to the (Machines or any other) API documentation that explains this behavior? I want to avoid relying on anything that might stop working tomorrow because it was never intended to function that way.

Still having challenges with this. Given this config:

{
  "containers": [
    {
      "env": {
        "PORT": "80",
        "TINI_SUBREAPER": "1"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/health",
            "port": 80,
            "scheme": "http"
          },
          "kind": "readiness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/actualbudget/actual-server:25.9.0-alpine@sha256:7b0ee6d9ae34d4f7dab95cec5bdd968cfe8e919f31d7bb9db98e37023ed4f798",
      "name": "actual"
    },
    {
      "depends_on": [
        {
          "condition": "healthy",
          "name": "actual"
        }
      ],
      "env": {
        "ACTUAL_SERVER_URL": "http://localhost:80"
      },
      "healthchecks": [
        {
          "http": {
            "method": "get",
            "path": "/api-docs/",
            "port": 5007,
            "scheme": "http"
          },
          "kind": "liveness",
          "name": "http",
          "success_threshold": 2,
          "timeout": 2
        }
      ],
      "image": "docker.io/jhonderson/actual-http-api:25.9.0@sha256:af818827f202f67b1b5141ccad588ae3fcefc19a25f17401dd47e5e58f67cc4d",
      "name": "http-api",
      "secrets": [
        {
          "name": "server_password",
          "env_var": "ACTUAL_SERVER_PASSWORD"
        },
        {
          "name": "http_api_key",
          "env_var": "API_KEY"
        }
      ]
    }
  ],
  "guest": {
    "cpus": 1,
    "cpu_kind": "shared",
    "memory_mb": 256
  },
  "services": [
    {
      "autostart": true,
      "autostop": "stop",
      "checks": [
        {
          "interval": "10s",
          "grace_period": "10s",
          "method": "get",
          "path": "/health",
          "protocol": "http",
          "timeout": "2s",
          "type": "http"
        }
      ],
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 80,
      "ports": [
        {
          "port": 80,
          "handlers": [
            "http"
          ],
          "force_https": true
        },
        {
          "port": 443,
          "handlers": [
            "tls",
            "http"
          ],
          "tls_options": {
            "alpn": [
              "http/1.1"
            ],
            "versions": [
              "TLSv1.2",
              "TLSv1.3"
            ]
          }
        }
      ],
      "protocol": "tcp"
    },
    {
      "autostart": true,
      "autostop": "stop",
      "concurrency": {
        "type": "connections",
        "hard_limit": 100,
        "soft_limit": 75
      },
      "internal_port": 5007,
      "ports": [
        {
          "port": 5007,
          "handlers": [
            "tls",
            "http"
          ],
          "tls_options": {
            "alpn": [
              "http/1.1"
            ],
            "versions": [
              "TLSv1.2",
              "TLSv1.3"
            ]
          }
        }
      ],
      "protocol": "tcp"
    }
  ],
  "mounts": [
    {
      "name": "actual_data",
      "volume": "vol_re81qgedyq0",
      "path": "/data"
    }
  ]
}

the machine is successfully created, but the volume is not mounted as I expect:

Connecting to fdaa:a:7d38:a7b:88dc:8096:cfa7:2... complete
/ # mount
none on / type overlay (rw,relatime,lowerdir=/bundles/actual/.pilot-lower:/lower/dev/vdc,upperdir=/upper/dev/vdb/upper-actual,workdir=/upper/dev/vdb/workdir-actual,uuid=on)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nosuid,size=65536k,mode=755)
/dev/root on /.pilot/swapon type squashfs (ro,relatime,errors=continue)
/dev/root on /.pilot/tini type squashfs (ro,relatime,errors=continue)
none on /.fly type tmpfs (rw,relatime)
/dev/vdb on /.fly-upper-layer type ext4 (rw,noatime,stripe=1024)
/dev/root on /bin/swapon type squashfs (ro,relatime,errors=continue)
/ # mount | grep data
/ # df
Filesystem           1K-blocks      Used Available Use% Mounted on
none                   8154588       132   7718644   0% /
tmpfs                    65536         0     65536   0% /dev
shm                      65536         0     65536   0% /dev/shm
tmpfs                    65536         0     65536   0% /run
/dev/root               147712    147712         0 100% /.pilot/swapon
/dev/root               147712    147712         0 100% /.pilot/tini
none                    106164         0    106164   0% /.fly
/dev/vdb               8154588       132   7718644   0% /.fly-upper-layer
/dev/root               147712    147712         0 100% /bin/swapo

I tried to create a minimal PoC showing several things:

  • Rate limiting with an nginx sidecar
  • Cloudflare origin check
  • Sharing a directory on the host machine between two containers

Find the PoC here: GitHub - forge-42/fly-sidecar-poc: Prototype showcasing Multi-Container "Sidecar" deploys for fly.io with Cloudflare origin check and Rate limiting

Looking at your configuration, @dwsr, it seems that your mounts are not in the correct location and the top-level volumes key is missing. In my proof of concept I didn’t use a persistent Fly volume, but the syntax should be similar to what you want to achieve.
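For what it’s worth, combining @dwsr’s goal with kaelynH’s working example earlier in the thread, a sketch might look like the following. The volume ID is a placeholder, the images are abbreviated, and per kaelynH’s caveat the machine-level mount currently appears in every container:

```json
{
  "containers": [
    {
      "name": "actual",
      "image": "docker.io/actualbudget/actual-server:25.9.0-alpine",
      "mounts": [
        {
          "name": "shared",
          "path": "/shared"
        }
      ]
    },
    {
      "name": "http-api",
      "image": "docker.io/jhonderson/actual-http-api:25.9.0",
      "mounts": [
        {
          "name": "shared",
          "path": "/shared"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "shared",
      "temp_dir": {
        "size_mb": 1024
      }
    }
  ],
  "mounts": [
    {
      "name": "actual_data",
      "volume": "vol_xxxxxxxxxxxx",
      "path": "/data"
    }
  ]
}
```

The container-level mounts reference the temp_dir volume by name, while the persistent Fly volume is attached at the machine level, as in the config that was reported to work.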