feature request: set CAP_NET_BIND_SERVICE

Hello, I have an application with an internal backend that I would like to have listen on port 80. Right now, the only way to accomplish this is to run the container as the root user. I’d like to run as a user with read-only permissions while still binding to port 80.

Alternatively, are there any plans to support internal, non-public services with the service router?

Right now, the only way to accomplish this is to run the container as the root user.
I’d like to run as a user with read-only permissions while still binding to port 80.

Your container is actually running in a VM of its own, so you can apply whatever permissions setup you would otherwise use on a normal VM to make this happen; it should work fine.

Alternatively, are there any plans to support internal, non-public services with the service router?

Any service that listens inside the container VM without a corresponding [[services.ports]] section in your fly.toml is already an internal, non-public service, accessible inside your network at app.internal:<port>. You can run as many internal services as you like this way, and explicitly expose only the ones you want to the internet via the fly.toml configuration.
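A minimal fly.toml sketch of this pattern (the app name and port numbers are illustrative): the service on 8080 is exposed publicly, while a second listener, say a metrics server on 9091, stays reachable only at my-app.internal:9091 because it has no [[services]] entry at all:

```toml
app = "my-app"  # illustrative app name

# Publicly exposed service: port 8080 in the VM, 80/443 on the edge.
[[services]]
  internal_port = 8080
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

# Anything else listening in the VM (e.g. on 9091) needs no entry here;
# it is only reachable inside the private network at my-app.internal:9091.
```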

Right, I’d just rather have this be an option in the fly.toml config, vs manually having to set caps & drop privileges in the container entrypoint. Something analogous to Docker’s --cap-add NET_BIND_SERVICE would be nice; it seems like /fly/init could do the capset syscall when setting up the child process.

Cool, does this loadbalance across containers/vms and support autoscaling?

Cool, does this loadbalance across containers/vms and support autoscaling?

Not right now. The .internal addresses are resolved via DNS, so traffic to them doesn’t go through the proxy layer and isn’t measured for autoscaling or load balancing.

That’s a good idea. We’re planning on adding more of these kinds of features.

Curious if anyone has a sample of how to use the entrypoint workaround for this. (Looking to deploy a fly podman daemon but running into this.)
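One common shape for the workaround, sketched here with illustrative paths and user names (your binary location and base image will differ): instead of setting caps in the entrypoint at runtime, grant the file capability at build time with setcap and then switch to an unprivileged user, so the entrypoint itself needs no special handling. On Debian-based images the setcap utility comes from the libcap-bin package.

```dockerfile
FROM debian:bookworm-slim

# libcap-bin provides the setcap utility.
RUN apt-get update && apt-get install -y --no-install-recommends libcap-bin \
    && rm -rf /var/lib/apt/lists/*

# Illustrative binary path; substitute your own server.
COPY server /usr/local/bin/server

# Grant only the capability needed to bind ports below 1024 to this one binary.
RUN setcap 'cap_net_bind_service=+ep' /usr/local/bin/server

# Run as an unprivileged user; the file capability survives the USER switch.
RUN useradd --system --no-create-home app
USER app

ENTRYPOINT ["/usr/local/bin/server"]
```

If you genuinely need to do it at runtime (for example the port is only known at start), the entrypoint variant is the same setcap call run as root followed by dropping to the unprivileged user before exec’ing the server.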