Livekit deploy

Has anyone been able to deploy a LiveKit instance on Fly? Been hacking around with the configs but haven’t been able to get anything going (WebRTC noob). Appreciate it!

I wrote up a quick start guide for it: GitHub - bekriebel/livekit-flydotio: An example on how to deploy LiveKit on

This only handles a single instance. It’s possible to run a cluster, but it takes a few more steps and isn’t as well suited to Fly due to the need to have an addressable IP per node.
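If it helps anyone following along, the heart of the single-instance setup is just exposing LiveKit’s signaling and media ports in fly.toml. This is an illustrative fragment only (the port numbers follow LiveKit’s defaults; check the linked repo for the exact working config):

```toml
# Illustrative fly.toml fragment for a single LiveKit instance.
app = "livekit-example"   # hypothetical app name

[[services]]
  internal_port = 7880    # LiveKit HTTP/WebSocket signaling
  protocol = "tcp"
  [[services.ports]]
    port = 7880

[[services]]
  internal_port = 7882    # WebRTC media over UDP
  protocol = "udp"
  [[services.ports]]
    port = 7882
```

Note that for UDP services on Fly the process also has to bind to the special `fly-global-services` address, which is part of why the configs take some hacking.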


We’ll be adding the ability soon to allocate one IP per region via the CLI. This should allow running a cluster with one VM per region.


Excellent! Will that cover all regions? I experimented with that a bit using the GraphQL API, but a couple of the regions that I wanted weren’t available for regional IPs yet.

From previous information, it also sounded like the regional IPs would potentially fall back to other nodes based on load/availability. That isn’t the ideal behavior for something like LiveKit.

We should have them in all regions real-soon-now.

Regional IPs will fall back to other regions if the app has VMs in other regions. For what you’re doing, it’s best to create one distinct app per region so you get some better control.

That’s what I do currently. The downside is that in order to get a single anycast IP to handle the geo routing for me, I also have to run an additional app running a reverse proxy in each of those regions. I then pair each proxy 1:1 with the corresponding LiveKit instance in the matching region. I also have to make a modification to LiveKit to handle regional room assignments, but that’s not a Fly-specific issue.

Is there a better way to do this that you can think of? I’m assuming there’s no way to have an additional global IP that would route to the individual apps based on their region. It’s a unique use case and stretches things a bit to take advantage of the anycast IPs but still have individual “region” IPs.

I don’t think there’s a better way, but that’s pretty brutal. I’m hoping to get to a better model for this kind of app (there are a lot!). I don’t know when exactly we’ll be able to do this, but we’re hoping to give VMs individual IPv4 addresses … once we buy a nauseating amount of IPs.

If I understand you right, that might actually help? It seems like you have to run the proxy because you can’t individually address each VM; if you had an IP that was pinned to a specific VM, you wouldn’t need to run your proxy?

Correct, that would help. The other option I can think of would be to use TURN and have the global IP go to the closest node, and then have TURN proxy the traffic to the room’s assigned node over IPv6 (assuming an IPv4 client). However, LiveKit (and Ion/Pion) don’t have fully implemented IPv6 support yet, so that doesn’t work. I’m also not sure how much overhead and latency it would add to have TURN relaying this traffic over the IPv6 network.

My current layout looks something like this… It works, but it’s a bit hacky and has more points of failure than I like.


The quick version of how this works is that a client connects via the proxy app IP and gets routed to the closest proxy server. Each proxy server knows which region it is in and maps to a specific LiveKit server in an app dedicated to that region.

If it’s a new room, that livekit server spins up the room on its node and the signaling server hands the client back the connection info for that node (the IP address of that app).

If it’s an existing room, it sends back the connection info for the node the room is already running on (the IP address of that app).

I need the proxy so I can still get geo-based routing for the creation of new rooms, but I need individual IPs per node because that’s how LiveKit tells the client to connect to the node that the room is assigned to.

LiveKit may add region-aware room creation of its own at some point, but I still need a way to tell a client which specific node to connect to. And since this is UDP traffic, I can’t do something like custom HTTP headers to tell the Fly edge servers how to handle it.
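For anyone curious, the region-mapping half of the proxy is pretty simple: each proxy VM reads `FLY_REGION` and forwards signaling to its paired per-region LiveKit app. A rough Go sketch, with made-up app hostnames (this is not my actual config):

```go
package main

import (
	"fmt"
	"net/http/httputil"
	"net/url"
	"os"
)

// Map of Fly region codes to the per-region LiveKit app hostnames.
// These names are made up for illustration.
var regionToLiveKit = map[string]string{
	"iad": "livekit-iad.fly.dev",
	"fra": "livekit-fra.fly.dev",
	"syd": "livekit-syd.fly.dev",
}

// targetFor picks the LiveKit endpoint for the region this proxy VM
// runs in; Fly sets FLY_REGION in every VM's environment.
func targetFor(region string) (string, error) {
	host, ok := regionToLiveKit[region]
	if !ok {
		return "", fmt.Errorf("no LiveKit app for region %q", region)
	}
	return "https://" + host, nil
}

func main() {
	region := os.Getenv("FLY_REGION")
	if region == "" {
		region = "iad" // fallback for running outside Fly
	}
	target, err := targetFor(region)
	if err != nil {
		panic(err)
	}
	u, _ := url.Parse(target)
	proxy := httputil.NewSingleHostReverseProxy(u)
	_ = proxy // in the real app: http.ListenAndServe(":8080", proxy)
	fmt.Println("forwarding signaling to", target)
}
```

The media traffic still goes directly to the LiveKit app’s own IP; only the signaling/room-creation path runs through the proxy.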


Cool that you got this working :slight_smile:

Perhaps you could remove the proxy if the Fly router returned a Fly-Region header. Then, you could make the room creation request, and extract the region from the response headers. If you kept a map of regions to IPs in your client, would that work?

The proxy is still needed to get a global IP that I can then load balance to the individual regional apps. Regional IPs will partially fix this need, but I still worry about the edge case of a regional IP going to a different region based on load when connecting to an already existing room. Per-region apps are the best way to guard against this case at the moment, but then I don’t have one IP that points to them all without the proxy app.

I could also use external DNS that returns regional answers, but then that’s bypassing one of the key reasons for using Fly in the first place :slightly_smiling_face:

As for storing that info client side, I would rather avoid it. It would mean that only a custom client can use my cluster, and it would require either an update to each client for any changes or yet another service to pass that info back to the clients.