I’m trying to deploy a LiteLLM container on Fly.io, but I’m running into a strange issue: when I configure LiteLLM to listen on both IPv4 and IPv6, the app becomes unusable. None of the configured models load, and the /models endpoint returns an empty list.
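For reference, here’s roughly what my setup looks like, reduced to a sketch. The app name and image tag are placeholders, and I’m not certain `::` is the right way to ask litellm for dual-stack binding:

```toml
# fly.toml (sketch) -- "my-litellm" is a placeholder app name
app = "my-litellm"

[build]
  image = "ghcr.io/berriai/litellm:main-latest"

[processes]
  # Custom command so the proxy binds dual-stack; on Linux, binding to
  # "::" usually accepts IPv4 and IPv6. --host and --port are litellm
  # CLI flags; whether "::" is accepted here is my assumption.
  app = "litellm --host :: --port 4000"

[http_service]
  internal_port = 4000
  force_https = true
```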
Has anyone here managed to successfully run LiteLLM on Fly.io? If not, I might stop trying and look for alternatives. Any tips or confirmation would be greatly appreciated.
I ran into an issue with LiteLLM: by default it auto-imports models from config.yaml, but if you override the container’s startup command (for example, to make it listen on both IPv4 and IPv6), that auto-import no longer runs. My guess is that the custom command simply drops the --config argument the default command would have passed, so the models in config.yaml are never loaded.
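If that guess is right, re-stating --config in the custom command should bring the import back, though I haven’t verified this on Fly.io. A sketch, assuming the base image’s entrypoint is the litellm CLI itself (worth checking):

```dockerfile
FROM ghcr.io/berriai/litellm:main-latest

# Bake the config into the image so the proxy can find it at startup.
COPY config.yaml /app/config.yaml

# Re-state --config explicitly alongside the dual-stack host override.
# I'm assuming the base image's entrypoint is the litellm CLI; if it
# isn't, spell out the full "litellm ..." command here instead.
CMD ["--host", "::", "--port", "4000", "--config", "/app/config.yaml"]
```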
I’ve heard that the Enterprise version may provide an API endpoint to upload config.yaml, but I haven’t seen official documentation or a link confirming this. In my case, I worked around it by manually inserting the model rows into the models table in Postgres. It worked, but honestly it was more hassle than it was worth, and I wouldn’t recommend it.
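For anyone determined to try it anyway, it amounted to an insert along these lines. The table and column names here are from my memory of LiteLLM’s Prisma schema and may not match your version, so treat everything below as hypothetical and inspect the live database first:

```sql
-- Hypothetical sketch: table and column names may differ across
-- LiteLLM versions; check the actual schema before running this.
INSERT INTO "LiteLLM_ProxyModelTable"
  (model_id, model_name, litellm_params, model_info)
VALUES (
  gen_random_uuid()::text,  -- verify the real type of the id column
  'gpt-4o',                 -- the alias clients will request
  '{"model": "openai/gpt-4o", "api_key": "os.environ/OPENAI_API_KEY"}'::jsonb,
  '{}'::jsonb
);
```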
Once my project stabilizes and starts generating revenue, I’ll likely switch to the paid version. I also hope the LiteLLM team will consider making a stable config-import mechanism available to free users, or providing some other, easier path.