Hey everyone,
I’m exploring options for deploying AI models at the edge and was wondering whether Fly.io is a good choice for this use case. Given its focus on global app deployment and low-latency performance, it seems like a potential fit for AI inference workloads.
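For concreteness, the kind of service I have in mind is a small HTTP inference endpoint along these lines (just a sketch — the model call is stubbed out, and names like `predict` are placeholders, not anything Fly-specific):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(text: str) -> dict:
    # Placeholder for a real model call (e.g. an ONNX or PyTorch forward pass).
    return {"input": text, "label": "positive", "score": 0.98}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the (stubbed) model on it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("text", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8080) -> None:
    # In a container deployment the platform routes external traffic to the
    # port the app listens on; 8080 is a common default.
    HTTPServer(("0.0.0.0", port), InferenceHandler).serve_forever()
```

So basically: containerize something like this, deploy it to multiple regions, and serve model predictions close to users.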
A few questions I have:
- Does Fly.io provide GPU support for AI inference?
- How well does it handle scaling AI workloads across different regions?
- Are there any performance limitations when running AI models on Fly.io?
I would love to hear from anyone who has tried deploying AI workloads on Fly.io! Any insights or alternatives are also welcome.
Thanks!