Get help with Fly GPUs!

Fly GPUs are available for everyone to try. As you start using them, we want to make sure we’re here to answer questions and help you through deploying your first GPU-enabled Fly apps.

Here are a few questions other Fly devs have had:

GPU access and org trust

Starting with GPU access and security! The most common error some of you may encounter is this message when you try to deploy:

Your organization does not have enough trust to create GPU machines. Please contact [] to remove the restriction.

This means exactly what it says: our fraud algorithm is unsure about your account. We’re a little more picky about who has access to GPUs than we are for some of our other products. If you’re an existing Fly user with a history, we’ll look at your account and remove the restriction. Otherwise we might ask you to pre-authorize your card.

Performance CPUs and Midjourney Bot

One of you reported an issue (thanks!!) with the (Not)Midjourney Bot blog post where a non-GPU-enabled instance was being created from the fly.toml in the sample.

This is because the original post specified a shared CPU in the [[vm]] section, but GPU Machines now require performance CPUs (they just work a lot better with them). We’ve since fixed the sample to use performance-8x CPUs, so if you hit this issue, go try again!
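For reference, a [[vm]] section along these lines selects performance CPUs alongside the GPU (the values are illustrative; the keys mirror the sample config discussed later in this thread):

```toml
# Illustrative [[vm]] section for a GPU Machine.
# GPU Machines require performance CPUs, not shared ones.
[[vm]]
  cpu_kind = 'performance'
  cpus = 8
  gpus = 1
  size = 'a100-40gb'
```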

What else have you come across? We’re here to help.


The request isn’t specific to GPUs, but it matters most for expensive Machines like GPU ones: it would be easier to estimate costs if you could configure the time interval the Fly proxy waits before its automatic shutdown kicks in.
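To illustrate why that interval matters for budgeting, here’s a rough cost sketch. The per-hour price, request counts, and idle timeouts below are made-up placeholder numbers, not Fly’s actual pricing or proxy behavior:

```python
# Rough monthly cost sketch for an auto-stopping GPU Machine.
# All numbers are illustrative placeholders, not real Fly pricing.

def monthly_cost(price_per_hour, active_hours, requests, idle_timeout_s):
    """Billable time = active time plus the idle window the proxy
    waits before stopping the Machine after each burst of requests."""
    idle_hours = requests * idle_timeout_s / 3600
    return price_per_hour * (active_hours + idle_hours)

# Same workload, two hypothetical idle timeouts:
slow_stop = monthly_cost(2.50, active_hours=10, requests=300, idle_timeout_s=300)
fast_stop = monthly_cost(2.50, active_hours=10, requests=300, idle_timeout_s=60)
print(f"5-minute timeout: ${slow_stop:.2f}, 1-minute timeout: ${fast_stop:.2f}")
```

With these placeholder numbers the idle window dominates the bill, which is exactly why knowing (or controlling) the shutdown interval helps with estimates.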


To add: it doesn’t have to be configurable right away, but it would be nice if the documentation were clear about the current behavior.

I got an error message that looks a little different.

failed to provision seed volumes: failed creating volume: failed to create volume: To create volumes on GPU hosts please contact (Request ID: xxxxx)

I’ve sent emails but still haven’t gotten any responses.

When you need control over Machine idle shutdown, we recommend relying on the fact that Machines stop when their main process exits.
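For example, a wrapper like this (a hypothetical sketch, not Fly tooling; the job-polling functions are stand-ins for your own queue) exits its main process after a period with no work, which in turn stops the Machine:

```python
import time

# Hypothetical idle-shutdown wrapper: since a Fly Machine stops when
# its main process exits, we track the last time work arrived and
# exit once we've been idle longer than IDLE_LIMIT_S.
IDLE_LIMIT_S = 120

def run_until_idle(get_job, handle_job, clock=time.monotonic, sleep=time.sleep):
    """Process jobs until get_job() has returned None for IDLE_LIMIT_S."""
    last_work = clock()
    while clock() - last_work < IDLE_LIMIT_S:
        job = get_job()            # e.g. poll a queue; None if empty
        if job is not None:
            handle_job(job)
            last_work = clock()
        else:
            sleep(1)
    # Falling out of the loop ends the process, so the Machine stops.
```

The `clock` and `sleep` parameters are injectable only so the idle logic is easy to test; in the Machine you’d use the defaults.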

Thanks for the suggestion; for now I’m adding the above link to the GPU quickstart doc.

Can I ask what size volume you’re trying to create? There are some org limits on this. Once we figure this out I’ll go back and make that error message clearer too.

I followed this blog article:
Scaling Large Language Models to zero with Ollama · The Fly Blog

Here’s my fly.toml:

```toml
app = 'hellegpu'
primary_region = 'ord'

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']

[[vm]]
  memory = '16gb'
  cpu_kind = 'performance'
  cpus = 8
  gpus = 1
  size = 'a100-40gb'

[build]
  image = "ollama/ollama"

[mounts]
  source = "models"
  destination = "/root/.ollama"
  initial_size = "100gb"
```

Oh I see the issue. Your org was flagged by our trust algorithm. I fixed that and made the error message better. Go ahead and try again.


hey @nina, got the same error message and still haven’t gotten a response. I’ve been using Fly for a while now. How can I get the restriction removed from my account?

You should be good now!

Thanks Nina. Love the great customer support!

It’s working now. Thanks!

I am also having the same issue trying to deploy an app with a GPU. Can that restriction be removed? I sent an email as well. Thanks.

You should be all set.


Awesome, thanks!

I am having the same issue and also sent an email. I’m relatively new to Fly, so I’m pretty sure that’s why my account got flagged. Could you kindly lift the restriction for me as well?


Same issue here. Could the GPU restriction be removed from my account, please? Thanks!