Long running background tasks

Is there a way to run batched async tasks like ML inference or map-reduce?

For example, once a day or when necessary, we would start a VM that collects data from S3, processes it, uploads the results, and then shuts down.

Ideally, there would be no cost during downtime, and no assigned IP, since the VM doesn’t need to be publicly accessible. Placement could also be whichever region is least busy, within some jurisdictional restrictions.


This is internally possible, but not exposed to users yet. That’s how we run our remote builders.

Not setting any service on your app shouldn’t assign it an IP.

If you set all the regions you’d allow your app to run in, it would schedule it anywhere from that list whenever you start it.
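Putting those two points together (no service section, so no public IP, plus an allowed-region pool) could look something like the sketch below. This is an assumption about how such an app might be configured, not a confirmed recipe; the app name and image are placeholders:

```toml
# Hypothetical fly.toml for a worker app.
# No [[services]] section is defined, so no public IP should be allocated.
app = "my-batch-worker"          # placeholder app name

[build]
  image = "registry.example.com/batch-worker:latest"  # placeholder image
```

The allowed region pool would then be set separately, e.g. `flyctl regions set iad lhr nrt -a my-batch-worker`, and the scheduler could place the VM in any region from that list.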

The only thing missing is exposing the “ephemeral” kind of app to users. I can take a look at that. Let me get back to you.


Out of curiosity, what limits would you need? We’ve been kicking around the idea of cheap workers where the sun is down.


Thanks

Limits as in CPU or memory requirements? Since we’re doing the work in batches anyway, it doesn’t matter much if it takes a bit more time, and the work is fairly parallelized, so something like 4 CPUs should be more than enough.

Oh, I meant “which jurisdictional restrictions”?

We don’t have anything specific for that, but it’s something we’ve had requests for. Most of our customers are in the US, with some in Europe and Asia.


Any news about this?

Not yet.

We haven’t decided on the right API for this yet.

Hi, are there any updates in this space? I’d like to try porting over a Flask app that I’m running on GCP, but I’m waiting on a supported way to run background tasks.

Would be super interested in this as well.

I don’t think there’s a scheduler built into Fly yet, but it should be possible to run an app that’s just a worker (it doesn’t bind to any port), scaled up to N and back down to 0 with an external cron job.

Nothing internal to Fly yet, though.
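The external-cron approach above could be sketched as a crontab running on some always-on machine you already have. This is only a sketch: `my-batch-worker` and the schedule are placeholders, and it assumes `flyctl` is installed and authenticated on that machine:

```
# Hypothetical crontab: start one worker instance at 02:00 UTC each day,
# then scale back to zero two hours later when the batch should be done.
0 2 * * * flyctl scale count 1 -a my-batch-worker
0 4 * * * flyctl scale count 0 -a my-batch-worker
```

A more robust variant would have the worker scale itself down (or exit) when the batch finishes, rather than relying on a fixed two-hour window.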