> My understanding is, if I use machines in a similar way I’ll end up with a lot of idle machines accrued over time.
A single instance of a machine app (one that’s stopped but not destroyed) can continue to serve multiple requests over its lifetime, even concurrently if required (see `soft_limit` / `hard_limit` in `concurrency`).
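For reference, those limits live under the `concurrency` section of the service config in `fly.toml`; here’s a minimal sketch (the port, type, and numbers are just illustrative, not anything specific to this app):

```toml
[[services]]
  internal_port = 8080
  protocol = "tcp"

  # Fly's proxy prefers machines that are under soft_limit and stops
  # sending new work to a machine once it reaches hard_limit.
  [services.concurrency]
    type = "requests"   # or "connections"
    soft_limit = 20
    hard_limit = 25
```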
I like to think of machine apps as Amazon API Gateway + AWS Lambda.
My workflow is: after creating a Fly machine app, I `flyctl deploy` the Docker image (to whichever regions), and then send `http`, `tls`, or `tcp` requests to it (unsure if machines support `udp`; edit: `udp` should work), after having allocated it `v4` / `v6` public IPs. I am not really sure if machines work with Flycast IPs (it’d be cool if they did).
When a machine app isn’t serving any requests, it is free to stop itself (or can be stopped with `flyctl m stop <machine-id>`). Fly guarantees that `stopped` machines will be woken up whenever a request comes in (typical cold starts: 300ms).
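For the manual route, it looks something like this (machine ID and app name are placeholders):

```sh
# Find the machine IDs in the app
flyctl machine list --app my-machine-app

# Stop one; it sticks around in a stopped state, not destroyed
flyctl m stop <machine-id> --app my-machine-app

# Start it back up explicitly (or let Fly wake it on the next request)
flyctl m start <machine-id> --app my-machine-app
```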
This Fly blog post has a nice rundown of deploying a machine with `flyctl` (which is what I prefer using over Fly’s GraphQL / REST API).
> if I use machines in a similar way I’ll end up with a lot of idle machines accrued over time. Is that correct?
If I am not wrong, someone who keeps destroying (and then recreating) machines won’t accrue a collection of them. Destroyed machines are relegated to the dustbins of history, never to be seen again (except on that month’s invoice).
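In `flyctl` terms, that destroy-and-recreate cycle is roughly the following (IDs, app name, and image are all placeholders):

```sh
# Stop and then permanently destroy a machine
flyctl m stop <machine-id> --app my-machine-app
flyctl m destroy <machine-id> --app my-machine-app

# Later, create a brand-new machine from an image if needed
flyctl m run nginx:latest --app my-machine-app
```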
My understanding of machine apps is just from fiddling with them for the past few hours, so that’s one disclaimer. And for the next one: I do not know anything about Heroku, so I can’t really answer your question of what the Fly equivalent of `heroku run` is.