How are app VMs (re)used?

I’ve noticed that one app VM might get reused to service several requests sequentially. Will one VM ever be used to service multiple requests concurrently? If so, is there a way to prevent this?

In my scenario there may be unavoidable global state (e.g., environment vars, files, node modules, …) that needs to be different for each request. If one VM handles multiple requests serially, I may have a chance to clean up between requests. If they are handled concurrently, I don’t see a way to isolate them.

Machines seem like the real answer here, but I haven’t yet been able to get the simple Node example working with them (I get it running but then can’t talk to it, as described in Tease us with more "machine" info? - #8 by jeffmcaffer). In the meantime, I’m looking to see if Apps can at least enable some prototyping.

I would think the default is for a VM to serve requests concurrently, since for most apps that’s the desired behaviour (it reduces the number of VMs needed).

But to prevent that, I would guess you could set both the concurrency soft_limit and hard_limit values to 1 in fly.toml. Maybe give that a try:
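A sketch of what that might look like in fly.toml (the surrounding [[services]] values are just placeholders; the relevant bit is the concurrency section):

[[services]]
  internal_port = 8080
  protocol = "tcp"

  [services.concurrency]
    # Allow at most one in-flight connection per VM.
    hard_limit = 1
    soft_limit = 1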

There will be a (brief) delay starting a new VM when request number 2 arrives, so you’d have to weigh the number of VMs you want running 24/7 against any delay due to autoscaling.
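For the always-on side of that trade-off, you can pin how many VMs stay running with flyctl, e.g. (a sketch using the standard scale command):

fly scale count 2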


Thanks @greg. I optimistically confused that doc with the way AWS Lambda works, where requests on a given VM run sequentially. Since I’m running arbitrary user code, I need a fresh environment for each execution. May just have to wait for Machines…


Machines will be the real answer here; they’re just not finished.

App VMs could potentially be used this way. You can set a hard_limit of 1 in the fly.toml for the service, then exit the VM when the request finishes. It’s a little hacky, but that’s close to what you’ll be able to do with Machines.


Thanks @kurt. Just to clarify, when you say “exit the VM” you mean exit the process that was defined as the entry point for the app (e.g., process.exit(0) in index.js for a Node app)? Or is there a flyctl command that needs to be run?


Yep, exactly! Just exit 0.
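For reference, a minimal sketch of that pattern in index.js (the handler body, response, and port fallback are placeholders; the key part is exiting once the response has been flushed):

const http = require('http');

// With hard_limit = 1, Fly won't route a second request to this VM, so each VM
// serves exactly one request and then exits; a fresh VM handles the next one.
const server = http.createServer((req, res) => {
  // ... run the per-request work here ...
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // Exit with status 0 once the response has been written out to the client.
  res.end('done\n', () => process.exit(0));
});

server.listen(process.env.PORT || 8080, '0.0.0.0');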


OK, that worked, thanks. It takes quite a while for the app to come back up after a process exit: the first request completes in ~100 ms, but a second request fired at the same time takes ~14 s to complete. Watching the logs, the VM appears to be up and running but then waits quite a while on the health checks. I dropped the numbers in services.tcp_checks but it didn’t get much better.
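For anyone following along, the kind of change I mean looks roughly like this (the values are illustrative, not the ones I settled on):

  [[services.tcp_checks]]
    # Shorter grace period and interval so a freshly booted VM passes checks sooner.
    grace_period = "1s"
    interval = "5s"
    timeout = "2s"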

I saw mention of max-per-region in the TOML doc, so I added the following. It seemed to help two concurrent requests the first time, but thereafter it was about a 20 s cycle time.

[deploy]
  max-per-region = 2

Rather than chase this down, I’d happily move to Machines if I could get an example of a Node-based machine that is built by the Fly build setup and can receive requests (see Tease us with more “machine” info? - General - Fly.io). It feels soooo close.


If you run fly vm status <id> you should see an events timeline.

Running a few VMs is a good idea.

Machines are really not ready for anything yet, I’m sorry. They’re available to poke at, but it’ll be a few weeks before I’d tell you to use them for what you’re trying to do now.
