Advice on a worker architecture using the lower-level Machines API

Hi!

I was wondering what the best way would be to solve this need:

I have some work that can be split into N batches.

I want to be able to dynamically run N machines to process those batches concurrently.

My client usage patterns are not stable throughout the day (clients can submit batches at any time), so I would ideally like to scale to 0.

However, once an order is sent, the batches need to be processed as fast as possible (I can tolerate 10-15s of startup, not much more).

What would be the recommended approach:

#1) Over-provision X machines (X >> N) and keep them all stopped; start N of them per batch and stop them afterwards (so I pay the init cost once and get fast batch processing; the drawback is that I need to write provisioning logic to create more machines if I'm at capacity).

#2) Start N machines dynamically: eat the init cost, but no need to over-provision; stop/delete them after processing (rough sketch below).

#3) Use an always-on service + min_machines + auto_start, and use HTTP requests to coordinate my batch orders?
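To make #2 a bit more concrete, here is roughly what I had in mind (a minimal sketch in Python, assuming the Machines API create endpoint on api.machines.dev and a FLY_API_TOKEN; the app name "batch-workers", the worker image, and the BATCH_ID env var are just placeholders):

```python
import os
import requests

# Sketch of option #2: create one Machine per batch, let it destroy itself when done.
API_BASE = "https://api.machines.dev/v1"
APP_NAME = "batch-workers"            # placeholder app name
TOKEN = os.environ["FLY_API_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def create_worker(batch_id: str) -> str:
    """Create and boot a Machine that processes one batch, then exits and destroys itself."""
    resp = requests.post(
        f"{API_BASE}/apps/{APP_NAME}/machines",
        headers=HEADERS,
        json={
            "name": f"worker-{batch_id}",
            "config": {
                "image": "registry.fly.io/batch-workers:latest",  # placeholder worker image
                "env": {"BATCH_ID": batch_id},
                "guest": {"cpu_kind": "shared", "cpus": 1, "memory_mb": 512},
                "restart": {"policy": "no"},  # run once, don't restart on exit
                "auto_destroy": True,         # Machine is destroyed when the process exits
            },
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    # One Machine per batch; they all run concurrently and clean themselves up.
    machine_ids = [create_worker(batch_id) for batch_id in ["b1", "b2", "b3"]]
    print("started workers:", machine_ids)
```

For #1 I imagine it would be the same idea, except I would keep a pool of pre-created machine ids and hit their /start and /stop endpoints instead of creating new machines each time.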

I might be missing something obvious, thanks for the help!

T.
