Is there a good way to load test shared CPUs?

I have an extremely light Go server that essentially either calls Redis or calls an external REST service to generate a response. I think that because it’s so light, I’ll be able to set the “concurrent users” count higher than 20.

Of course I want to test this… without taking anyone else down. Do I need to worry if I’m throwing 50-100 concurrent users at an instance with a shared CPU?

No need to worry! Go for it.

For reference, shared CPUs use Linux’s Completely Fair Scheduler (CFS) to allocate CPU time:

The micro-1x instances get 1/10th priority, and micro-2x get 1/5th priority, under load. But if you’re running on a CPU without much other demand (which is pretty common), you can burst up to a full CPU, which really just means your benchmarks may look really nice. :slight_smile:


Interesting, thank you!