Heh, no doubt. To make something like this, you have to be brave, since no one remembers how to do anything with bare metal anymore!
Excellent. Scaling up a hardware business takes years unless you’re doing massive overcommit shenanigans, in which case it’s probably going to fail anyway.
I generally think “the cloud” is terrible, useless, and for suckers, but I’m excited to try this out. An observation regarding latency for Elixir LiveView applications: I currently have a small app deployed in Europe, even though 40% of my traffic is North American and 30% is Indian. A LiveView update from the US west coast to Europe takes around 550ms. On localhost it’s about 250ms (getting that down to 100ms would require some optimization, but that’s neither here nor there). After doing some ping tests, it seems the minimum latency would be 5ms from various locations in Europe and NA, with a maximum of around 30ms for India. That could translate to a 250-300ms improvement in redraw times. To compare: if I do every performance optimization I can, I might eke out an extra 150ms on the redraw, and a reverse proxy would probably save 100ms.
I’m not sure people truly understand what this means, but the difference between a 0.5 second redraw (not annoying) and a 0.25 second redraw (bordering on instantaneous, like a desktop application) would be an absurd improvement in user experience. Utterly absurd… if the CPU performance checks out.
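To make the arithmetic concrete, here’s a back-of-envelope sketch. This is my own crude model (the function name and the assumption of one websocket round trip per LiveView event are mine, not anything measured from LiveView internals):

```python
def estimated_redraw_ms(server_ms: int, network_ms: int, round_trips: int = 1) -> int:
    """Crude model: total redraw = server render time + network time.

    server_ms:   what the redraw costs on localhost (render + diff).
    network_ms:  round-trip time between browser and server.
    round_trips: assume one websocket round trip per LiveView event.
    """
    return server_ms + round_trips * network_ms

# My numbers: ~250ms on localhost, ~300ms of network from the
# US west coast to a server in Europe.
print(estimated_redraw_ms(250, 300))  # 550 -- roughly what I see today

# With edge deployment: 5-30ms RTT, even from India.
print(estimated_redraw_ms(250, 30))   # 280 -- worst case, near-instant
```

Under this model, almost the entire win comes from collapsing the network term, which is why no amount of app-level optimization can match moving the server closer.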
Which brings me to my next question, regarding sizing: I’m having trouble getting my head around what to expect coming from other providers’ CPU pricing. The fact that providers deliver completely different performance per CPU is another matter, but even setting that aside, I don’t understand what rough equivalent I’m looking for.
For instance, $5/month usually gets you 1 shared CPU and 1 GB of RAM, which matches up with shared-cpu-1x. Then $10/month for 1 shared CPU / 2 GB also makes sense.
After that, dedicated-cpu-1x / 2GB jumps to $31. I would imagine that is 2 vCPUs: 1 core and its hyperthread. With 4GB, it’s $40. That “looks” like the equivalent of a CPU-optimized droplet on DigitalOcean, or maybe the dedicated-CPU plans on Linode. (No one can compete with BuyVM on price/performance, so that’s a bad comparison point.)
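To sanity-check my reading of the pricing, here’s a quick sketch using the prices I quoted above. The split into a RAM cost and a dedicated-CPU premium is my own guess at the structure, not anything published:

```python
# Plan prices as I understand them, keyed by (plan, GB of RAM) -> $/month.
plans = {
    ("shared-cpu-1x", 1): 5,
    ("shared-cpu-1x", 2): 10,
    ("dedicated-cpu-1x", 2): 31,
    ("dedicated-cpu-1x", 4): 40,
}

# Implied RAM price on shared plans: the extra 1 GB costs $5.
ram_per_gb_shared = plans[("shared-cpu-1x", 2)] - plans[("shared-cpu-1x", 1)]

# Implied premium for the dedicated CPU, holding RAM fixed at 2 GB: $21.
dedicated_premium = plans[("dedicated-cpu-1x", 2)] - plans[("shared-cpu-1x", 2)]

print(ram_per_gb_shared)   # 5
print(dedicated_premium)   # 21
```

If that $21 premium really buys a full core plus hyperthread, the comparison to DigitalOcean’s CPU-optimized droplets is the right mental model; if it’s a single dedicated vCPU, it’s priced more like Linode’s dedicated plans.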
Maybe I’m misunderstanding, but it seems you’re missing shared-cpu-2x through shared-cpu-Nx? I know benchmarks aren’t your value prop, but do you have any rough comparisons? It would make transitioning easier.