Upstash performance

Howdy!

I’m moving my stack from Heroku to Fly and am having a grand time. The only part that’s not workin’ out for me is the Upstash integration – the performance seems abysmal. The benchmark timing for the ‘starter’ Upstash plan is so far off the mark that there’s just no way I could ever use it in production.

I wrote a short benchmarking script to insert 1K small key-val pairs (~10 bytes) and 100 larger key-val pairs (~10KB), then measure retrieval timing.

Using Heroku Redis (Premium 1), my timings:

  "redis_get_mean": 0.70ms,
  "redis_get_p95": 0.76ms,
  "redis_large_get_mean": 0.74ms,
  "redis_large_get_p95": 0.80ms

Using Upstash (single region, same region as my app, LAX):

  "redis_get_mean": 14.1ms,
  "redis_get_p95": 19.9ms,
  "redis_large_get_mean": 16.5ms,
  "redis_large_get_p95": 25.2ms,

As a sanity check, I spun up a redis.io starter database and pointed the same script at it. Not as quick as Heroku, but leaps and bounds better than Upstash…

  "redis_get_mean": 1.37ms,
  "redis_get_p95": 2.61ms,
  "redis_large_get_mean": 1.32ms,
  "redis_large_get_p95": 1.90ms

I’d really like to use Upstash, but with the above numbers, my performance would tank.

Any thoughts? I’m all out of ideas.

Thanks :slight_smile:

Could you share a link to your benchmark script? Maybe that will tempt an employee to try it…


Hey, can you send the benchmark details along with your database endpoint URL to support@upstash.com so our team can also take a look?


My benchmarking script is below. I’ve spun up a handful of different Upstash databases, and they all perform similarly.

import json
import os
import statistics
import time

import redis


def benchmark_operation(r, key_prefix, value, iterations, label):
    """Benchmark a specific redis operation."""
    # Seed the keys (5-minute TTL so strays expire on their own)
    for i in range(iterations):
        r.set(f"{key_prefix}{i}", value, ex=300)

    # Time each GET round trip, in milliseconds
    times = []
    for i in range(iterations):
        start = time.perf_counter()
        r.get(f"{key_prefix}{i}")
        times.append((time.perf_counter() - start) * 1000)

    # Cleanup
    for i in range(iterations):
        r.delete(f"{key_prefix}{i}")

    mean = statistics.mean(times)
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile
    p95 = statistics.quantiles(times, n=20)[18]
    print(f"{label:20s} mean: {mean:6.2f}ms  p95: {p95:6.2f}ms")

    return {f"{key_prefix}mean": mean, f"{key_prefix}p95": p95}


def run_benchmark(iterations=1000):
    """Run Redis performance benchmark."""
    redis_url = os.environ.get("REDIS_URL")
    if not redis_url:
        raise ValueError("REDIS_URL environment variable not set")

    r = redis.from_url(redis_url)
    print(f"\nRedis Benchmark ({iterations} iterations)")
    print("=" * 60)

    results = {}
    results.update(benchmark_operation(r, "small_", "value", iterations, "Small values"))
    results.update(benchmark_operation(r, "large_", "x" * 10000, iterations // 10, "Large values (10KB)"))

    r.close()

    print("\nResults:")
    print(json.dumps(results, indent=2))
    return results


if __name__ == "__main__":
    run_benchmark()

That’s interesting. I’m using Upstash too, but I’ve never benchmarked it.

Where is your Upstash instance located? Could it be that it’s far away from the Fly servers?

Let us know your findings.

Where is your Upstash instance located? Could it be that it’s far away from the Fly servers?

Both my app and my Upstash instance are in LAX.


Bump. Still poking at this one. Upstash support has been helpful, but no matter what I do, I can’t seem to get < 10ms mean response timings from colocated Fly machines and Upstash databases.

I noticed that when on the “pay as you go” Upstash plan, I’m getting single-digit ms responses, but the second I change to another plan, performance seems to degrade to what I’m currently seeing. Their team has informed me that there are no performance differences between plans, so I’m waiting to hear back.

On a lark, I stood up a self-hosted redis instance in LAX, alongside my app, and am seeing <1ms timings. So puzzling.

Final update: Upstash engineers confirmed that there is a performance difference between the “pay as you go” plan and the others – the 100 requests-per-second limit.

There is a performance difference between the Pay As You Go and Starter plans, and it comes from how Upstash Redis enforces the requests-per-second (RPS) limit.

Upstash Redis enforces this by dividing each second into 10 ms buckets — so there are 100 buckets per second.

With 100 RPS, the effective limit is 1 command per 10 ms bucket.
So if you send 100 requests at once, many of them must wait until a bucket is available.

Which explains what I’m seeing (although maybe not why I periodically see 30ms+ response timings when very much under the 100rps limit…).
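To convince myself the bucket math checks out, here’s a toy simulation of that enforcement. The per-bucket arithmetic is my guess at how the throttle behaves based on the description above, not Upstash’s actual implementation:

```python
def simulate_bucketed_limit(num_requests, rps_limit, bucket_ms=10):
    """Model an RPS limit enforced per 10 ms bucket: at 100 RPS, only one
    command may run in each bucket, so a burst queues behind the buckets.

    Returns the queueing delay (in ms) seen by each request in the burst.
    """
    buckets_per_second = 1000 // bucket_ms                # 100 buckets/sec
    per_bucket = max(1, rps_limit // buckets_per_second)  # 1 cmd/bucket at 100 RPS
    # Request i lands in bucket i // per_bucket; each bucket opens bucket_ms later.
    return [(i // per_bucket) * bucket_ms for i in range(num_requests)]

delays = simulate_bucketed_limit(100, 100)
# last request in a 100-command burst waits 990 ms; mean added latency is 495 ms
```

Under this model, even a modest burst smears out across buckets, which would show up exactly as inflated mean and p95 timings in a tight benchmark loop.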

Not ideal, as my use case would result in ridiculous bills if I were on the ‘pay as you go’ plan. I was reeeeally hoping to not self-host, but here I come redis:latest :wink:

Final update v2: the team at Upstash has offered to personally assist with my requirements until they roll out some new plans which should hopefully work for my team out of the box. Great customer support on their side.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.