Managing Redis rate limits on Sidekiq and Rails

Hi, I’m having an issue with Redis rate limits in my Rails application using Sidekiq. I’ve only just deployed it privately and I’m the sole user, yet after about 5 minutes I start getting innumerable ERR max daily request limit exceeded errors, even though I’ve barely used the application.

I can’t imagine what it’s doing to hit the rate limits so quickly. The same application runs in production on Heroku (which I’m trying to move to Fly.io), also on the basic free tiers, and I’ve never seen it hit rate limits there. It’s a fairly low-traffic application and only uses Sidekiq for a few trivial tasks.

Here is my setup:

Errors

2022-08-30T09:44:43.971 app[d734217a] ams [info] 2022-08-30T09:44:43.971Z pid=515 tid=46xn WARN: Redis::CommandError: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T09:44:43.971 app[d734217a] ams [info] 2022-08-30T09:44:43.971Z pid=515 tid=46xn WARN: /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:162:in `call'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:269:in `block in send_command'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:268:in `synchronize'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:268:in `send_command'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/commands/scripting.rb:46:in `script'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:51:in `zpopbyscore'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:34:in `block (2 levels) in enqueue_jobs'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:29:in `each'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:29:in `block in enqueue_jobs'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq.rb:164:in `block in redis'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:63:in `block (2 levels) in with'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:62:in `handle_interrupt'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:62:in `block in with'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:59:in `handle_interrupt'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:59:in `with'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq.rb:161:in `redis'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:28:in `enqueue_jobs'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:108:in `enqueue'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:100:in `block in start'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/component.rb:8:in `watchdog'

2022-08-30T09:44:43.971 app[d734217a] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/component.rb:17:in `block in safe_thread'

2022-08-30T09:44:19.681 app[d734217a] ams [info] 2022-08-30T09:44:19.681Z pid=515 tid=46x3 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T09:44:24.698 app[d734217a] ams [info] 2022-08-30T09:44:24.697Z pid=515 tid=46x3 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T09:44:30.249 app[d734217a] ams [info] 2022-08-30T09:44:30.249Z pid=515 tid=46xn WARN: Redis::CommandError: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T09:44:30.250 app[d734217a] ams [info] 2022-08-30T09:44:30.249Z pid=515 tid=46xn WARN: /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:162:in `call'

sidekiq.yml

:verbose: false
:concurrency: 10

# Set timeout to 8 on Heroku, longer if you manage your own systems.
:timeout: 8

# Sidekiq will run this file through ERB when reading it so you can
# even put in dynamic logic, like a host-specific queue.
# http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
:queues:
  - critical
  - default
  - <%= `hostname`.strip %>
  - mailers
  - low

# you can override concurrency based on environment
production:
  :concurrency: 2
staging:
  :concurrency: 15

My web and worker processes are defined in my fly.toml file.

fly.toml

[processes]
  web = "bundle exec puma"
  worker = "bundle exec sidekiq"

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["web"]
  protocol = "tcp"
  script_checks = []
  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

Procfile

web: bin/rails server
worker: bundle exec sidekiq -c 2

I’m not sure where I should make changes. Should I lower the concurrency somewhere in the fly.toml file? Any help would be greatly appreciated.

Sidekiq runs a few Redis commands every second to check for new jobs. Given the concurrency levels you have set, it makes sense that you’d reach this limit quickly. If you’re not able to upgrade, you could run your own Redis instance as an app inside your organization.
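To put rough numbers on it, here’s a back-of-envelope sketch (an assumption-laden lower bound: it counts one Redis command per poll, while a real Sidekiq process issues several commands per cycle for the heartbeat, the scheduled poller, and job fetching):

```ruby
# Lower-bound estimate of Redis commands per day for a loop that
# issues one command every `interval` seconds. Real Sidekiq usage
# is higher, since each cycle runs several commands.
SECONDS_PER_DAY = 24 * 60 * 60

def polls_per_day(interval)
  SECONDS_PER_DAY / interval
end

[1, 5, 15, 60].each do |interval|
  puts format("every %2ds -> %6d commands/day", interval, polls_per_day(interval))
end
```

Even a single loop polling every 5 seconds comes to 17,280 commands per day, well past a 10,000-command cap.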

Those concurrency settings were the defaults. If I set the concurrency to 1, will that change anything? Everything we use Sidekiq for could sit in the background for 5-10 minutes without any issue. How can I make it check for new jobs every 60 seconds or so instead of every second?
Would that be slow enough to live on the free tier? How would I configure it?

We’ve been running the free tier on Heroku for at least a year with these settings without any issue, so I was just surprised.

Thanks,

Reducing concurrency to 1 might help. Can you give it a try? Also, do you see anything in the logs?

Yeah, setting it to 1 helped, but looking through the logs it appears to poll on average every 3-5 seconds. Given that the low tier caps at 10,000 commands per 24 hours, I’ll still break that before the day is finished (even at 5 seconds, that’s 17,280 requests per day!).

Unless I can figure out how to slow it down to 20 or 30 seconds, I’ll probably just switch to another service with a higher cap. ;-(

2022-08-30T13:48:45.296 app[d31d32f0] ams [info] I, [2022-08-30T13:48:45.294632 #515] INFO -- : [461aaab8-285e-464a-af89-180c0753fe53] Started GET "/admin/sidekiq/stats" for 168.220.95.126 at 2022-08-30 13:48:45 +0000

2022-08-30T13:48:47.907 app[d31d32f0] ams [info] I, [2022-08-30T13:48:47.905742 #515] INFO -- : [cd18be0a-98b0-4995-96ae-b95e58d6e06d] Started GET "/admin/sidekiq/stats" for 168.220.95.126 at 2022-08-30 13:48:47 +0000

2022-08-30T13:49:00.405 app[d31d32f0] ams [info] I, [2022-08-30T13:49:00.404804 #515] INFO -- : [893b98ad-55ae-40d6-acfd-7d4efb2d8e82] Started GET "/admin/sidekiq/stats" for 168.220.95.126 at 2022-08-30 13:49:00 +0000

2022-08-30T13:49:02.396 app[d31d32f0] ams [info] I, [2022-08-30T13:49:02.394895 #515] INFO -- : [b5c2f20a-a759-4dfb-a1b2-6a77223ca0a7] Started GET "/admin/sidekiq/stats" for 168.220.95.126 at 2022-08-30 13:49:02 +0000

2022-08-30T13:49:05.648 app[d31d32f0] ams [info] I, [2022-08-30T13:49:05.646651 #515] INFO -- : [76eb9092-a9fa-413e-88f5-ec53a5c0f2c3] Started GET "/admin/sidekiq/stats" for 168.220.95.126 at 2022-08-30 13:49:05 +0000

I looked into this a bit more. Sidekiq blocks waiting for new jobs, but reconnects after a 2-second timeout when idle. So a single worker would, in theory, need a larger timeout to avoid hitting the limit. You could try something like this in a Rails initializer:

Sidekiq::BasicFetch::TIMEOUT = 15

But the logs you posted suggest another issue on top of this. Those web requests appear to be going to the Sidekiq web interface. Do you have a browser window open with live polling enabled?

Oh cool. I will try that and let you know.

Ah, yeah, then you can ignore those. I had it open, but I don’t normally use the web interface unless I’m debugging an issue.

Thanks!

Watching the logs, I think that may have worked.

I just added :timeout: 15 to the sidekiq.yml file.

I will know for sure tomorrow but I think that’s perfect.

Thanks again.

The timeout in sidekiq.yml is the shutdown timeout (how long running jobs get to finish when Sidekiq stops), not the fetch timeout being referred to here, which is hardcoded.


Oops, OK, I have instead set it in an initializer as you suggested:

Sidekiq.configure_server do |config|
  Sidekiq::BasicFetch::TIMEOUT = 15
end

Hopefully that works.

Nope, even after just 2 hours it still starts failing.

2022-08-30T16:34:09.550 app[0ae8de40] ams [info] 2022-08-30T16:34:09.550Z pid=515 tid=47m7 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:14.573 app[0ae8de40] ams [info] 2022-08-30T16:34:14.573Z pid=515 tid=47m7 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:19.590 app[0ae8de40] ams [info] 2022-08-30T16:34:19.589Z pid=515 tid=47m7 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] 2022-08-30T16:34:22.005Z pid=515 tid=47of ERROR: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] 2022-08-30T16:34:22.005Z pid=515 tid=47of WARN: Redis::CommandError: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] 2022-08-30T16:34:22.005Z pid=515 tid=47of WARN: /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:162:in `call'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:269:in `block in send_command'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:268:in `synchronize'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis.rb:268:in `send_command'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/commands/scripting.rb:110:in `_eval'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/commands/scripting.rb:97:in `evalsha'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:54:in `zpopbyscore'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:34:in `block (2 levels) in enqueue_jobs'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:29:in `each'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:29:in `block in enqueue_jobs'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq.rb:164:in `block in redis'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:63:in `block (2 levels) in with'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:62:in `handle_interrupt'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:62:in `block in with'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:59:in `handle_interrupt'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/connection_pool-2.2.5/lib/connection_pool.rb:59:in `with'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq.rb:161:in `redis'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:28:in `enqueue_jobs'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:108:in `enqueue'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/scheduled.rb:100:in `block in start'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/component.rb:8:in `watchdog'

2022-08-30T16:34:22.008 app[0ae8de40] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sidekiq-6.5.4/lib/sidekiq/component.rb:17:in `block in safe_thread'

2022-08-30T16:34:22.010 app[0ae8de40] ams [info] I, [2022-08-30T16:34:22.009591 #515] INFO -- sentry: [Transport] Sending envelope with items [event] 54f31a8bae4b444a8a4adc75eba9a124 to Sentry

2022-08-30T16:34:24.610 app[0ae8de40] ams [info] 2022-08-30T16:34:24.610Z pid=515 tid=47m7 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T16:34:29.629 app[0ae8de40] ams [info] 2022-08-30T16:34:29.629Z pid=515 tid=47m7 ERROR: heartbeat: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

Good news - for the time being, free plans are now capped at 100MB of bandwidth instead of 10k commands. Can you try again with these hacks removed?

Wow, that’d be perfect.

I have removed these hacks but as of yet I still receive the same errors:

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] 2022-08-30T17:32:50.151Z pid=515 tid=53n WARN: Redis::CommandError: ERR max daily request limit exceeded. Limit: 10000, Usage: 10000. See https://docs.upstash.com/overall/databasetypes for details

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] 2022-08-30T17:32:50.151Z pid=515 tid=53n WARN: /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:162:in `call'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:119:in `block in connect'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:344:in `with_reconnect'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:114:in `connect'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:417:in `ensure_connected'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:269:in `block in process'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/redis-4.7.1/lib/redis/client.rb:356:in `logging'

2022-08-30T17:32:50.151 app[3cbe3b49] ams [info] /app/vendor/bundle/ruby/3.1.0/gems/sentry-ruby-5.4.1/lib/sentry/redis.rb:78:in `block in logging'

Perhaps I should delete this Redis instance and create a new one? But I am hopeful. :wink:

Can you try again now? Looks like your DB just synced.

Yes! It seems all good now. I’ll let you know if something changes but it seems perfect.

Thanks again!

Update: I haven’t made any changes, but now I constantly get these errors. I guess I’ll try reducing concurrency back down to 1.

2022-09-02T10:17:20.487 app[dd142973] ams [info] 2022-09-02T10:17:20.487Z pid=513 tid=47mp WARN: Your Redis network connection is performing extremely poorly.

2022-09-02T10:17:20.487 app[dd142973] ams [info] Last RTT readings were [286498, 287021, 286616, 286685, 286494], ideally these should be < 1000.

2022-09-02T10:17:20.487 app[dd142973] ams [info] Ensure Redis is running in the same AZ or datacenter as Sidekiq.

2022-09-02T10:17:20.487 app[dd142973] ams [info] If these values are close to 100,000, that means your Sidekiq process may be

2022-09-02T10:17:20.487 app[dd142973] ams [info] CPU-saturated; reduce your concurrency and/or see https://github.com/mperham/sidekiq/discussions/5039

Did you ever find the source of the issue? I’m also using Sidekiq and Upstash Redis, and my use case is also fairly simple (one job every 5 minutes).

What I’ve tried is running Sidekiq with concurrency 1, and also changing Sidekiq’s scheduled poll interval to 30 seconds:

Sidekiq.configure_server do |config|
  config.average_scheduled_poll_interval = 30
end

Unfortunately, neither worked.
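For completeness, I believe the same setting can also go in sidekiq.yml (equivalent to the initializer, so it would likely not help on its own either - it only controls the scheduled/retry poller, not the heartbeat or fetch loop):

```yaml
# sidekiq.yml - average interval, in seconds, between scheduled-job polls
:average_scheduled_poll_interval: 30
```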

So just an update. I successfully reduced Redis usage by switching to Resque and resque-scheduler, which give more control over how often Redis is hit (via the INTERVAL and RESQUE_SCHEDULER_INTERVAL env vars).

However, I still hit the 10k command limit on the Upstash free tier. I’ve concluded that even basic background processing in Rails isn’t conducive to this free tier, and I’ve switched to the Redis Cloud free tier instead, which limits memory rather than the number of commands.
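In case it helps anyone following the same route, this is the kind of Procfile setup I mean - a sketch using the standard resque / resque-scheduler rake tasks, with both env vars raised from their defaults (adjust queue names and intervals for your app):

```
worker: INTERVAL=30 bundle exec rake resque:work QUEUE=*
scheduler: RESQUE_SCHEDULER_INTERVAL=30 bundle exec rake resque:scheduler
```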