Self Hosted Redis on Fly.io

Upstash's pricing is not compatible with Bull queues, and I have some dev environments that I'm not worried about running a managed service for. I am trying to self-host a Redis instance (I had this working at some point in the past, but I cannot figure out what's going on now).

Here is the fly.toml I am using:

app = "trellis-bull-redis-develop"
primary_region = "yyz"

[mounts]
destination = "/data"
source = "bull_redis_server_develop"

[metrics]
port = 9091
path = "/metrics"

[build]
image = "flyio/redis:6.2.6"

[[vm]]
cpu_kind = "shared"
cpus = 1
memory_mb = 256

[[services]]
internal_port = 6379
protocol      = "tcp"

It has been stitched together from:

  1. Redis - standalone Redis Server · Fly Docs
  2. GitHub - fly-apps/redis: Launch a Redis server on Fly

The Redis server starts up fine from what I can tell, and I have a private IPv6 address assigned to it, so I should be able to connect to it using the Flycast address.

I am trying to connect to it using the following configurations:

REDIS_HOST=trellis-bull-redis-develop.fly.dev (also tried .flycast here instead of .fly.dev)
REDIS_USERNAME=default
REDIS_PASSWORD=<the password I set as a secret>
REDIS_PORT=6379
REDIS_FAMILY=6
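For reference, those variables map onto the ioredis options roughly like this (a sketch of the client side, since Bull uses ioredis under the hood; the host mirrors the value above and the password is a placeholder):

```ts
import Redis from "ioredis";

// Sketch of the connection built from the variables above.
// Host mirrors REDIS_HOST; the password is a placeholder.
const redis = new Redis({
  host: "trellis-bull-redis-develop.fly.dev", // also tried .flycast
  port: 6379,
  username: "default",
  password: process.env.REDIS_PASSWORD,
  family: 6, // REDIS_FAMILY=6: Fly private addresses are IPv6
});

redis
  .ping()
  .then((pong) => console.log("PING ->", pong))
  .catch((err) => console.error("connection failed:", err));
```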

But I am getting connection reset errors:
error: { errno: -104, code: 'ECONNRESET', syscall: 'read' },

I was getting a different error before I added the family 6 part (which makes sense, as it's over IPv6).

This is the same env variable structure that worked with ElastiCache, DigitalOcean Redis, and Upstash Redis, but it doesn't seem to work here.

I tried enabling TLS, which also didn't work (but I don't believe TLS should be enabled, as the Fly Redis Docker image doesn't have it configured).

Not really sure what else to check. I THINK it should be available and open to connections, but either it isn't and the "connection reset" is a red herring, or something is legitimately resetting the connection.

Resolved!

No need for the IPv6, so I removed that and changed the host to .internal instead of .fly.dev/.flycast.

1 Like

@yharaskrik What was your experience with BullMQ? From our docs:

Note that empty responses are not counted towards your bill. This prevents billing for standard polling behavior of tools like Sidekiq or BullMQ.

It would be helpful to know if you’re not seeing this behavior in practice when using Upstash.

Hey @joshua-fly, we are using Bull, not BullMQ (not sure whether BullMQ has gotten better with the whole empty-response thing or not), but I was still seeing a large number of commands being charged. There was a thread somewhere (I think on a GitHub issue?) saying that, yes, the majority of commands are empty responses and correctly not counted, but somewhere around 25% are not (25% while idle; commands that have a body are of course counted as normal).

What I was seeing was that, even while sitting idle, our servers that connected to the Upstash Redis absolutely demolished command usage. The free tier of 10K commands was used up in something like tens of minutes, if not faster. Obviously I don't expect to be able to use the free tier in prod, but I extrapolated it out and it was going to be super expensive.

My plan was to use a small single-instance Redis for our Bull queues and Upstash for all the critical caching stuff.

Thanks for the info. How many servers did you have connected at the time?

Of course! Happy to provide any context I can.

At the time we only had 1 server that was polling our Bull queues; the other 3 only ever added to the queues (and there was no traffic to those servers at the time because I was in the process of moving our dev environment from AWS).

It could have been something else, but to try and debug it I added the DEBUG=ioredis:* variable and saw that Bull was constantly making requests, so I figured it was that; I just don't know whether those requests were "empty" or not.
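Another way to see exactly what is being fired would be to count commands per minute while the queues sit idle; here is a rough sketch (not something I've fully vetted) using ioredis's MONITOR support, with a placeholder connection URL:

```ts
import Redis from "ioredis";

// Rough sketch: tap MONITOR on the dev Redis and count commands per minute
// while the Bull workers sit idle. The connection URL is a placeholder.
const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function watchIdleTraffic(): Promise<void> {
  const monitor = await redis.monitor();
  const counts = new Map<string, number>();

  monitor.on("monitor", (_time: string, args: string[]) => {
    const command = String(args[0]).toUpperCase();
    counts.set(command, (counts.get(command) ?? 0) + 1);
  });

  setInterval(() => {
    console.log(new Date().toISOString(), Object.fromEntries(counts));
    counts.clear();
  }, 60_000);
}

watchIdleTraffic().catch(console.error);
```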

Edited Feb 1, 1:26, as I inadvertently lied in the original post. This actually does include the Bull queues (it turns out they were configured with the same connection strings as the other Redis configurations in our servers), so the difference makes more sense for sure. I will fix this and then see where the command usage lands once the Bull queues are off Upstash. We found this because our queues started acting funny once they were connected to Upstash instead of the deployed Redis container. (Original post left as is, since it's still relevant and good data to have.)

Hey @joshua-fly, just an update on this: I switched our (non-Bull-queue) Redis connections over to Upstash to test it out. Here are some stats for you (maybe it's just the way we are using it, which very well could be on us, but the pricing model doesn't really make sense).

This is for our dev environment (which has VERY low traffic, as there are only 1 or 2 people using it at a time, whereas production will have thousands, especially at high-traffic times in real-time areas like the auction/raffle systems), across roughly a 51-minute window:

  • Commands: 117K
  • Writes: 30,739
  • Reads: 86,407
  • Billable: 35,372
  • Empty: 81,774

Estimated total cost so far: about 5 cents (according to Upstash).

43,800 min/mo ÷ 51 min ≈ 859 windows, and 859 × $0.05 ≈ $43/mo.

That is only for the dev environment with minimal traffic; production will be MUCH higher (and we have a staging environment too). We were previously running single-node (on dev and staging) and multi-node (on production) ElastiCache on AWS on the smallest tiers, and they were totally fine performance-wise; we never had any issues.

Our whole AWS bill for Redis (across all 3 envs) was $63 before. Based on what I am seeing here (extrapolating, of course), our Upstash bill would be much, much higher. Also note: all of this assumes we are still running a separate Redis just for our Bull queues, whereas that AWS cost covered both the Bull queues and standard Redis caching.

I can spin up (and have) a DigitalOcean Redis instance that will serve our needs perfectly fine for significantly less than what Upstash will charge me. There are of course drawbacks to that, as it needs to be exposed publicly, or I can run our own Redis instance, which has its own drawbacks.

Of course I don't expect Fly or Upstash to change their pricing model because of this, but hopefully this provides some more context, as I would naturally love to have everything within our Fly WireGuard network.

I have yet to switch our production environments over to what I have deployed on Fly, as I have been working to figure out what I am going to do with Redis (and I am hoping for private Supabase Postgres!), but I would like to soon.

Hopefully this feedback is helpful!

Adding a +1.

Experienced the same with a newly hosted BullMQ + Upstash Redis configuration.

  • 1.4M commands issued in 7h.
  • Default settings.
  • Service does about 0.5 rps; all jobs complete in under 1 s.

The pricing model seems incompatible with Bull/BullMQ. Discussed here as well. Ideally I would love to have everything inside WireGuard!

Moved to Railway Redis for now.

I saw recently that Upstash released a Queues feature. I was thinking I might try to hack that into Bull somehow, but I haven't had the time.

1 Like

The Upstash team is working on a solution for this. They’ll post as soon as it’s ready.

3 Likes

:partying_face:

Wondering whether there are any updates on this?

I am also going to switch over to self-hosted today, as my BullMQ setup for a dev server with no usage (just polling) was at 3M requests/day (17 queues, 3 dev micro apps).

Still hunting down why the polling is so high, but in the meantime it's still more cost-effective to self-host.
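One avenue (untested on my side) is stretching out the worker's polling-related intervals; a rough sketch, assuming BullMQ's Worker options, with a placeholder queue name and connection:

```ts
import { Worker } from "bullmq";
import IORedis from "ioredis";

// Sketch: lengthen the intervals the worker uses while idle and see whether
// the command count drops. Queue name, URL, and values are placeholders, and
// I haven't verified how much these settings actually help.
const connection = new IORedis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null, // BullMQ workers require this to be null
});

const worker = new Worker(
  "my-queue",
  async (job) => {
    console.log("processing job", job.id);
  },
  {
    connection,
    drainDelay: 60,           // seconds to block waiting for jobs (default 5)
    stalledInterval: 300_000, // ms between stalled-job checks (default 30000)
  }
);
```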

I'm still having a conversation about the topic here on GitHub.

1 Like

I'm on this as well. I want to retain the IPv6, so I manually assigned an IPv6 address and used that address directly instead of the Flycast / .internal URLs, as I've found that usually resolves any issues.

It seems to know it exists now, but currently all connections time out:

Error: connect ETIMEDOUT

Did you hit the same issue at some point @yharaskrik?

Following this issue as well. My Upstash bill is crazy for just 2 simple queues.

Now Upstash Redis offers a $10/mo fixed price plan for this use case. Here’s the post about it: Upstash for Redis: New $10/mo, single region fixed price plan

Let us know if this works for you!

$10 is still too much for a simple pet project running Sidekiq, on top of constant timeouts/connection-reset errors. I'd rather pay that $10 to Fly instead of Upstash.

Reverted to the self-hosting option.

2 Likes

Hi, I can't get this working between a Nest app and a Redis app, both hosted on Fly.io; I've tried both Flycast and .internal. I've allocated a private IPv6 address to the Redis app, and basically I'm getting:

2024-10-12T12:25:40.950 app[287175df06e218] lhr [info] 323 2024-10-12 12:25:40.942 ERROR [ExceptionsHandler] Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details

Here is the fly.toml for the Redis app:

# fly.toml app configuration file generated for fluent-redis on 2024-10-12T13:54:52+02:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#

app = 'fluent-redis'
primary_region = 'lhr'

[build]
  image = 'flyio/redis:6.2.6'

[[mounts]]
  source = 'redis_server'
  destination = '/data'

[[services]]
  protocol = 'tcp'
  internal_port = 6379
  ports = []

[[vm]]
  size = 'shared-cpu-1x'

[[metrics]]
  port = 9091
  path = '/metrics'

I'm putting the following URL in my main app:

redis://default:****@fluent-redis.internal:6379?family=6 as REDIS_URL

And I get these logs in my app on startup:

2024-10-12T12:22:30.767 app[d8dd673fed7d38] lhr [info] [fluent-backend] 323 2024-10-12 12:22:30.767 LOG [RedisModule] default: the connection was successfully established +0ms

This is from @liaoliaots/nestjs-redis:

```ts
import { RedisModule } from "@liaoliaots/nestjs-redis";

RedisModule.forRoot({
  readyLog: true,
  config: {
    url: process.env.REDIS_URL,
  },
}),
```

But when actually trying to queue something with Bull, I get:

```
2024-10-12T12:25:40.950 app[287175df06e218] lhr [info] [fluent-backend] 323 2024-10-12 12:25:40.942 ERROR [ExceptionsHandler] Reached the max retries per request limit (which is 20). Refer to "maxRetriesPerRequest" option for details
```

I’m trying to access it from my backend through redis://default:password@fluent-redis.flycast:6379?family=6

I do get this at my backend startup, though… but there is no way to get the call to .add to my queue to not hang and throw this max-retry error:

2024-10-13 13:27:46.193 [fluent-backend] 323 2024-10-13 11:27:46.193 LOG [RedisModule] default: the connection was successfully established +0ms

Everything works in my local dev environment.

Thanks in advance for your help, I’ve tried asking in the discord, and on the ERROR [ExceptionsHandler] Reached the max retries per request limit (which is 20). · Issue #22 · fly-apps/redis · GitHub repo, but can’t quite get it to work…