Upstash - Redis not reachable sometimes


I have a Fly app that connects to an Upstash Redis (200M plan), multi-region (iad, ams). It works perfectly for a while and then all of a sudden it stops connecting. I’m using ioredis with the family: 6 option and the internal IPv6 address. Worth mentioning: staging has its own isolated environment and a separate Upstash instance (free plan, multi-region), and the connection issue happens at the same time even though they are two separate apps.

Are there any known issues with upstash? I’m out of ideas on my side.


Hey, can you post more details about the exceptions you see? Long-lived idle connections will time out after some time, but active connections should not, and new connections should be made without issue.

Thanks for your response!

So I don’t have any logs at the application level that I can see, but when I redeploy the application it starts working again for a while. So it feels like, as you say, something to do with long-lived connections.

I just added db: 0 to the ioredis options and am currently deploying again. Otherwise, I don’t have much to go on, unfortunately.

Update: no effect with db: 0

I have now connected to the VM and run redis-cli against the Upstash URL. While the app is still unable to connect, the VM can connect successfully. The main difference is that redis-cli uses the Upstash URL and not the internal IPv6 address. Could this be a proxy issue? I can see that there are other similar issues here: upstash redis timeouts - #35 by jsierles

OK - how can you tell the app is not connecting to Redis if there are no errors logged? Also, which one (CLI or app) is using the hostname versus IP address?

I now have this error logged: Error: Socket closed unexpectedly. I will try some of the things from here and see where it goes: SocketClosedUnexpectedlyError: Socket closed unexpectedly · Issue #2032 · redis/node-redis · GitHub. While the Node app gets this error, I can at the same time connect through the VM to Upstash (* URL).

OK, yeah. So that error is different from what you might see if making the connection failed. That error means an existing connection was closed by our proxy because your client was not sending or receiving data.

The issue you linked suggests that node-redis may not handle disconnects too well, though the latest version has some fixes in place. Which version are you on?

To reduce the chances you’ll see a disconnect, you can use pingInterval as documented here: node-redis/ at master · redis/node-redis · GitHub.

Our proxy should time out idle connections after 1 hour, but connections may be severed when our proxy gets deployed as well. Clients should be able to handle this, generally, and reconnect as needed.
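Since clients are expected to reconnect after the proxy severs an idle connection, here is a minimal sketch of how that can be made explicit with node-redis v4’s socket.reconnectStrategy option (the option name is real; the backoff numbers below are illustrative assumptions, not values from this thread):

```javascript
// Sketch: capped exponential backoff for node-redis v4's
// socket.reconnectStrategy option. Kept as a pure function so it
// can be reasoned about without a live Redis server.
function reconnectStrategy(retries) {
  // 100ms, 200ms, 400ms, ... capped at 5 seconds
  return Math.min(100 * 2 ** retries, 5000);
}

// Hypothetical usage (REDIS_URL is a placeholder):
// const { createClient } = require('redis');
// const client = createClient({
//   url: process.env.REDIS_URL,
//   socket: { reconnectStrategy },
// });

console.log(reconnectStrategy(0));  // 100
console.log(reconnectStrategy(3));  // 800
console.log(reconnectStrategy(10)); // 5000
```

Returning a number from reconnectStrategy tells the client how long to wait before the next attempt, so a dropped connection recovers on its own instead of surfacing as a hard failure.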

I can confirm this works now. I have the latest version of node-redis:

    this.cache = createClient({
      url: process.env.REDIS_URL,
      pingInterval: 4 * 60 * 1000,
      socket: {
        family: 6,
      },
    });

    this.cache.on('error', err => this.logger.error(err, 'Redis error'));
    this.cache.on('connect', () => this.logger.log('Redis is connected'));
    this.cache.on('reconnecting', () => this.logger.log('Redis is reconnecting'));
    this.cache.on('ready', () => this.logger.log('Redis is ready'));

    await this.cache.connect();

Also, I use the * URL. Is it recommended to use the internal IPv6 address instead? (I changed it while debugging.) I just want to know if I should move back to it.

I think this should be documented somewhere for node-redis, but also for ioredis, because I had the same problem with both clients.

Using the DNS entry should work fine. What did you do with ioredis to fix your issue?

I changed to node-redis; I haven’t investigated a fix for ioredis yet.
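For anyone hitting the same disconnects with ioredis, a hedged sketch of the equivalent knobs (retryStrategy, keepAlive, and family are real ioredis options; the specific values here are assumptions for illustration, not something verified in this thread):

```javascript
// Sketch: ioredis counterpart to node-redis's reconnect handling.
// retryStrategy returns a delay in ms before the next reconnect
// attempt; keeping it a standalone function makes it testable.
function retryStrategy(times) {
  // linear backoff, capped at 2 seconds
  return Math.min(times * 50, 2000);
}

// Hypothetical usage (REDIS_URL is a placeholder; family: 6 forces IPv6):
// const Redis = require('ioredis');
// const redis = new Redis(process.env.REDIS_URL, {
//   family: 6,
//   keepAlive: 10000, // initial delay before TCP keepalive probes, in ms
//   retryStrategy,
// });
// redis.on('error', (err) => console.error('Redis error', err));

console.log(retryStrategy(1));   // 50
console.log(retryStrategy(100)); // 2000
```

The TCP keepalive keeps an otherwise idle connection looking active to the proxy, and retryStrategy covers the case where the connection is severed anyway.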