DDoS / Postgres refusing to deploy

Hi there! I’ve been getting hit pretty much all day, and when I tried to enable scaling for my managed Postgres, I got stuck with a critical database status on the health checks despite changing the deployment twice now. What do I need to do to get it to deploy correctly? It’s kind of urgent, thanks!

In terms of the DDoS: I route directly through you guys, using Namecheap’s DNS. When I read over your security docs, they do say you handle DDoS attacks; however, from what I’ve seen, the endpoint that was getting hit never got filtered. Is layer 7 handled here?

Taking a look at the traffic to your apps, I’m not seeing the kind of volume that I’d expect from most DDoSes. What kinds of activity are you seeing from this attack?

With respect to your database, autoscaling isn’t supported for our postgres clusters. Would you mind sharing a little more context on what you were trying to do with the cluster, and how you did it?

I first scaled the count to 5, but the deployment was frozen, so I went ahead and restarted using fly restart.

After that I was receiving critical health checks and still no deployment.

Afterwards I tried to turn on auto scale, which it allowed me to do.

This only resolved itself after I scaled to 0 and back to 5. However, I still only have 2 instances shown.

In terms of the DDoS, it was only a 10 MB attack; however, my Postgres connections basically closed everything down, and I was struggling to find a way to scale it.

I was getting hit with this smaller DDoS, but it was definitely over a thousand requests a second. After scaling my service to 20+ VMs, Postgres couldn’t handle the connection count either, which is why I’m trying to figure out how to scale it horizontally. Thanks for the help.

If you check Prometheus for the past 24 hours, you can see spikes in both network and CPU usage. Despite the app being relatively small, the attack pretty much crushed any number of VMs spinning up, unfortunately.

Postgres won’t scale horizontally, unfortunately. For running more than about 2 VMs’ worth of connections, you’ll probably need 2GB+ of RAM, since Postgres connections eat up a bunch of RAM.

We have automated layer 3/4 DDoS protection, but nothing at layer 7. It’s probably worth putting your own rate limits in place, either with a separate nginx or directly in your app.

What framework/runtime are you using?

I’m using Laravel for my main app. How would I go about rate limiting at the Nginx level? Also, thanks for the tips on Postgres! I’d be happy to share anything about my setup or VMs if it meant being able to handle HTTP attacks. I’d like to not use Cloudflare and strictly use you guys, but I’m definitely seeing tenants on my VMs being attacked.

Not sure I replied to you directly since I’m on mobile; apologies for bumping this if I did. I’m assuming rate limiting at the Nginx level would do me more good than rate limiting with Laravel. Let me know what you would propose as a solution and I’d love to get it done.


You can rate limit in either layer (Nginx or Laravel).

One thing to keep in mind is that it’s easier (and still works great for most DoS attacks on Fly) to rate limit per VM. That way you don’t need to run a Redis instance or anything extra just for rate limiting, i.e. some central storage that tracks requests across every VM in every region.

Here’s how to do that in Nginx: NGINX Rate Limiting
You can set a custom header to rate limit on; in this case you’ll want to use the fly-client-ip header:

limit_req_zone $http_fly_client_ip zone=zone:16m rate=10r/s;

(You’ll want to decide on the actual numbers you use for your config).

For Laravel, you can use the rate limiter middleware and configure it as documented. I would use the file-based cache driver as the store backing the middleware, so the counts stay local to each VM.
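A minimal sketch of what that could look like, assuming Laravel 8+; the limiter name per-vm and the numbers are placeholders to adjust for your app:

    // app/Providers/AppServiceProvider.php, in the boot() method
    use Illuminate\Cache\RateLimiting\Limit;
    use Illuminate\Support\Facades\RateLimiter;

    RateLimiter::for('per-vm', function ($request) {
        // Fly puts the real client IP in the fly-client-ip header;
        // fall back to the direct peer address if it's missing.
        $ip = $request->header('fly-client-ip', $request->ip());
        return Limit::perMinute(600)->by($ip);
    });

    // routes/web.php: apply the limiter via the throttle middleware
    Route::middleware('throttle:per-vm')->group(function () {
        // ...your routes
    });

Because the throttle middleware stores hit counts in the cache, setting CACHE_DRIVER=file keeps the counters per-VM rather than shared, which matches the per-VM approach above.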

(I’d prefer Nginx for this use case myself, but use whatever works best for you!)


Would I be putting this limit_req_zone inside the server section in my default Nginx conf with the Laravel preset for Fly? Sorry for the hand-holding, but I’ve definitely never edited Nginx configs much. Thanks!

Definitely review the docs I linked: NGINX Rate Limiting

(because there’s more to it than what I pasted).

You would put that in the Nginx configuration file found in docker/nginx-default (or the Swoole version if you use Swoole) in your code base, which the fly launch command created for you. (The exact file to edit depends a bit on when you ran fly launch, as it’s changed over time.)

The Nginx config customization that goes in the location {} block in those docs above can go into any location block used there (there are 2 by default: location / {} and location ~ \.php$ {}).

When I tried adding limit_req_zone into my main location block, however, it said the directive was not allowed. The error I was getting:

nginx: [emerg] "limit_req_zone" directive is not allowed here in /etc/nginx/sites-enabled/default:22

I then moved the limit_req_zone outside of the server block and it seems to work. Is this how I’m intended to register the limit_req zone? I also added the limit_req directive to enable it in the default location block, and that seems to work, or at least it doesn’t error.

I’m a bit unsure if I’m setting this up right by moving the limit_req_zone directive out of the nginx-default server block.

Looks like limit_req_zone only goes in the http {} context (as per Module ngx_http_limit_req_module; note how each configuration item has a Context, which lists the config blocks that directive is allowed within).

So, in the nginx configuration file, I believe it needs to be:

    # This goes outside the server {} block, in the http {} context
    limit_req_zone $http_fly_client_ip zone=whatever:16m rate=10r/s;

    server {
        # stuff omitted
        location / {
            limit_req zone=whatever;
            try_files ...;
        }

        # Do the same within the `location ~ \.php$ {}` block
    }

Thanks for the help and explanation as always! I appreciate the life-saving support you give in the Laravel sector of Fly.
