How Nginx Rate Limiting Shields Your Server from Brute Force Attacks

By: Advaith B G

Web applications are constantly under threat from automated scripts and malicious actors. One of the most common attack vectors is the brute force attack, where an attacker repeatedly tries different credentials until they find a valid combination. These attacks not only pose a security risk but can also overwhelm your server resources, leading to performance degradation or even downtime.

Thankfully, if you’re running your website behind Nginx, you already have powerful tools at your disposal to mitigate brute force attempts. One such feature is rate limiting.

In this blog, we’ll explore how to configure Nginx rate limiting step-by-step, with examples and best practices to protect your application.

Why Rate Limiting?

Before diving into configurations, let’s quickly recap why rate limiting matters:

  • Mitigates brute force login attempts: Prevents attackers from trying thousands of password combinations.
  • Protects server resources: Stops abusive clients from consuming excessive CPU or bandwidth.
  • Improves fairness: Ensures no single client hogs your resources.
  • Enhances security posture: Forms part of a layered security strategy alongside firewalls, WAFs, and monitoring.

Nginx Rate Limiting Directives

Nginx provides two main modules for rate limiting:

  1. limit_req_zone and limit_req – Limits the rate of requests (requests per second/minute).
  2. limit_conn_zone and limit_conn – Limits the number of simultaneous connections.

For brute force prevention, we primarily use request limiting since login attempts are discrete requests.
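For completeness, connection limiting follows the same pattern using the ngx_http_limit_conn_module directives. A minimal sketch (the zone name and limit here are illustrative):

http {
    # Track concurrent connections per client IP in a 10MB zone
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen 80;
        server_name example.com;

        location /downloads/ {
            # Allow at most 2 simultaneous connections per IP
            limit_conn perip 2;
        }
    }
}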

Step 1: Define a Rate Limiting Zone

The first step is defining a shared memory zone that will track client request states. This is done in the http block of your Nginx configuration.

http {
    # Define a zone called "one" that tracks requests by client IP
    # Uses 10MB of memory, enough for around 160,000 states
    limit_req_zone $binary_remote_addr zone=one:10m rate=5r/m;

    server {
        listen 80;
        server_name example.com;

        location /login {
            # Apply the "one" limit zone here
            limit_req zone=one burst=10 nodelay;
            proxy_pass http://backend;
        }
    }
}

Breakdown

  • $binary_remote_addr – Key used to identify clients (based on IP).
  • zone=one:10m – Creates a zone named one with 10MB storage.
  • rate=5r/m – Limits each IP to 5 requests per minute.
  • burst=10 – Allows up to 10 requests above the defined rate to be accepted before Nginx starts rejecting further requests.
  • nodelay – Serves requests within the burst immediately instead of queuing them down to the 5 requests/minute pace; anything beyond the burst is rejected right away (see the variant sketched below).
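If you omit nodelay, Nginx queues the burst requests and releases them at the configured rate instead of rejecting them outright. Nginx 1.15.7 and later also supports two-stage limiting via the delay parameter. A minimal sketch of both variants (values illustrative):

location /login {
    # Without nodelay: up to 10 excess requests are queued and
    # released at 5 requests/minute; anything beyond that is rejected
    limit_req zone=one burst=10;

    # Two-stage variant (Nginx 1.15.7+): the first 5 excess requests are
    # served immediately, the next 5 are delayed to the configured rate,
    # and anything beyond the burst is rejected
    # limit_req zone=one burst=10 delay=5;

    proxy_pass http://backend;
}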

Step 2: Testing the Rate Limit

You can use curl or a tool like ab (ApacheBench) to test:

# Simulate multiple login attempts
for i in {1..20}; do curl -I http://example.com/login; done
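The same check with ApacheBench, assuming ab is installed (the "Non-2xx responses" line in its summary shows how many requests the limit rejected):

# Send 20 requests, 5 at a time, against the rate-limited endpoint
ab -n 20 -c 5 http://example.com/login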

Once the limit is exceeded, Nginx will return 503 Service Unavailable by default:

HTTP/1.1 503 Service Temporarily Unavailable
Server: nginx
Date: Thu, 04 Sep 2025 12:00:00 GMT
Content-Type: text/html
Content-Length: 207
Connection: close

Step 3: Custom Error Page for Blocked Requests

Instead of showing a generic 503 error, you may want to display a custom error message or redirect users.

server {
    listen 80;
    server_name example.com;

    # Map the 503 produced by limit_req to a named location; the "="
    # lets the response code come from that location (429 here)
    error_page 503 = @custom_limit;

    location @custom_limit {
        return 429 "Too many requests. Please try again later.";
    }

    location /login {
        limit_req zone=one burst=10 nodelay;
        proxy_pass http://backend;
    }
}
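If you only need a different status code and not a custom body, the limit_req_status directive (available since Nginx 1.3.15) is a simpler alternative:

location /login {
    limit_req zone=one burst=10 nodelay;
    # Return 429 Too Many Requests instead of the default 503
    limit_req_status 429;
    proxy_pass http://backend;
}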

Step 4: Fine-Tuning the Rate Limits

A few tips for tuning:

  • Login endpoints: Use strict limits (e.g., 5 requests/minute).
  • APIs: Set reasonable per-IP limits to avoid abuse; a separate, looser zone per endpoint type works well (see the sketch below).
  • Static assets (CSS/JS/images): Usually don’t need rate limiting.
  • Trusted IPs: Use allow/deny to restrict admin-only endpoints to known IPs, or exempt trusted addresses from the rate limit entirely (both shown below).
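A rough sketch of per-endpoint limits (the zone names and rates are illustrative, not prescriptive):

http {
    # Strict zone for login attempts, looser zone for API traffic
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    server {
        listen 80;
        server_name example.com;

        location /login {
            limit_req zone=login burst=10 nodelay;
            proxy_pass http://backend;
        }

        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://backend;
        }

        # Static assets are served without any rate limit
        location /static/ {
            root /var/www/example;
        }
    }
}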

Example of restricting the login endpoint to a trusted IP (deny all blocks every other client, so this suits admin-only endpoints rather than a public login):

location /login {
    allow 192.168.1.100;   # Trusted admin IP
    deny all;              # Every other client receives 403 Forbidden
    limit_req zone=one burst=10 nodelay;
    proxy_pass http://backend;
}
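To keep a login page public but exempt trusted addresses from the rate limit itself, a common pattern is to build the limiting key with geo and map so that whitelisted IPs produce an empty key, which Nginx never limits. A minimal sketch with illustrative addresses (this limit_req_zone line replaces the one defined earlier):

http {
    # Trusted IPs get $limit = 0, everyone else gets 1
    geo $limit {
        default        1;
        192.168.1.100  0;
    }

    # Trusted IPs map to an empty key, which is exempt from limiting
    map $limit $limit_key {
        0 "";
        1 $binary_remote_addr;
    }

    limit_req_zone $limit_key zone=one:10m rate=5r/m;
}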

Security Considerations

  • Use HTTPS – Always secure login endpoints with TLS to prevent credential sniffing.
  • Don’t rely solely on rate limiting – Combine with strong passwords, CAPTCHAs, MFA, and intrusion detection.
  • Monitor logs – Keep an eye on /var/log/nginx/access.log and /var/log/nginx/error.log for suspicious activity (see the quick check after this list).
  • Balance usability vs. security – Don’t make limits so strict that real users get blocked unnecessarily.
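Rate-limit rejections are written to the error log. A quick way to see which clients are hitting the limit, assuming the default log location and format:

# Count rate-limited requests per client IP
grep "limiting requests" /var/log/nginx/error.log \
  | grep -o 'client: [0-9.]*' \
  | sort | uniq -c | sort -rn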

Rate limiting in Nginx is an effective first line of defense against brute force attacks. By restricting the number of login attempts per IP and rejecting excessive requests, you greatly reduce the attack surface while keeping your server responsive.

To recap, we covered:

  • The importance of rate limiting for brute force protection.
  • How to configure limit_req_zone and limit_req.
  • Custom error handling and IP whitelisting.
  • Testing and fine-tuning limits for different endpoints.

With just a few configuration lines, you can harden your application significantly. Security is about layers, and Nginx rate limiting is one of the most efficient layers you can implement today.

To read more, refer to our blog How to Cache API Responses with Nginx for Faster Performance.




