API Rate Limiting and DDoS Protection with Express
Comprehensive guide to implementing rate limiting, request throttling, and DDoS protection strategies for Node.js Express APIs in production.
Rate limiting prevents abuse, ensures fair resource allocation, and protects against brute-force attacks and DDoS attempts. This guide covers implementation strategies from basic to enterprise-grade protection.
Why Rate Limiting Matters
Without rate limiting, a single attacker can exhaust server resources, degrading service for legitimate users. Rate limiting enforces quotas per IP, user, or endpoint, ensuring predictable resource consumption.
Prerequisites
- Node.js 18+ with Express
- Redis (for distributed systems)
- Basic understanding of HTTP headers
- Postman or similar for testing
Step 1: Install Dependencies
npm install express-rate-limit helmet cors
npm install -D typescript @types/express @types/node
(express-rate-limit ships with its own TypeScript type definitions, so a separate @types/express-rate-limit package is not needed.)
For production with Redis:
npm install redis rate-limit-redis
Step 2: Basic IP-Based Rate Limiting
Create src/middleware/rateLimit.ts:
import rateLimit from 'express-rate-limit';
import { Request, Response } from 'express';
// Basic in-memory store (suitable for single server)
export const basicLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // limit each IP to 100 requests per windowMs
message: 'Too many requests from this IP, please try again later.',
standardHeaders: true, // Return rate limit info in `RateLimit-*` headers
legacyHeaders: false, // Disable `X-RateLimit-*` headers
skip: (req: Request) => {
// Skip rate limiting for health checks
return req.path === '/health';
},
keyGenerator: (req: Request) => {
// req.ip already reflects X-Forwarded-For when 'trust proxy' is set (see Step 6)
return req.ip || req.socket.remoteAddress || 'unknown';
},
handler: (req: Request, res: Response) => {
res.status(429).json({
error: 'Too many requests',
retryAfter: req.rateLimit?.resetTime,
});
},
});
// Stricter limit for login attempts
export const loginLimiter = rateLimit({
windowMs: 15 * 60 * 1000,
max: 5, // 5 attempts per 15 minutes
message: 'Too many login attempts, please try again later.',
skipSuccessfulRequests: true, // Don't count successful requests
skipFailedRequests: false,
});
// Strict limit for password reset
export const passwordResetLimiter = rateLimit({
windowMs: 60 * 60 * 1000, // 1 hour
max: 3,
message: 'Too many password reset attempts, please try again later.',
});
// API endpoint protection
export const apiLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 30,
message: 'Too many API requests from this IP, please try again later.',
});
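These limiters are ordinary Express middleware, so they can be applied globally or per route. A minimal wiring sketch with placeholder handlers (Step 6 shows the full application setup):
import express from 'express';
import { basicLimiter, loginLimiter, passwordResetLimiter } from './middleware/rateLimit';

const app = express();
app.use(express.json());

// Global limit applied to every route
app.use(basicLimiter);

// Tighter, endpoint-specific limits layered on top
app.post('/auth/login', loginLimiter, (req, res) => {
  res.json({ message: 'Logged in' }); // placeholder handler
});

app.post('/auth/password-reset', passwordResetLimiter, (req, res) => {
  res.json({ message: 'Reset email sent' }); // placeholder handler
});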
Step 3: Advanced Redis-Based Rate Limiting
For distributed systems, use Redis:
Create src/middleware/redisRateLimit.ts:
import rateLimit from 'express-rate-limit';
import RedisStore from 'rate-limit-redis';
import { createClient } from 'redis';
const redisClient = createClient({
socket: {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
},
password: process.env.REDIS_PASSWORD,
});
redisClient.on('error', (err) => console.error('Redis client error:', err));
redisClient.connect().catch((err) => console.error('Redis connection failed:', err));
// Production-grade limiter with Redis store
export const productionLimiter = rateLimit({
store: new RedisStore({
// rate-limit-redis v3+ expects a sendCommand function rather than a client instance
sendCommand: (...args: string[]) => redisClient.sendCommand(args),
prefix: 'rl:', // Rate limit key prefix
}),
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100,
standardHeaders: true,
legacyHeaders: false,
// Skip certain paths
skip: (req) => req.path === '/health' || req.path === '/status',
});
// Per-user rate limiting (for authenticated requests)
export const userRateLimiter = rateLimit({
store: new RedisStore({
sendCommand: (...args: string[]) => redisClient.sendCommand(args),
prefix: 'user-rl:',
}),
windowMs: 60 * 60 * 1000, // 1 hour
max: 1000,
keyGenerator: (req) => {
// Use user ID instead of IP for authenticated requests
return req.user?.id || req.ip || 'anonymous';
},
skip: (req) => !req.user, // Only apply to authenticated users
});
// Endpoint-specific protection
export const expensiveOperationLimiter = rateLimit({
store: new RedisStore({
sendCommand: (...args: string[]) => redisClient.sendCommand(args),
prefix: 'expensive:',
}),
windowMs: 60 * 1000, // 1 minute
max: 10, // Strict limit
message: 'This operation is rate limited. Please try again later.',
});
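Note that userRateLimiter reads req.user, which Express's default Request type does not declare. If you are using TypeScript, add a small declaration-merging file; the user shape below is an assumption, so adjust it to whatever your authentication middleware actually attaches.
Create src/types/express.d.ts:
// Hypothetical user shape; match it to your auth layer
declare global {
  namespace Express {
    interface Request {
      user?: {
        id: string;
      };
    }
  }
}

export {};
Make sure this file is covered by your tsconfig include patterns so the compiler picks it up.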
Step 4: Custom Rate Limit Strategy
Create sliding window implementation (src/utils/slidingWindow.ts):
import { getRedis } from '../lib/redis';
interface RateLimitConfig {
key: string;
limit: number;
windowMs: number;
}
export async function checkRateLimit(config: RateLimitConfig): Promise<{
allowed: boolean;
current: number;
limit: number;
resetTime: Date;
}> {
const redis = getRedis();
const now = Date.now();
const windowStart = now - config.windowMs;
const key = `sliding:${config.key}`;
// Remove entries that fall outside the window (node-redis v4 uses camelCase commands)
await redis.zRemRangeByScore(key, 0, windowStart);
// Get current count
const current = await redis.zCard(key);
if (current < config.limit) {
// Add current request
await redis.zAdd(key, { score: now, value: `${now}-${Math.random()}` });
// Set expiry so idle keys get cleaned up
await redis.expire(key, Math.ceil(config.windowMs / 1000));
const resetTime = new Date(now + config.windowMs);
return {
allowed: true,
current: current + 1,
limit: config.limit,
resetTime,
};
}
// Get oldest entry to calculate reset time
const oldest = await redis.zRangeWithScores(key, 0, 0);
const oldestTime = oldest[0]?.score || now;
const resetTime = new Date(oldestTime + config.windowMs);
return {
allowed: false,
current,
limit: config.limit,
resetTime,
};
}
export async function resetRateLimit(key: string): Promise<void> {
const redis = getRedis();
await redis.del(`sliding:${key}`);
}
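The sliding-window code imports getRedis from ../lib/redis, which this guide has not defined. Here is a minimal sketch of that helper using the redis (node-redis v4) package installed in Step 1; v4 exposes camelCase commands such as zAdd and zRangeWithScores, which is what the functions above call.
Create src/lib/redis.ts:
import { createClient } from 'redis';

type RedisClient = ReturnType<typeof createClient>;

let client: RedisClient | null = null;

export function getRedis(): RedisClient {
  if (!client) {
    client = createClient({
      socket: {
        host: process.env.REDIS_HOST || 'localhost',
        port: parseInt(process.env.REDIS_PORT || '6379', 10),
      },
      password: process.env.REDIS_PASSWORD,
    });
    client.on('error', (err) => console.error('Redis client error:', err));
    // Start connecting in the background; callers await individual commands
    client.connect().catch((err) => console.error('Redis connection failed:', err));
  }
  return client;
}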
Step 5: DDoS Detection Middleware
Create src/middleware/ddosDetection.ts:
import { Request, Response, NextFunction } from 'express';
import { getRedis } from '../lib/redis';
interface IPStats {
requestCount: number;
lastSeen: number;
blocked: boolean;
}
const DETECTION_CONFIG = {
windowMs: 60 * 1000, // 1 minute
requestsPerMinute: 1000,
blockDurationMs: 15 * 60 * 1000, // 15 minutes
};
export async function ddosDetection(
req: Request,
res: Response,
next: NextFunction
) {
const redis = getRedis();
const ip = req.ip || req.socket.remoteAddress || 'unknown';
const statsKey = `ddos:${ip}`;
const blockKey = `blocked:${ip}`;
try {
// Check if IP is blocked
const isBlocked = await redis.get(blockKey);
if (isBlocked) {
return res.status(429).json({
error: 'Your IP has been temporarily blocked due to suspicious activity.',
});
}
// Get current stats
const stats = await redis.get(statsKey);
const currentStats: IPStats = stats ? JSON.parse(stats) : { requestCount: 0, lastSeen: 0, blocked: false };
const now = Date.now();
// Reset if outside window
if (now - currentStats.lastSeen > DETECTION_CONFIG.windowMs) {
currentStats.requestCount = 0;
}
currentStats.requestCount++;
currentStats.lastSeen = now;
// Detect attack pattern
if (currentStats.requestCount > DETECTION_CONFIG.requestsPerMinute) {
console.warn(`[DDoS] Blocking IP ${ip} - ${currentStats.requestCount} requests/min`);
// Block this IP
await redis.setEx(
blockKey,
Math.ceil(DETECTION_CONFIG.blockDurationMs / 1000),
'true'
);
// Alert (integrate with your monitoring)
logSecurityEvent({
type: 'DDoS_DETECTED',
ip,
requestCount: currentStats.requestCount,
});
return res.status(429).json({
error: 'Too many requests. Your IP has been temporarily blocked.',
});
}
// Update stats
await redis.setEx(statsKey, Math.ceil(DETECTION_CONFIG.windowMs / 1000), JSON.stringify(currentStats));
// Add headers
res.setHeader('X-RateLimit-Remaining', DETECTION_CONFIG.requestsPerMinute - currentStats.requestCount);
next();
} catch (error) {
console.error('DDoS detection error:', error);
next(); // Continue on error, don't block users
}
}
function logSecurityEvent(event: any) {
// Implement your logging/monitoring here
console.log('[SECURITY]', JSON.stringify(event));
}
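False positives happen, for example when many users share one corporate NAT, so operators need a way to lift a block before it expires. A small helper that assumes the same blocked: and ddos: key names used above:
Create src/utils/unblockIp.ts:
import { getRedis } from '../lib/redis';

// Lift a temporary block and clear the accumulated per-IP stats
export async function unblockIp(ip: string): Promise<void> {
  const redis = getRedis();
  await redis.del(`blocked:${ip}`);
  await redis.del(`ddos:${ip}`);
}

// Check whether an IP is currently blocked
export async function isIpBlocked(ip: string): Promise<boolean> {
  const redis = getRedis();
  return (await redis.exists(`blocked:${ip}`)) === 1;
}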
Step 6: Implement in Express App
Create main app with rate limiting (src/index.ts):
import express from 'express';
import helmet from 'helmet';
import cors from 'cors';
import { basicLimiter, loginLimiter, passwordResetLimiter, apiLimiter } from './middleware/rateLimit';
import { ddosDetection } from './middleware/ddosDetection';
import { authenticateToken } from './middleware/auth';
import authRoutes from './routes/auth';
import apiRoutes from './routes/api';
const app = express();
// Security middleware
app.use(helmet());
app.use(cors());
// Trust proxy (important for rate limiting behind reverse proxy)
app.set('trust proxy', 1);
// DDoS Detection - first line of defense
app.use(ddosDetection);
// Body parsing
app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ extended: true, limit: '10mb' }));
// Global rate limiter
app.use(basicLimiter);
// Health check (excluded from rate limiting)
app.get('/health', (req, res) => {
res.json({ status: 'ok' });
});
// Status endpoint
app.get('/status', (req, res) => {
res.json({ status: 'operational' });
});
// Authentication routes with stricter rate limiting
app.post('/auth/register', loginLimiter, async (req, res) => {
// Registration logic
res.json({ message: 'User registered' });
});
app.post('/auth/login', loginLimiter, async (req, res) => {
// Login logic
res.json({ message: 'Logged in successfully' });
});
app.post('/auth/password-reset', passwordResetLimiter, async (req, res) => {
// Password reset logic
res.json({ message: 'Reset email sent' });
});
// API routes
app.use('/api/', authenticateToken, apiLimiter, apiRoutes);
// Error handling
app.use((err: any, req: express.Request, res: express.Response, next: express.NextFunction) => {
console.error('Error:', err);
if (err.status === 429) {
return res.status(429).json({
error: 'Too many requests. Please try again later.',
retryAfter: err.retryAfter,
});
}
res.status(500).json({
error: 'Internal server error',
});
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
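The app imports authenticateToken from ./middleware/auth, which is outside the scope of this guide. For completeness, here is a minimal JWT-flavoured sketch; it assumes jsonwebtoken is installed (it is not part of Step 1) and a JWT_SECRET environment variable is set, and it is not a substitute for a real authentication layer.
Create src/middleware/auth.ts:
import { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET || 'replace-me-in-production';

export function authenticateToken(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization;
  const token = header?.startsWith('Bearer ') ? header.slice(7) : undefined;

  if (!token) {
    return res.status(401).json({ error: 'Missing authentication token' });
  }

  try {
    // Attach the decoded user so per-user limiters can key on req.user.id
    const payload = jwt.verify(token, JWT_SECRET) as { id: string };
    req.user = { id: payload.id };
    next();
  } catch {
    return res.status(401).json({ error: 'Invalid or expired token' });
  }
}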
Step 7: Custom Rate Limit Middleware
Create flexible middleware factory (src/middleware/customRateLimit.ts):
import { Request, Response, NextFunction } from 'express';
import { checkRateLimit } from '../utils/slidingWindow';
interface RateLimitOptions {
limit: number;
windowMs: number;
keyGenerator?: (req: Request) => string;
message?: string;
skipCondition?: (req: Request) => boolean;
}
export function createRateLimiter(options: RateLimitOptions) {
const {
limit,
windowMs,
keyGenerator = (req) => req.ip || 'unknown',
message = 'Too many requests',
skipCondition = () => false,
} = options;
return async (req: Request, res: Response, next: NextFunction) => {
// Skip rate limiting for certain conditions
if (skipCondition(req)) {
return next();
}
try {
const key = keyGenerator(req);
const result = await checkRateLimit({
key,
limit,
windowMs,
});
// Add headers
res.setHeader('X-RateLimit-Limit', result.limit);
res.setHeader('X-RateLimit-Remaining', Math.max(0, result.limit - result.current));
res.setHeader('X-RateLimit-Reset', result.resetTime.toISOString());
if (!result.allowed) {
return res.status(429).json({
error: message,
retryAfter: Math.ceil((result.resetTime.getTime() - Date.now()) / 1000),
});
}
next();
} catch (error) {
console.error('Rate limit check failed:', error);
next(); // Continue on error
}
};
}
// Usage examples:
export const strictLimiter = createRateLimiter({
limit: 5,
windowMs: 60 * 1000,
message: 'Too many requests. Please try again later.',
});
export const moderateLimiter = createRateLimiter({
limit: 50,
windowMs: 60 * 1000,
});
export const userSpecificLimiter = createRateLimiter({
limit: 100,
windowMs: 60 * 60 * 1000,
keyGenerator: (req) => req.user?.id || req.ip || 'anonymous',
skipCondition: (req) => !req.user, // Only for authenticated users
});
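These factory-built limiters plug into routes exactly like the express-rate-limit ones. An illustrative wiring, with hypothetical endpoint paths:
// src/routes/examples.ts (hypothetical)
import express from 'express';
import { strictLimiter, moderateLimiter, userSpecificLimiter } from '../middleware/customRateLimit';

const router = express.Router();

// Expensive report generation: 5 requests per minute per IP
router.post('/reports/generate', strictLimiter, (req, res) => {
  res.json({ status: 'queued' });
});

// Regular search traffic: 50 requests per minute per IP
router.get('/search', moderateLimiter, (req, res) => {
  res.json({ results: [] });
});

// Authenticated bulk export: 100 requests per hour per user
router.post('/export', userSpecificLimiter, (req, res) => {
  res.json({ status: 'started' });
});

export default router;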
Step 8: Monitoring and Analytics
Create rate limit monitoring (src/services/rateLimitMonitoring.ts):
import { getRedis } from '../lib/redis';
export async function getRateLimitStats(key: string) {
const redis = getRedis();
const stats = await redis.get(`rl:${key}`);
return stats ? JSON.parse(stats) : null;
}
export async function getTopBlockedIPs(limit: number = 10) {
const redis = getRedis();
// Scan one batch of blocked-IP keys (node-redis v4 returns { cursor, keys });
// a production version should keep scanning until the cursor comes back to 0
const { keys } = await redis.scan(0, {
MATCH: 'blocked:*',
COUNT: 100,
});
return keys.slice(0, limit);
}
export async function getAPIUsageStats() {
const redis = getRedis();
// Get all rate limit keys (KEYS is O(N) and blocks Redis; prefer SCAN on large keyspaces)
const keys = await redis.keys('rl:*');
const stats: Record<string, any> = {};
for (const key of keys) {
const data = await redis.get(key);
if (data) {
stats[key] = JSON.parse(data);
}
}
return stats;
}
export function formatRateLimitResponse(error: Error) {
if (error.message.includes('Too many requests')) {
return {
status: 429,
error: 'Rate limit exceeded',
message: 'Please try again later',
};
}
return null;
}
| Protection Type | Purpose | Threshold |
|---|---|---|
| IP-based Rate Limiting | Prevent per-IP abuse | 100 req/min |
| User-based Throttling | Limit authenticated users | 1000 req/hour |
| Endpoint-specific Rules | Protect expensive operations | 10 req/min |
| Sliding Window Counter | Smooth limits across window boundaries | Variable |
| DDoS Detection | Identify attack patterns | Auto-block |
Best Practices
Identify Legitimate Traffic: Whitelist health checks and monitoring
skip: (req) => ['/health', '/metrics', '/status'].includes(req.path)
Use X-Forwarded-For Behind Proxy:
app.set('trust proxy', 1); // Trust single proxy
// Or be specific:
app.set('trust proxy', 'loopback'); // For local testing
Implement Graduated Response (see the sketch after this list):
// Warn at ~70% of the limit
// Throttle at ~85% of the limit
// Block once the limit is exceeded
Monitor for Attack Patterns: Track IP reputation
const suspiciousIPs = await getTopBlockedIPs(10);
Communicate Limits to Clients: Clear headers and messages
res.setHeader('RateLimit-Limit', 100);
res.setHeader('RateLimit-Remaining', 45);
res.setHeader('RateLimit-Reset', 60); // seconds until the window resets, per the IETF draft
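The graduated-response idea from the list above can be built on the sliding-window checker from Step 4. A sketch using the 70%/85% thresholds mentioned there; the 500 ms throttle delay is an arbitrary choice:
// src/middleware/graduatedResponse.ts: sketch of the warn/throttle/block tiers
import { Request, Response, NextFunction } from 'express';
import { checkRateLimit } from '../utils/slidingWindow';

const LIMIT = 100;
const WINDOW_MS = 60 * 1000;

export async function graduatedLimiter(req: Request, res: Response, next: NextFunction) {
  try {
    const result = await checkRateLimit({
      key: `graduated:${req.ip || 'unknown'}`,
      limit: LIMIT,
      windowMs: WINDOW_MS,
    });

    if (!result.allowed) {
      // Block: the limit has been exceeded
      return res.status(429).json({ error: 'Too many requests. Please try again later.' });
    }

    const usage = result.current / result.limit;

    if (usage >= 0.85) {
      // Throttle: slow the client down instead of rejecting outright
      await new Promise((resolve) => setTimeout(resolve, 500));
    } else if (usage >= 0.7) {
      // Warn: signal that the client is approaching its quota
      res.setHeader('X-RateLimit-Warning', 'Approaching rate limit');
    }

    next();
  } catch (error) {
    console.error('Graduated limiter error:', error);
    next(); // Fail open so a Redis outage does not take the API down
  }
}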
Testing Rate Limits
Test with Apache Bench:
# 100 requests total, 10 concurrent
ab -n 100 -c 10 http://localhost:3000/api/endpoint
Test with wrk:
# Load testing
wrk -t4 -c100 -d30s http://localhost:3000/api/endpoint
Manual testing with curl:
# Watch rate limit headers
for i in {1..20}; do
curl -i http://localhost:3000/api/endpoint 2>&1 | grep RateLimit
sleep 0.1
done
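Rate limits can also be covered by automated tests. Here is a sketch using jest and supertest (neither is part of the Step 1 install, so add them as dev dependencies); it builds a tiny app around loginLimiter so no Redis or running server is needed:
// tests/loginLimiter.test.ts
import express from 'express';
import request from 'supertest';
import { loginLimiter } from '../src/middleware/rateLimit';

const app = express();
app.use(express.json());
app.post('/auth/login', loginLimiter, (req, res) => {
  // Always fail so skipSuccessfulRequests does not exempt these attempts
  res.status(401).json({ error: 'Invalid credentials' });
});

describe('loginLimiter', () => {
  it('returns 429 after five failed attempts in the window', async () => {
    for (let i = 0; i < 5; i++) {
      const res = await request(app).post('/auth/login').send({});
      expect(res.status).toBe(401);
    }

    const blocked = await request(app).post('/auth/login').send({});
    expect(blocked.status).toBe(429);
  });
});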
Useful Resources
- express-rate-limit Documentation
- OWASP Rate Limiting
- Redis Rate Limiting Patterns
- DDoS Mitigation Guide
Conclusion
Implement rate limiting in layers: IP-based, user-based, and endpoint-specific. Use Redis for distributed deployments, monitor for attack patterns, and communicate limits clearly to clients. Layered protection keeps your API safe from abuse while preserving a good experience for legitimate users.