## Overview

Rate limiting protects your Sockudo server from abuse and ensures fair resource allocation across applications. It provides per-application limits for HTTP API requests, preventing any single app from overwhelming the server.

Key benefits:

- Prevents API abuse and DoS attacks
- Ensures fair resource allocation
- Configurable per-application limits
- Multiple backend drivers (memory, Redis, Redis Cluster)
## Quick Start

### Basic Configuration

`config/config.json`:
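A minimal example, assuming the rate limiter options live under a top-level `rate_limiter` key (the option names come from the Configuration section below; the key layout is an assumption):

```json
{
  "rate_limiter": {
    "enabled": true,
    "driver": "memory"
  }
}
```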
### Environment Variable
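Rate limiting can also be configured via environment variables; the variable names below are assumptions, so confirm them against your version's configuration reference:

```shell
# Hypothetical variable names: check the Sockudo configuration reference.
export RATE_LIMITER_ENABLED=true
export RATE_LIMITER_DRIVER=memory
```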
## Configuration

### Global Settings

| Option | Values | Description |
|---|---|---|
| `enabled` | `true` / `false` | Enable or disable rate limiting globally |
| `driver` | `memory`, `redis`, `redis-cluster`, `none` | Rate limiter backend |
### Per-App Limits

Configure rate limits for individual applications:

| Option | Default | Description |
|---|---|---|
| `max_client_events_per_second` | 1000 | Maximum events per second per app |
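In configuration this might look like the following sketch (the surrounding `apps` array layout and the app ID are assumptions; only `max_client_events_per_second` comes from the table above):

```json
{
  "apps": [
    { "id": "demo-app", "max_client_events_per_second": 1000 }
  ]
}
```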
## Rate Limiter Drivers

### Memory Driver (Default)

Best for: single-node deployments, development, and testing.

- In-memory counters (fast)
- No external dependencies
- Per-node limits (not shared across instances)
- Counters lost on server restart

Use the memory driver when:

- Running a single Sockudo instance
- Developing or testing
- Serving low-traffic applications
### Redis Driver

Best for: multi-node deployments with shared limits.

- Shared counters across all nodes
- Cluster-wide rate limits
- Persistent across restarts (with Redis persistence)
- Slightly higher latency (~1-2 ms)

Use the Redis driver when:

- Running multiple Sockudo instances
- You need cluster-wide rate limits
- Running production deployments
### Redis Cluster Driver

Best for: high-availability deployments with Redis Cluster.

- Distributed across Redis Cluster
- High availability
- Automatic failover
- Horizontal scalability

Use the Redis Cluster driver when you have:

- High-availability requirements
- Large-scale deployments
- Existing Redis Cluster infrastructure
### None Driver

Best for: disabling rate limiting entirely.

Use the none driver when:

- Running behind an external rate limiter (e.g., nginx or an API gateway)
- Operating on a trusted internal network
- Developing with no limits
## Rate Limiting Behavior

### HTTP API Endpoints

Rate limiting is enforced on these endpoints:

| Endpoint | Rate Limited | Limit Type |
|---|---|---|
| `POST /apps/:app_id/events` | ✅ Yes | Per-app |
| `POST /apps/:app_id/batch_events` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels/:channel` | ✅ Yes | Per-app |
| `GET /apps/:app_id/channels/:channel/users` | ✅ Yes | Per-app |
| Health endpoints (`/up/:app_id`) | ❌ No | N/A |
| Metrics endpoint (`/metrics`) | ❌ No | N/A |
### Rate Limit Algorithm

Sockudo uses a token bucket algorithm:

- Each app has a bucket with a maximum capacity of `max_client_events_per_second` tokens
- Tokens refill at a rate of `max_client_events_per_second` per second
- Each request consumes one token
- If no tokens are available, the request is rejected with `429 Too Many Requests`
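The steps above can be sketched in Python (an illustration of the token bucket algorithm, not Sockudo's actual implementation):

```python
import time

class TokenBucket:
    """Illustrative token bucket: `capacity` tokens, refilled continuously."""

    def __init__(self, capacity: float, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_second)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request allowed
        return False      # reject with 429 Too Many Requests

# A bucket sized for 3 events/second: the first 3 back-to-back requests
# pass, the 4th is rejected until tokens refill.
bucket = TokenBucket(capacity=3, refill_per_second=3)
results = [bucket.try_consume() for _ in range(4)]
```

Because tokens refill continuously rather than in fixed windows, short bursts up to the bucket capacity are tolerated while the sustained rate stays bounded.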
### Response Headers

Rate limit information is included in response headers:

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests per second |
| `X-RateLimit-Remaining` | Tokens remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
### Rate Limit Exceeded

When the rate limit is exceeded, the server responds with `429 Too Many Requests`. Clients should:

- Wait until the `X-RateLimit-Reset` timestamp
- Implement exponential backoff
- Queue requests client-side
## Use Cases

### 1. Preventing API Abuse

Problem: a malicious client makes excessive API calls.

Solution: enable rate limiting with a per-app limit sized for legitimate traffic.

### 2. Fair Resource Allocation

Problem: one app consumes all server resources.

Solution: give each app its own `max_client_events_per_second` so no single application can monopolize the server.

### 3. Multi-Tenant SaaS

Problem: different customers need different limits.

Solution: configure `max_client_events_per_second` per app to match each customer's plan.

### 4. Development vs Production

Problem: different environments need different limits.

Solution: use environment-specific configuration to set looser limits in development and stricter, abuse-resistant limits in production.
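The multi-tenant case can be sketched as differing per-app limits (the app IDs and the `apps` layout are illustrative):

```json
{
  "apps": [
    { "id": "customer-free-tier", "max_client_events_per_second": 100 },
    { "id": "customer-enterprise", "max_client_events_per_second": 5000 }
  ]
}
```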
## Best Practices

### 1. Set Appropriate Limits

Too low: legitimate traffic gets blocked. Too high: abuse isn't prevented. Just right: normal usage is allowed while abuse is blocked.

### 2. Use Redis for Multi-Node Deployments
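Switching the backend is a one-line driver change in this sketch (assuming the rate limiter options live under a `rate_limiter` key):

```json
{
  "rate_limiter": {
    "enabled": true,
    "driver": "redis"
  }
}
```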
### 3. Monitor Rate Limit Metrics

Track rate limit hits via Prometheus metrics.

### 4. Implement Client-Side Backoff
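A client-side sketch of exponential backoff on `429` responses (the `send` callable is a stand-in for your real HTTP client):

```python
import time

def send_with_backoff(send, max_retries=5, base_delay=0.5):
    """Call `send()` and retry with exponential backoff while it returns 429."""
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        # Exponential backoff: base_delay * 2^attempt (0.5s, 1s, 2s, ...).
        time.sleep(base_delay * (2 ** attempt))
    return 429  # still rate limited after all retries

# Stand-in transport: rate limited twice, then accepted.
responses = iter([429, 429, 200])
status = send_with_backoff(lambda: next(responses), base_delay=0.01)
```

In production, add random jitter to each delay so many clients do not retry in lockstep.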
### 5. Consider External Rate Limiters
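Fronting Sockudo with nginx and limiting requests there might look like this sketch (the zone name, rate, and upstream address are assumptions):

```nginx
# In the http {} context: one shared zone keyed by client IP.
limit_req_zone $binary_remote_addr zone=sockudo_api:10m rate=100r/s;

server {
    listen 80;

    location /apps/ {
        # Allow short bursts, reject the rest with 429.
        limit_req zone=sockudo_api burst=50 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:6001;
    }
}
```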
For advanced scenarios, use nginx or an API gateway in front of Sockudo.

## Monitoring
### Prometheus Metrics

### Logs

## Troubleshooting
### Rate Limits Not Enforced

Check 1: Is rate limiting enabled? Verify that `enabled` is `true` and `driver` is not set to `none`.

### Legitimate Traffic Blocked

Symptom: users report `429` errors during normal usage.

Solution: increase the per-app limit.
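Raising the limit for the affected app might look like this (the value and the `apps` layout are illustrative):

```json
{
  "apps": [
    { "id": "app-1", "max_client_events_per_second": 2000 }
  ]
}
```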
### Multi-Node Limit Multiplication

Symptom: the effective limit is higher than configured (e.g., 3000 instead of 1000).

Cause: the memory driver keeps per-node counters, so with 3 nodes the total is 1000 × 3 = 3000.

Solution: switch to the Redis driver for shared limits.

### Redis Connection Issues

Symptom: rate limiting does not work with the Redis driver.

Check the Redis connection (for example, `redis-cli ping` should return `PONG`).

## Migration Guide
### Enabling Rate Limiting
1. Enable rate limiting in your configuration (set `enabled` to `true` and choose a driver).

### Switching from Memory to Redis
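A sketch of the combined change (the `database.redis` connection block is an assumption; check the Redis configuration reference for the exact keys):

```json
{
  "rate_limiter": {
    "enabled": true,
    "driver": "redis"
  },
  "database": {
    "redis": {
      "host": "127.0.0.1",
      "port": 6379
    }
  }
}
```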
1. Configure the Redis connection and set `driver` to `redis`.

## Next Steps
- **Webhooks**: Configure event notifications with batching
- **Presence Channels**: Track online users with presence channels