Overview
Sockudo is designed for high-performance real-time communication. This guide covers configuration options and best practices for tuning Sockudo to handle high concurrent connection loads efficiently.
Connection Pool Optimization
Database Connection Pools
Connection pooling is critical for production deployments that use external databases.
Environment Variables:
DynamoDB uses the AWS SDK client, which manages its own connection behavior; pool settings do not apply to DynamoDB.
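For the pooled database backends, a pool section in the server config might look like the sketch below; `pool_max` appears in this guide's quick-reference table, but the nesting and the `pool_min` key are assumptions — confirm against the Sockudo configuration reference before copying:

```json
{
  "database": {
    "mysql": {
      "pool_max": 10,
      "pool_min": 2
    }
  }
}
```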
Redis Connection Pools
| Deployment Size | Database Pool Max | Redis Pool Size |
|---|---|---|
| Small (1-1K connections) | 5-10 | 5 |
| Medium (1K-10K connections) | 10-20 | 10 |
| Large (10K-50K connections) | 20-40 | 20 |
| Extra Large (50K+ connections) | 40-100 | 50 |
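To make the tiers above concrete, here is a small helper that returns the recommended upper bounds from the sizing table; it is illustrative only and not part of Sockudo:

```python
def recommended_pool_sizes(connections: int) -> tuple[int, int]:
    """Return (database_pool_max, redis_pool_size) for an expected number
    of concurrent connections, per the sizing table above.
    Uses the upper bound of each database-pool range."""
    if connections < 1_000:     # Small
        return 10, 5
    if connections < 10_000:    # Medium
        return 20, 10
    if connections < 50_000:    # Large
        return 40, 20
    return 100, 50              # Extra Large
```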
WebSocket Buffer Configuration
Sockudo uses bounded buffers to protect against slow consumers that can’t keep up with message delivery.
Buffer Limit Modes
Mode 1: Message Count Only (Default - Fastest)
Buffer Behavior
- When `disconnect_on_buffer_full: true` → the connection is closed with error code 4100
- When `disconnect_on_buffer_full: false` → new messages are dropped silently (logged as a warning)
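As a sketch, these options might be set like this; `disconnect_on_buffer_full` comes from the behavior described above, while the `websocket` section name and the `max_buffered_messages` key are assumptions — check the configuration reference:

```json
{
  "websocket": {
    "max_buffered_messages": 256,
    "disconnect_on_buffer_full": true
  }
}
```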
Performance Characteristics
| Mode | Overhead | Memory Control |
|---|---|---|
| Message-only | Zero (uses bounded channel) | Approximate |
| Byte-only | ~1-2ns per message | Precise |
| Both | Atomic counter + channel check | Most precise |
Memory Estimation
- Message-only mode: ~1-2KB per message (typical)
- Byte-only mode: Exact memory limit (e.g., 1MB = 1MB max)
- 10,000 connections with 1MB byte limit: ~10GB worst case
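The worst-case figure above is simple multiplication; as an illustration (not Sockudo code):

```python
def worst_case_buffer_bytes(connections: int, per_connection_limit_bytes: int) -> int:
    """Worst case: every connection's outbound buffer is full at its byte limit."""
    return connections * per_connection_limit_bytes

# 10,000 connections with a 1 MiB byte limit each -> the "~10GB" figure above
total = worst_case_buffer_bytes(10_000, 1_048_576)
gib = total / 2**30  # 9.765625 GiB, i.e. roughly 10 GB
```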
Cleanup Queue Configuration
The async cleanup queue processes WebSocket disconnections in the background to prevent mass disconnections from blocking new connections.
Configuration Options
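The cleanup options named in this guide's quick-reference table (`queue_buffer_size`, `batch_size`, `batch_timeout_ms`, `worker_threads`) might be grouped as in the sketch below; the `cleanup` section name is an assumption, and the values shown match the standard 2vCPU/2GB recommendation:

```json
{
  "cleanup": {
    "queue_buffer_size": 50000,
    "batch_size": 25,
    "batch_timeout_ms": 50,
    "worker_threads": "auto"
  }
}
```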
Deployment Scenarios
Small Deployment (1vCPU/1GB RAM)
Use Case: Development, testing, small production instances
- Memory Usage: ~300KB queue buffer
- CPU Impact: Minimal (1 worker)
- Latency: 100ms max cleanup delay
Standard Deployment (2vCPU/2GB RAM) - Recommended
Use Case: Most common production deployments
- Memory Usage: ~30MB queue buffer per worker
- CPU Impact: Low (auto selects 1 worker for 2vCPU)
- Latency: 50ms max cleanup delay
High-Traffic Deployment (4vCPU/4GB+ RAM)
Use Case: High concurrent connection loads (>10K connections)
- Memory Usage: ~6MB per worker (total: ~12MB with 2 workers)
- CPU Impact: Moderate (2 workers)
- Latency: 25ms max cleanup delay
Ultra High-Traffic Deployment (8vCPU/8GB+ RAM)
Use Case: Massive scale deployments (>50K connections)
- Memory Usage: ~30MB per worker (total: ~120MB with 4 workers)
- CPU Impact: High (4 workers)
- Latency: 10ms max cleanup delay
Worker Threads Scaling
The `worker_threads` setting supports:
- Fixed number: specify an exact worker count (e.g., `2`)
- Auto-detection: use `"auto"` to scale based on CPU cores
When set to `"auto"`, the system uses 25% of available CPU cores (minimum 1, maximum 4):
- 1-7 CPUs → 1 worker
- 8-11 CPUs → 2 workers
- 12-15 CPUs → 3 workers
- 16+ CPUs → 4 workers
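The auto-detection rule above (25% of cores, clamped between 1 and 4) can be sketched as:

```python
def auto_cleanup_workers(cpu_cores: int) -> int:
    """25% of available cores, clamped to a minimum of 1 and a maximum of 4."""
    return max(1, min(4, cpu_cores // 4))
```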
All configuration values (except `worker_threads`) are applied per worker, not as total system capacity.
Adapter Performance Tuning
Redis/Redis Cluster
NATS Configuration
Socket Counting
When socket counting is disabled, `get_sockets_count` returns 0 to avoid the overhead of tracking connection counts.
CPU Scaling Considerations
Worker Thread Auto-Scaling
Sockudo automatically scales cleanup workers based on available CPU:
Manual CPU Allocation
For fine-grained control:
Cache Configuration
Rate Limiting Configuration
Performance Monitoring
Prometheus Metrics
Sockudo exposes metrics at `/metrics` (port 9601 by default):
Key Metrics to Monitor
- `sockudo_websocket_connections_total` - Total active connections
- `sockudo_messages_received_total` - Incoming message rate
- `sockudo_messages_sent_total` - Outgoing message rate
- `sockudo_cleanup_queue_size` - Cleanup queue depth
- `sockudo_adapter_operations_duration_seconds` - Adapter operation latency
Quick Reference Table
Configuration by Server Size
| Server Spec | queue_buffer_size | batch_size | batch_timeout_ms | worker_threads | pool_max |
|---|---|---|---|---|---|
| 1vCPU/1GB | 500 | 10 | 100 | 1 | 5 |
| 2vCPU/2GB | 50000 | 25 | 50 | auto (1) | 10 |
| 4vCPU/4GB | 10000 | 100 | 25 | 2 | 20 |
| 8vCPU/8GB | 50000 | 500 | 10 | 4 | 40 |
Environment Variables Quick Reference
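`CLEANUP_WORKER_THREADS` is referenced under Best Practices below; the commented-out variables are hypothetical placeholders showing the pattern, not confirmed Sockudo settings:

```shell
# Cleanup worker auto-scaling; this variable name appears in this guide.
export CLEANUP_WORKER_THREADS=auto

# Hypothetical examples of the same pattern -- verify the real names
# in the Sockudo configuration reference before relying on them:
# export DATABASE_POOL_MAX=10
# export REDIS_POOL_SIZE=10
```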
Best Practices
- Start Conservative: Begin with standard deployment settings and tune based on metrics
- Monitor Actively: Watch queue health and connection latency during initial deployment
- Test Load: Run mass disconnection tests before production
- Use Auto-Scaling: Let `CLEANUP_WORKER_THREADS=auto` handle CPU allocation
- Profile Regularly: Use Prometheus metrics to identify bottlenecks
- Disable Unused Features: Turn off socket counting if not needed
- Use Redis for Scale: Switch to Redis/Redis Cluster for multi-node deployments
Next Steps
- Troubleshooting - Debug common performance issues
- Monitoring & Metrics - Set up comprehensive monitoring
- Security Best Practices - Secure your deployment