Sockudo uses pluggable backends for application management, caching, and queue processing. This lets you choose the right storage solution for your infrastructure.
Backend Types
Sockudo has three types of backends:
- App Manager - Stores application credentials and configuration
- Cache - Caches frequently accessed data (app configs, rate limits)
- Queue - Processes background jobs (webhooks, batched events)
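Each backend is selected by a driver setting. As an illustrative sketch (assuming Sockudo's JSON config file; the exact key names should be checked against the configuration reference), the three backends might be selected like this:

```json
{
  "app_manager": { "driver": "mysql" },
  "cache": { "driver": "redis" },
  "queue": { "driver": "redis" }
}
```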
App Manager
The app manager stores application credentials, API keys, and per-app configuration.
Available Drivers
- Memory - In-memory storage loaded from the config file. Fast but not persistent.
- MySQL - MySQL/MariaDB persistent storage.
- PostgreSQL - PostgreSQL persistent storage.
- DynamoDB - AWS DynamoDB serverless storage.
- ScyllaDB - ScyllaDB high-performance storage.
Memory App Manager
The memory app manager loads applications from your config file. It’s suitable for:
- Development and testing
- Single-app deployments with static configuration
- Scenarios where app config rarely changes
Configuration
Environment Variables
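A hypothetical static app definition for the memory driver might look like this (field names such as "apps", "key", and "secret" are illustrative assumptions, not taken from the reference):

```json
{
  "app_manager": {
    "driver": "memory",
    "apps": [
      {
        "id": "app-1",
        "key": "my-app-key",
        "secret": "my-app-secret",
        "enable_client_messages": true,
        "max_connections": 1000
      }
    ]
  }
}
```

Because this driver is not persistent, any change to an app requires editing the config file and restarting Sockudo.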
MySQL App Manager
Store applications in MySQL for persistent, shared configuration across multiple Sockudo instances.
Configuration
Environment Variables
MySQL server hostname.
MySQL server port.
MySQL username.
MySQL password.
MySQL database name.
Table name for storing application data.
Minimum connection pool size.
Maximum connection pool size.
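Putting the settings above together, a MySQL app manager config might be sketched as follows (key names are assumptions for illustration; map them to the actual environment variables or config keys in the reference):

```json
{
  "app_manager": { "driver": "mysql" },
  "database": {
    "mysql": {
      "host": "localhost",
      "port": 3306,
      "username": "sockudo",
      "password": "secret",
      "database": "sockudo",
      "table_name": "applications",
      "pool_min": 2,
      "pool_max": 10
    }
  }
}
```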
PostgreSQL App Manager
Configuration
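The PostgreSQL driver mirrors the MySQL settings. A hedged sketch (key names are illustrative assumptions):

```json
{
  "app_manager": { "driver": "postgres" },
  "database": {
    "postgres": {
      "host": "localhost",
      "port": 5432,
      "username": "sockudo",
      "password": "secret",
      "database": "sockudo",
      "table_name": "applications"
    }
  }
}
```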
DynamoDB App Manager
AWS DynamoDB provides serverless, scalable application storage.
Configuration
AWS region for DynamoDB.
DynamoDB table name.
Custom endpoint URL (for LocalStack or VPC endpoints).
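A possible DynamoDB app manager config, shown here pointing at a local LocalStack endpoint for development (key names and the endpoint are illustrative assumptions):

```json
{
  "app_manager": {
    "driver": "dynamodb",
    "dynamodb": {
      "region": "us-east-1",
      "table_name": "sockudo-applications",
      "endpoint_url": "http://localhost:4566"
    }
  }
}
```

In production, omit the custom endpoint so the AWS SDK resolves the regional DynamoDB endpoint, and grant the instance IAM permissions on the table.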
App Manager Cache
Enable caching of app configurations to reduce database load:
Enable caching of app configurations.
Cache TTL in seconds (5 minutes by default).
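These two settings might be expressed in config as follows (a sketch; key names are assumptions, and 300 seconds matches the stated 5-minute default):

```json
{
  "app_manager": {
    "cache": {
      "enabled": true,
      "ttl": 300
    }
  }
}
```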
Cache Backend
The cache backend stores frequently accessed data like app configurations, rate limit counters, and channel metadata.
Available Drivers
- memory - In-memory LRU cache (single instance only)
- redis - Redis cache (supports horizontal scaling)
- redis-cluster - Redis Cluster cache
- none - Disable caching
Memory Cache
Default TTL for cached items in seconds.
Interval for cleaning up expired cache entries in seconds.
Maximum number of items to store in cache before evicting least recently used.
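The memory cache settings above might map to a config fragment like this (key names are illustrative assumptions):

```json
{
  "cache": {
    "driver": "memory",
    "memory": {
      "ttl": 300,
      "cleanup_interval": 60,
      "max_capacity": 10000
    }
  }
}
```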
Redis Cache
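A single-node Redis cache might be configured like this sketch (the connection URL and key names are assumptions for illustration):

```json
{
  "cache": {
    "driver": "redis",
    "redis": {
      "url": "redis://127.0.0.1:6379",
      "prefix": "sockudo_cache:"
    }
  }
}
```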
Redis Cluster Cache
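For Redis Cluster, a list of seed nodes replaces the single URL. A hedged sketch (node addresses and key names are assumptions):

```json
{
  "cache": {
    "driver": "redis-cluster",
    "redis_cluster": {
      "nodes": [
        "redis://10.0.0.1:7000",
        "redis://10.0.0.2:7000",
        "redis://10.0.0.3:7000"
      ],
      "prefix": "sockudo_cache:"
    }
  }
}
```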
Queue Backend
The queue backend processes background jobs like webhook delivery and batched event processing.
Available Drivers
- memory - In-memory queue (single instance, not persistent)
- redis - Redis-backed queue (persistent, supports horizontal scaling)
- redis-cluster - Redis Cluster queue
- sqs - AWS SQS queue (serverless, highly scalable)
- none - Disable queue processing
Memory Queue
Redis Queue
Number of concurrent queue workers.
Redis key prefix for queue data.
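The Redis queue settings above might look like this in config (a sketch; key names are assumptions):

```json
{
  "queue": {
    "driver": "redis",
    "redis": {
      "url": "redis://127.0.0.1:6379",
      "concurrency": 5,
      "prefix": "sockudo_queue:"
    }
  }
}
```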
Redis Cluster Queue
AWS SQS Queue
AWS region for SQS.
Message visibility timeout in seconds.
Maximum messages to receive per batch.
Long polling wait time in seconds.
Number of concurrent SQS workers.
Use FIFO queue (guarantees message order).
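Combining the SQS settings above, a queue config might be sketched as follows (key names are illustrative assumptions; long polling with a non-zero wait time reduces empty-receive API calls):

```json
{
  "queue": {
    "driver": "sqs",
    "sqs": {
      "region": "us-east-1",
      "visibility_timeout": 30,
      "max_messages": 10,
      "wait_time_seconds": 5,
      "concurrency": 5,
      "fifo": false
    }
  }
}
```

Note that enabling FIFO requires an SQS queue whose name ends in ".fifo", and FIFO queues have lower throughput limits than standard queues.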
Rate Limiter Backend
The rate limiter tracks API request counts to enforce rate limits.
Enable rate limiting.
Backend driver: memory, redis, or redis-cluster.
Maximum API requests per window.
Rate limit window in seconds.
Number of reverse proxy hops to trust for client IP detection.
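A possible rate limiter config combining the settings above (key names are assumptions for illustration; the Redis driver keeps counters consistent across instances):

```json
{
  "rate_limiter": {
    "enabled": true,
    "driver": "redis",
    "api_rate_limit": {
      "max_requests": 100,
      "window_seconds": 60,
      "trust_hops": 1
    }
  }
}
```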
Feature Compilation
Backends must be compiled into the binary. Use Cargo features:
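A hypothetical build invocation enabling optional backends (the feature names here are assumptions; check Cargo.toml for the features Sockudo actually defines):

```shell
cargo build --release --features "mysql,redis"
```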
Connection Pooling
Database connections use pooling for optimal performance:
Minimum connections to maintain in pool.
Maximum connections in pool.
Best Practices
Development
- Use memory drivers for fastest setup
- Use Docker Compose for local Redis/MySQL
- Enable caching with short TTLs for quick iteration
Production
- Use persistent storage (MySQL, PostgreSQL, DynamoDB) for app manager
- Use Redis/Redis Cluster for cache and queue in multi-instance deployments
- Enable app manager caching to reduce database load
- Configure connection pooling based on expected load
- Use SQS for queue in AWS environments (serverless, auto-scaling)
- Enable rate limiting with Redis backend for accurate limits across instances
High Availability
- Use Redis Sentinel or Redis Cluster for cache/queue
- Use database replication for app manager
- Configure appropriate retry policies for queue workers
- Monitor queue depth and processing latency
Next Steps
- SSL/TLS Configuration - Enable HTTPS with certificates
- WebSocket Settings - Configure connection buffers and limits