
Overview

Backup and disaster recovery are critical for production Sockudo deployments. This guide covers backing up application configuration, database backends, and implementing disaster recovery procedures.

What to Back Up

Sockudo deployment backups should include:
  1. Application Configuration - config.json, environment variables
  2. Application Data - App definitions (if using database backends)
  3. SSL Certificates - Certificate and key files
  4. Database State - MySQL, PostgreSQL, or DynamoDB app data
  5. Redis Data - Persistent data (if using Redis persistence)
WebSocket connections and in-flight messages are ephemeral and cannot be backed up. Clients will reconnect after restore.

Backing Up Configuration

Configuration Files

Backup config.json:
#!/bin/bash
# backup-config.sh

BACKUP_DIR="/var/backups/sockudo"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Backup main configuration
cp /app/config/config.json $BACKUP_DIR/config_$DATE.json

# Backup environment file (if used)
cp /app/.env $BACKUP_DIR/env_$DATE.backup

# Backup SSL certificates
cp -r /app/ssl $BACKUP_DIR/ssl_$DATE/

echo "Configuration backed up to $BACKUP_DIR"

Environment Variables

Export environment variables:
#!/bin/bash
# backup-env.sh

BACKUP_FILE="/var/backups/sockudo/env_$(date +%Y%m%d_%H%M%S).env"

# Export Sockudo-specific environment variables
env | grep -E "^(SOCKUDO_|DATABASE_|REDIS_|ADAPTER_|CACHE_|QUEUE_|RATE_|SSL_|CORS_|CLEANUP_|WEBSOCKET_|METRICS_|NATS_|AWS_)" > $BACKUP_FILE

echo "Environment variables backed up to $BACKUP_FILE"

Version Control

Store configuration in Git (recommended):
# Initialize git repository for config
cd /app/config
git init

# Create .gitignore to exclude secrets
cat > .gitignore <<EOF
.env
*.key
*.pem
*_secret*
EOF

# Commit configuration
git add config.json
git commit -m "Initial Sockudo configuration"

# Push to private repository
git remote add origin git@github.com:your-org/sockudo-config.git
git push -u origin main
Never commit secrets, private keys, or passwords to version control, even in private repositories.
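For the secrets that must stay out of Git, one option is to keep an encrypted copy alongside the repository. A minimal sketch, assuming a `BACKUP_PASSPHRASE` environment variable supplies the key (an assumption for illustration, not a Sockudo setting):

```shell
# Encrypt/decrypt a secrets file (e.g. .env) with a passphrase-derived key.
# BACKUP_PASSPHRASE is an assumed environment variable, not part of Sockudo.
encrypt_secret_file() {
  local src="$1" dst="$1.enc"
  openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in "$src" -out "$dst" -pass env:BACKUP_PASSPHRASE
  echo "$dst"
}

decrypt_secret_file() {
  local src="$1" dst="${1%.enc}"
  openssl enc -d -aes-256-cbc -pbkdf2 \
    -in "$src" -out "$dst" -pass env:BACKUP_PASSPHRASE
  echo "$dst"
}
```

The resulting .enc file can be committed or shipped off-site; only holders of the passphrase can recover the plaintext.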

Backing Up Application Data

Memory Backend (No Backup Needed)

If using the memory app manager driver, apps are defined in config.json. Back up the configuration file:
cp /app/config/config.json /var/backups/sockudo/config.json

MySQL Backend

Backup all applications:
#!/bin/bash
# backup-mysql.sh

BACKUP_DIR="/var/backups/sockudo/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
DB_NAME="sockudo"

mkdir -p $BACKUP_DIR

# Dump applications table
mysqldump -h $DATABASE_MYSQL_HOST \
  -u $DATABASE_MYSQL_USERNAME \
  -p$DATABASE_MYSQL_PASSWORD \
  $DB_NAME applications \
  > $BACKUP_DIR/applications_$DATE.sql

# Compress backup
gzip $BACKUP_DIR/applications_$DATE.sql

echo "MySQL backup saved to $BACKUP_DIR/applications_$DATE.sql.gz"
Automated backup with cron:
# Add to crontab: Daily backup at 2 AM
0 2 * * * /opt/sockudo/scripts/backup-mysql.sh
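A nightly dump can occasionally run long; guarding the cron job with flock prevents two runs from overlapping. A sketch (the lock file path is an arbitrary choice):

```shell
# Run a command under an exclusive, non-blocking lock; if another run
# still holds the lock, exit immediately instead of starting a second backup.
run_exclusive() {
  local lock="${BACKUP_LOCK_FILE:-/tmp/sockudo-backup.lock}"
  flock -n "$lock" "$@"
}
```

In crontab this becomes: `0 2 * * * flock -n /tmp/sockudo-backup.lock /opt/sockudo/scripts/backup-mysql.sh`.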
Restore MySQL backup:
#!/bin/bash
# restore-mysql.sh

BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file.sql.gz>"
  exit 1
fi

# Decompress and restore
gunzip -c $BACKUP_FILE | mysql \
  -h $DATABASE_MYSQL_HOST \
  -u $DATABASE_MYSQL_USERNAME \
  -p$DATABASE_MYSQL_PASSWORD \
  $DATABASE_MYSQL_DATABASE

echo "MySQL backup restored from $BACKUP_FILE"

PostgreSQL Backend

Backup applications:
#!/bin/bash
# backup-postgres.sh

BACKUP_DIR="/var/backups/sockudo/postgres"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Dump applications table
PGPASSWORD=$DATABASE_POSTGRES_PASSWORD pg_dump \
  -h $DATABASE_POSTGRES_HOST \
  -U $DATABASE_POSTGRES_USERNAME \
  -d $DATABASE_POSTGRES_DATABASE \
  -t applications \
  > $BACKUP_DIR/applications_$DATE.sql

# Compress backup
gzip $BACKUP_DIR/applications_$DATE.sql

echo "PostgreSQL backup saved to $BACKUP_DIR/applications_$DATE.sql.gz"
Restore PostgreSQL backup:
#!/bin/bash
# restore-postgres.sh

BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file.sql.gz>"
  exit 1
fi

# Decompress and restore
gunzip -c $BACKUP_FILE | PGPASSWORD=$DATABASE_POSTGRES_PASSWORD psql \
  -h $DATABASE_POSTGRES_HOST \
  -U $DATABASE_POSTGRES_USERNAME \
  -d $DATABASE_POSTGRES_DATABASE

echo "PostgreSQL backup restored from $BACKUP_FILE"

DynamoDB Backend

Backup DynamoDB table:
#!/bin/bash
# backup-dynamodb.sh

BACKUP_DIR="/var/backups/sockudo/dynamodb"
DATE=$(date +%Y%m%d_%H%M%S)
TABLE_NAME="sockudo-applications"

mkdir -p $BACKUP_DIR

# Create an on-demand backup (stored and managed by DynamoDB itself)
aws dynamodb create-backup \
  --table-name $TABLE_NAME \
  --backup-name sockudo-backup-$DATE

echo "DynamoDB backup created: sockudo-backup-$DATE"
Alternative: Export to JSON:
#!/bin/bash
# export-dynamodb.sh

BACKUP_DIR="/var/backups/sockudo/dynamodb"
DATE=$(date +%Y%m%d_%H%M%S)
TABLE_NAME="sockudo-applications"

mkdir -p $BACKUP_DIR

# Scan all items
aws dynamodb scan \
  --table-name $TABLE_NAME \
  --output json \
  > $BACKUP_DIR/applications_$DATE.json

# Compress
gzip $BACKUP_DIR/applications_$DATE.json

echo "DynamoDB data exported to $BACKUP_DIR/applications_$DATE.json.gz"
Restore DynamoDB backup:
#!/bin/bash
# restore-dynamodb.sh

BACKUP_NAME="$1"

if [ -z "$BACKUP_NAME" ]; then
  echo "Usage: $0 <backup-name>"
  exit 1
fi

# Look up the backup ARN by name, then restore.
# Note: the target table must not already exist.
BACKUP_ARN=$(aws dynamodb list-backups \
  --table-name sockudo-applications \
  --query "BackupSummaries[?BackupName=='$BACKUP_NAME'].BackupArn | [0]" \
  --output text)

aws dynamodb restore-table-from-backup \
  --target-table-name sockudo-applications \
  --backup-arn "$BACKUP_ARN"

echo "DynamoDB table restored from backup: $BACKUP_NAME"

Redis Data Backup

Redis Persistence

If using Redis as the adapter or cache backend, configure persistence in redis.conf:
# Enable RDB snapshots
save 900 1
save 300 10
save 60 10000

# RDB file location
dir /var/lib/redis
dbfilename dump.rdb

# Enable AOF (optional, for durability)
appendonly yes
appendfilename "appendonly.aof"

Backup Redis Data

Manual backup:
#!/bin/bash
# backup-redis.sh

BACKUP_DIR="/var/backups/sockudo/redis"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Record the last save timestamp, then trigger a background save
LAST_SAVE=$(redis-cli -h $DATABASE_REDIS_HOST -a $DATABASE_REDIS_PASSWORD LASTSAVE)
redis-cli -h $DATABASE_REDIS_HOST -a $DATABASE_REDIS_PASSWORD BGSAVE

# Wait until LASTSAVE advances, i.e. the background save has completed
while [ "$(redis-cli -h $DATABASE_REDIS_HOST -a $DATABASE_REDIS_PASSWORD LASTSAVE)" -eq "$LAST_SAVE" ]; do
  sleep 1
done

# Copy dump file
cp /var/lib/redis/dump.rdb $BACKUP_DIR/dump_$DATE.rdb
gzip $BACKUP_DIR/dump_$DATE.rdb

echo "Redis backup saved to $BACKUP_DIR/dump_$DATE.rdb.gz"
Restore Redis backup:
#!/bin/bash
# restore-redis.sh

BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file.rdb.gz>"
  exit 1
fi

# Stop Redis
systemctl stop redis

# Restore dump file
gunzip -c $BACKUP_FILE > /var/lib/redis/dump.rdb
chown redis:redis /var/lib/redis/dump.rdb

# Start Redis
systemctl start redis

echo "Redis backup restored from $BACKUP_FILE"
If using Redis Cluster, you must back up each cluster node separately.
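To cover every shard, one approach is to parse the CLUSTER NODES output for master addresses and trigger BGSAVE on each. A sketch, assuming the standard CLUSTER NODES line format (field 2 is "ip:port@cluster-port", field 3 the flags):

```shell
# Print host:port for every healthy master listed in CLUSTER NODES
# output (read from stdin).
cluster_master_addrs() {
  awk '$3 ~ /master/ && $3 !~ /fail/ { split($2, a, "@"); print a[1] }'
}

# Usage sketch (requires a reachable cluster; not run here):
# redis-cli -h "$SEED_NODE" cluster nodes | cluster_master_addrs | \
# while IFS=: read -r host port; do
#   redis-cli -h "$host" -p "$port" BGSAVE
# done
```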

SSL Certificates

Backup certificates:
#!/bin/bash
# backup-ssl.sh

BACKUP_DIR="/var/backups/sockudo/ssl"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR/$DATE

# Backup SSL certificates
cp /etc/ssl/certs/sockudo.crt $BACKUP_DIR/$DATE/
cp /etc/ssl/private/sockudo.key $BACKUP_DIR/$DATE/

# If using Let's Encrypt (-L dereferences the symlinks in live/,
# which otherwise point to files outside the backup)
cp -rL /etc/letsencrypt/live $BACKUP_DIR/$DATE/letsencrypt/

# Encrypt backup (recommended); openssl prompts for a passphrase
tar czf - -C $BACKUP_DIR $DATE | \
  openssl enc -aes-256-cbc -pbkdf2 -salt -out $BACKUP_DIR/ssl_$DATE.tar.gz.enc

# Remove unencrypted backup
rm -rf $BACKUP_DIR/$DATE

echo "SSL certificates backed up (encrypted) to $BACKUP_DIR/ssl_$DATE.tar.gz.enc"
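Recovery reverses the pipeline: decrypt with the same cipher and KDF flags used at encryption time, then unpack. A sketch, assuming the passphrase is supplied via an SSL_BACKUP_PASSPHRASE environment variable (an assumption; interactive openssl prompts also work):

```shell
# Decrypt an encrypted SSL backup archive and unpack it into a directory.
# The cipher/KDF flags must match those used when the archive was encrypted.
decrypt_ssl_backup() {
  local enc="$1" dest="$2"
  mkdir -p "$dest"
  openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass env:SSL_BACKUP_PASSPHRASE -in "$enc" | tar xzf - -C "$dest"
}
```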

Complete Backup Script

Full Sockudo backup:
#!/bin/bash
# backup-sockudo-full.sh

BACKUP_DIR="/var/backups/sockudo"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_PATH="$BACKUP_DIR/full_$DATE"

mkdir -p $BACKUP_PATH

echo "Starting full Sockudo backup..."

# 1. Backup configuration
echo "Backing up configuration..."
cp /app/config/config.json $BACKUP_PATH/
cp /app/.env $BACKUP_PATH/ 2>/dev/null || true

# 2. Backup SSL certificates
echo "Backing up SSL certificates..."
mkdir -p $BACKUP_PATH/ssl
cp -r /app/ssl/* $BACKUP_PATH/ssl/ 2>/dev/null || true

# 3. Backup database (if MySQL)
if [ ! -z "$DATABASE_MYSQL_HOST" ]; then
  echo "Backing up MySQL database..."
  mysqldump -h $DATABASE_MYSQL_HOST \
    -u $DATABASE_MYSQL_USERNAME \
    -p$DATABASE_MYSQL_PASSWORD \
    $DATABASE_MYSQL_DATABASE applications \
    > $BACKUP_PATH/mysql_applications.sql
fi

# 4. Backup Redis
if [ ! -z "$DATABASE_REDIS_HOST" ]; then
  echo "Backing up Redis..."
  redis-cli -h $DATABASE_REDIS_HOST -a $DATABASE_REDIS_PASSWORD BGSAVE
  sleep 5
  cp /var/lib/redis/dump.rdb $BACKUP_PATH/redis_dump.rdb 2>/dev/null || true
fi

# 5. Create compressed archive
echo "Creating compressed archive..."
tar czf $BACKUP_DIR/sockudo_backup_$DATE.tar.gz -C $BACKUP_PATH .

# 6. Cleanup temporary files
rm -rf $BACKUP_PATH

echo "Full backup completed: $BACKUP_DIR/sockudo_backup_$DATE.tar.gz"

Automated Backup Schedule

Create systemd timer:
# /etc/systemd/system/sockudo-backup.service
[Unit]
Description=Sockudo Backup Service
After=network.target

[Service]
Type=oneshot
ExecStart=/opt/sockudo/scripts/backup-sockudo-full.sh
User=root
# /etc/systemd/system/sockudo-backup.timer
[Unit]
Description=Sockudo Backup Timer

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
Enable timer:
sudo systemctl daemon-reload
sudo systemctl enable sockudo-backup.timer
sudo systemctl start sockudo-backup.timer

# Check timer status
sudo systemctl status sockudo-backup.timer

Backup Retention

Cleanup old backups:
#!/bin/bash
# cleanup-old-backups.sh

BACKUP_DIR="/var/backups/sockudo"
RETENTION_DAYS=30

echo "Cleaning up backups older than $RETENTION_DAYS days..."

find $BACKUP_DIR -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
find $BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
find $BACKUP_DIR -name "*.rdb.gz" -mtime +$RETENTION_DAYS -delete

echo "Cleanup completed"
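An unconditional find -delete can wipe every backup if the backup job has been silently failing for longer than the retention window. A defensive variant refuses to prune below a minimum count (the threshold of 3 is an arbitrary choice):

```shell
# Delete backups matching $2 in $1 older than $3 days, but only if at
# least $4 (default 3) newer backups would remain afterwards.
safe_prune() {
  local dir="$1" pattern="$2" days="$3" min_keep="${4:-3}"
  local total old
  total=$(find "$dir" -name "$pattern" -type f | wc -l)
  old=$(find "$dir" -name "$pattern" -type f -mtime +"$days" | wc -l)
  if [ $((total - old)) -lt "$min_keep" ]; then
    echo "refusing to prune: only $((total - old)) backups would remain" >&2
    return 1
  fi
  find "$dir" -name "$pattern" -type f -mtime +"$days" -delete
}
```

Paired with alerting, the refusal becomes a signal that the backup job itself needs attention.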

Disaster Recovery

Recovery Procedure

1. Provision new server:
# Install Sockudo
curl -fsSL https://get.sockudo.io | bash
2. Restore configuration:
#!/bin/bash
# restore-full.sh

BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file.tar.gz>"
  exit 1
fi

echo "Restoring from backup: $BACKUP_FILE"

# Extract backup
RESTORE_DIR="/tmp/sockudo-restore"
mkdir -p $RESTORE_DIR
tar xzf $BACKUP_FILE -C $RESTORE_DIR

# Restore configuration
cp $RESTORE_DIR/config.json /app/config/
cp $RESTORE_DIR/.env /app/ 2>/dev/null || true

# Restore SSL certificates
cp -r $RESTORE_DIR/ssl/* /app/ssl/

# Restore database
if [ -f "$RESTORE_DIR/mysql_applications.sql" ]; then
  echo "Restoring MySQL database..."
  mysql -h $DATABASE_MYSQL_HOST \
    -u $DATABASE_MYSQL_USERNAME \
    -p$DATABASE_MYSQL_PASSWORD \
    $DATABASE_MYSQL_DATABASE < $RESTORE_DIR/mysql_applications.sql
fi

# Restore Redis
if [ -f "$RESTORE_DIR/redis_dump.rdb" ]; then
  echo "Restoring Redis..."
  systemctl stop redis
  cp $RESTORE_DIR/redis_dump.rdb /var/lib/redis/dump.rdb
  chown redis:redis /var/lib/redis/dump.rdb
  systemctl start redis
fi

# Cleanup
rm -rf $RESTORE_DIR

echo "Restore completed. Restart Sockudo to apply changes."
3. Restart Sockudo:
systemctl restart sockudo

# Verify service is running
systemctl status sockudo

# Check health
curl http://localhost:6001/up/app-id
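After a restore, scripts should not assume the service is instantly ready; a small polling loop makes any follow-up steps reliable. A sketch (the endpoint path follows the health check above; the timeout is arbitrary):

```shell
# Poll a URL until it answers successfully or the timeout elapses.
wait_for_healthy() {
  local url="$1" timeout="${2:-60}" start
  start=$(date +%s)
  until curl -fsS "$url" > /dev/null 2>&1; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      echo "not healthy after ${timeout}s: $url" >&2
      return 1
    fi
    sleep 2
  done
  echo "healthy: $url"
}
```

For example: `wait_for_healthy http://localhost:6001/up/app-id 120 && echo "safe to resume traffic"`.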

Testing Backups

Regular backup testing:
#!/bin/bash
# test-backup.sh

BACKUP_FILE="$1"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file.tar.gz>"
  exit 1
fi

echo "Testing backup integrity: $BACKUP_FILE"

# Test archive integrity
if tar tzf "$BACKUP_FILE" > /dev/null; then
  echo "✓ Archive integrity OK"
else
  echo "✗ Archive is corrupted"
  exit 1
fi

# Test SQL dumps without executing them: a complete mysqldump ends with
# a "-- Dump completed" footer (absent if dumped with --skip-comments)
for sql_file in $(tar tzf "$BACKUP_FILE" | grep '\.sql$'); do
  echo "Validating SQL: $sql_file"
  if tar xzf "$BACKUP_FILE" "$sql_file" -O | tail -n 1 | grep -q "Dump completed"; then
    echo "✓ SQL file valid"
  else
    echo "✗ SQL file appears truncated"
  fi
done

echo "Backup test completed"
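Integrity testing is cheaper when each archive carries a checksum from the moment it is created; bit rot or a truncated upload then shows up before a restore is ever attempted. A sketch:

```shell
# Write a SHA-256 checksum file next to a backup, and verify it later.
record_checksum() { sha256sum "$1" > "$1.sha256"; }
verify_checksum() { sha256sum --check --status "$1.sha256"; }
```

Run record_checksum right after the backup is written, and verify_checksum before any restore or off-site upload.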

Off-site Backup Storage

AWS S3

#!/bin/bash
# backup-to-s3.sh

BACKUP_FILE="$1"
S3_BUCKET="s3://my-sockudo-backups"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file>"
  exit 1
fi

# Upload to S3
aws s3 cp $BACKUP_FILE $S3_BUCKET/$(basename $BACKUP_FILE)

echo "Backup uploaded to $S3_BUCKET"

Google Cloud Storage

#!/bin/bash
# backup-to-gcs.sh

BACKUP_FILE="$1"
GCS_BUCKET="gs://my-sockudo-backups"

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: $0 <backup-file>"
  exit 1
fi

# Upload to GCS
gsutil cp $BACKUP_FILE $GCS_BUCKET/$(basename $BACKUP_FILE)

echo "Backup uploaded to $GCS_BUCKET"

Backup Best Practices

  1. Regular Schedule: Back up daily, keep 30 days of backups
  2. Test Restores: Test backup restoration monthly
  3. Off-site Storage: Store backups in different geographic location
  4. Encrypt Backups: Encrypt sensitive data before storage
  5. Automate: Use systemd timers or cron for automation
  6. Monitor: Alert on backup failures
  7. Document: Keep recovery procedures documented and tested
  8. Version Control: Store configuration in Git (exclude secrets)
  9. Retention Policy: Define and enforce backup retention periods
  10. Security: Restrict backup access with appropriate permissions
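Practice 6 above (alert on backup failures) can be wired in with a small wrapper around the backup command. A sketch; the webhook variable and JSON payload shape are placeholders for whatever alerting system is in use:

```shell
# Run a backup command; on failure, log and (optionally) POST a webhook.
# ALERT_WEBHOOK is an assumed environment variable, not a Sockudo setting.
run_backup_with_alert() {
  if "$@"; then
    echo "backup OK"
    return 0
  fi
  local msg="Sockudo backup failed on $(hostname) at $(date -Is)"
  echo "$msg" >&2
  if [ -n "${ALERT_WEBHOOK:-}" ]; then
    curl -fsS -X POST -H 'Content-Type: application/json' \
      -d "{\"text\": \"$msg\"}" "$ALERT_WEBHOOK" || true
  fi
  return 1
}
```

For example: `run_backup_with_alert /opt/sockudo/scripts/backup-sockudo-full.sh` from the systemd service or cron entry.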

Recovery Time Objectives

RTO is how long restoring a component should take; RPO is how much data loss is acceptable (here, everything since the last backup).

Component          Typical RTO     Typical RPO
Configuration      < 5 minutes     Last backup
MySQL/PostgreSQL   < 15 minutes    Last backup
Redis              < 10 minutes    Last backup
SSL Certificates   < 5 minutes     Last backup
Full System        < 30 minutes    Last backup
Active WebSocket connections cannot be restored. Clients will automatically reconnect after service is restored.

Next Steps