Server Migration with Automatic Backup to Local Server
This comprehensive guide covers the complete process of migrating a Laravel application to a new VPS server while maintaining an automatic backup system to a local server. This approach ensures maximum data safety, enables easy rollback capabilities, and minimizes downtime during the migration process.
Architecture Overview
System Components
- Old Server (Production)
- Currently running Laravel application
- Active MySQL database
- Live user traffic
- Primary data source
- Local Server (Backup)
- Receives automatic backups from old server
- Acts as disaster recovery site
- Stores multiple backup versions
- Provides rollback capability
- New Server (Target)
- Fresh VPS for application migration
- Will become new production server
- Needs complete setup and configuration
Data Flow Architecture
┌─────────────────┐      ┌──────────────────┐      ┌─────────────────┐
│   Old Server    │───▶│   Local Backup   │───▶│   New Server    │
│                 │      │      Server      │      │                 │
│ • Laravel App   │      │ • DB Backups     │      │ • Laravel App   │
│ • MySQL DB      │      │ • App Archives   │      │ • MySQL DB      │
│ • User Files    │      │ • Incremental    │      │ • User Files    │
│ • Live Traffic  │      │ • Full Backups   │      │ • Live Traffic  │
└─────────────────┘      └──────────────────┘      └─────────────────┘
   (continuous backup push)   (restore during migration)
Benefits of This Approach
- Zero Data Loss: Continuous backup ensures no data is lost
- Quick Rollback: Easy restoration from local backup
- Minimal Downtime: Parallel setup and testing
- Risk Mitigation: Multiple backup points
- Disaster Recovery: Local server provides redundancy
Prerequisites
Server Requirements
Local Backup Server:
- Storage: at least 3x the combined size of the production database and application files
- Network: Reliable connection to production server
- SSH access: Configured for automated transfers
New Target Server:
- Same specifications as old server or better
- SSH access configured
- Network connectivity to local backup server
Software Requirements
# Required packages on all servers
rsync # Efficient file synchronization
mysqldump # Database backup tool
tar # Archive creation
sshpass # Password-based SSH (alternative to keys)
screen/tmux # Session management
Network Requirements
- SSH Connectivity: All servers can reach each other
- Bandwidth: Sufficient for data transfer
- Firewall Rules: Allow SSH and MySQL ports between servers
Phase 1: Setup Automatic Backup System
Step 1: Prepare Local Backup Server
1.1 Create Backup Directory Structure:
# Create user for automated backups (must exist before the chown below)
sudo useradd -m -s /bin/bash backup
# Create backup directories
sudo mkdir -p /data/backup/{database,application,logs,scripts}
# Set proper permissions
sudo chown -R backup:backup /data/backup
sudo chmod -R 755 /data/backup
# Prepare the .ssh directory; the old server's public key is added to
# authorized_keys in step 1.2. Note: ~ under sudo expands to root's home,
# so use explicit paths.
sudo -u backup mkdir -p /home/backup/.ssh
sudo -u backup chmod 700 /home/backup/.ssh
1.2 Configure SSH for Passwordless Access:
Backups flow from the old server to this local server, so the key pair is
generated on the OLD server (step 2.2) and only its public half is
authorized here:
# On LOCAL server, paste the old server's public key into the backup
# user's authorized_keys
sudo -u backup mkdir -p /home/backup/.ssh
sudo -u backup chmod 700 /home/backup/.ssh
sudo -u backup nano /home/backup/.ssh/authorized_keys
sudo -u backup chmod 600 /home/backup/.ssh/authorized_keys
1.3 Setup Backup Monitoring:
# Create log directory
sudo mkdir -p /var/log/backup
# Create log rotation
cat > /etc/logrotate.d/backup << EOF
/var/log/backup/*.log {
daily
rotate 30
compress
delaycompress
missingok
notifempty
create 644 backup backup
postrotate
systemctl reload rsyslog 2>/dev/null || true
endscript
}
EOF
Step 2: Configure Old Server for Automated Backup
2.1 Create Backup User on Old Server:
# Create backup user
sudo useradd -m -s /bin/bash backup
sudo usermod -aG sudo backup
# Set password for backup user
sudo passwd backup
# Add to sudoers for specific commands (use tee: a plain >> redirect runs
# without root privileges in this shell)
echo "backup ALL=(ALL) NOPASSWD: /usr/bin/mysqldump, /bin/tar, /usr/bin/rsync" | sudo tee /etc/sudoers.d/backup
sudo chmod 440 /etc/sudoers.d/backup
2.2 Setup SSH Key Authentication:
# On OLD server, create .ssh directory for the backup user
sudo -u backup mkdir -p /home/backup/.ssh
sudo -u backup chmod 700 /home/backup/.ssh
# Generate the key pair the backup script will use to reach the local server
sudo -u backup ssh-keygen -t rsa -b 4096 -f /home/backup/.ssh/id_rsa -N ""
# Print the public key, then add it to the LOCAL server's
# /home/backup/.ssh/authorized_keys (see step 1.2)
sudo -u backup cat /home/backup/.ssh/id_rsa.pub
2.3 Test SSH Connection:
# Test connection from old server to local server
sudo -u backup ssh -o StrictHostKeyChecking=no backup@local-server-ip "echo 'SSH connection successful'"
# Add to known_hosts to avoid prompts (use an explicit path; ~ under sudo
# expands to root's home)
sudo -u backup sh -c 'ssh-keyscan -H local-server-ip >> /home/backup/.ssh/known_hosts'
Step 3: Create Comprehensive Backup Script
3.1 Create Main Backup Script:
# Create backup script on OLD server
sudo nano /root/backup.sh
# Add comprehensive backup script:
#!/bin/bash
# ==========================================
# Klinik Gunung - Automated Backup Script
# ==========================================
set -e # Exit on any error
# Configuration
BACKUP_USER="backup"
LOCAL_SERVER="local-server-ip"
REMOTE_PATH="/data/backup"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/tmp/backup_$TIMESTAMP"
LOG_FILE="/var/log/backup/backup_$TIMESTAMP.log"
# Database Configuration
# (prefer ~/.my.cnf or mysql_config_editor over a plaintext password;
#  at minimum, chmod 600 this script)
DB_HOST="localhost"
DB_USER="root"
DB_PASS="your_mysql_password"
DB_NAME="klinik_gunung"
# Application Configuration
APP_DIR="/var/www/html"
EXCLUDE_PATTERNS=(
"--exclude=vendor/"
"--exclude=node_modules/"
"--exclude=.git/"
"--exclude=storage/logs/*"
"--exclude=storage/framework/cache/*"
"--exclude=bootstrap/cache/*"
)
# Functions
log() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
error_exit() {
log "ERROR: $1"
exit 1
}
cleanup() {
log "Cleaning up temporary files..."
rm -rf "$BACKUP_DIR"
}
trap cleanup EXIT
# Main execution
mkdir -p "$(dirname "$LOG_FILE")" # ensure the log directory exists before log() runs
log "Starting automated backup process..."
# Create backup directory
mkdir -p "$BACKUP_DIR" || error_exit "Failed to create backup directory"
# Step 1: Backup Database
log "Step 1: Backing up database..."
# Dump only the application database; --all-databases would also include
# the mysql system schema and cannot be cleanly restored into a single
# database on the new server
if ! mysqldump -h"$DB_HOST" -u"$DB_USER" -p"$DB_PASS" \
--single-transaction \
--routines \
--triggers \
"$DB_NAME" > "$BACKUP_DIR/database_$TIMESTAMP.sql"; then
error_exit "Database backup failed"
fi
# Compress database backup
gzip "$BACKUP_DIR/database_$TIMESTAMP.sql"
log "Database backup completed: $(du -h "$BACKUP_DIR/database_$TIMESTAMP.sql.gz" | cut -f1)"
# Step 2: Backup Application Files
log "Step 2: Backing up application files..."
# Store paths relative to / (strip the leading slash) so the archive can
# later be restored with `tar -x -C /`
if ! tar "${EXCLUDE_PATTERNS[@]}" -czf "$BACKUP_DIR/application_$TIMESTAMP.tar.gz" -C / "${APP_DIR#/}"; then
error_exit "Application backup failed"
fi
log "Application backup completed: $(du -h "$BACKUP_DIR/application_$TIMESTAMP.tar.gz" | cut -f1)"
# Step 3: Backup Configuration Files
log "Step 3: Backing up configuration files..."
CONFIG_FILES=(
"/etc/nginx/sites-available/"
"/etc/php/"
"/etc/mysql/"
"/etc/fail2ban/"
"/var/spool/cron/crontabs/"
)
for config in "${CONFIG_FILES[@]}"; do
if [ -e "$config" ]; then
config_name=$(basename "$config")
tar -czf "$BACKUP_DIR/config_${config_name}_$TIMESTAMP.tar.gz" "$config" 2>/dev/null || true
fi
done
log "Configuration backup completed"
# Step 4: Create Backup Manifest
log "Step 4: Creating backup manifest..."
cat > "$BACKUP_DIR/manifest_$TIMESTAMP.txt" << EOF
Backup Manifest - $TIMESTAMP
================================
Server: $(hostname)
Date: $(date)
Database: $DB_NAME
Application Directory: $APP_DIR
Files in this backup:
$(ls -lh "$BACKUP_DIR")
System Information:
- OS: $(lsb_release -d 2>/dev/null | cut -f2 || uname -s)
- Kernel: $(uname -r)
- Uptime: $(uptime -p)
Database Info:
- Size: $(du -sh "$BACKUP_DIR/database_$TIMESTAMP.sql.gz" 2>/dev/null | cut -f1 || echo "N/A")
- Tables: $(mysql -N -h"$DB_HOST" -u"$DB_USER" -p"$DB_PASS" -e "USE $DB_NAME; SHOW TABLES;" 2>/dev/null | wc -l)
Application Info:
- Size: $(du -sh "$BACKUP_DIR/application_$TIMESTAMP.tar.gz" 2>/dev/null | cut -f1 || echo "N/A")
- Laravel Version: $(cd "$APP_DIR" && php artisan --version 2>/dev/null || echo "N/A")
EOF
# Step 5: Transfer to Local Server
log "Step 5: Transferring backups to local server..."
# Note: no --delete here; it would erase earlier backups on the destination
if ! rsync -avz --progress \
-e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" \
"$BACKUP_DIR/" \
"$BACKUP_USER@$LOCAL_SERVER:$REMOTE_PATH/"; then
error_exit "File transfer failed"
fi
# Step 6: Verify Transfer
log "Step 6: Verifying backup integrity..."
if ! ssh -o StrictHostKeyChecking=no "$BACKUP_USER@$LOCAL_SERVER" "test -f $REMOTE_PATH/manifest_$TIMESTAMP.txt"; then
error_exit "Backup verification failed - manifest not found on remote server"
fi
# Step 7: Cleanup Old Backups
log "Step 7: Cleaning up old backups..."
ssh -o StrictHostKeyChecking=no "$BACKUP_USER@$LOCAL_SERVER" "
find $REMOTE_PATH -name '*.sql.gz' -mtime +30 -delete
find $REMOTE_PATH -name '*.tar.gz' -mtime +30 -delete
find $REMOTE_PATH -name 'manifest_*.txt' -mtime +90 -delete
find $REMOTE_PATH -name '*.log' -mtime +7 -delete
"
# Step 8: Send Notification (optional)
log "Step 8: Backup process completed successfully"
log "Backup location: $LOCAL_SERVER:$REMOTE_PATH"
log "Total backup size: $(du -sh "$BACKUP_DIR" 2>/dev/null | cut -f1 || echo "N/A")"
# Transfer log file
rsync -avz "$LOG_FILE" "$BACKUP_USER@$LOCAL_SERVER:$REMOTE_PATH/logs/"
log "Automated backup completed successfully!"
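Step 6 above only confirms that the manifest arrived. A stronger check is to compare checksums of every transferred file. A minimal sketch, with illustrative paths; on the remote side you would run the same check over SSH, e.g. `ssh "$BACKUP_USER@$LOCAL_SERVER" "cd $REMOTE_PATH && sha256sum -c SHA256SUMS"`:

```shell
#!/bin/bash
# Sketch: checksum-based backup verification. Generate SHA256SUMS next to
# the backups before transfer, then re-run the check on the copied files.

make_checksums() {            # make_checksums DIR
  ( cd "$1" && sha256sum *.gz > SHA256SUMS )
}

verify_checksums() {          # verify_checksums DIR
  ( cd "$1" && sha256sum -c --quiet SHA256SUMS )
}
```

Because rsync can silently truncate on a dropped connection, a checksum mismatch is the earliest reliable signal that a backup is unusable.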
3.2 Set Script Permissions:
# Make script executable
sudo chmod +x /root/backup.sh
# Test the script manually before scheduling it (run as root: it reads
# /etc and /var/www and writes to /var/log)
sudo /root/backup.sh
Step 4: Configure Cron Job for Automated Backups
4.1 Setup Cron Schedule:
# Edit crontab
sudo crontab -e
# Add backup schedule (adjust timing based on your needs)
# Run every 6 hours
0 */6 * * * /root/backup.sh >> /var/log/backup/cron.log 2>&1
# Alternative schedules:
# Every 2 hours: 0 */2 * * * /root/backup.sh
# Daily at 2 AM: 0 2 * * * /root/backup.sh
# Weekly on Sunday: 0 2 * * 0 /root/backup.sh
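If a backup run ever takes longer than the cron interval, two copies of the script can overlap and corrupt each other's temp directories. Wrapping the job in `flock` prevents this; a sketch (the lock path is illustrative):

```shell
#!/bin/bash
# Sketch: skip a run if the previous one still holds the lock.
# Crontab usage: 0 */6 * * * flock -n /var/run/backup.lock /root/backup.sh
LOCK_FILE="${LOCK_FILE:-/tmp/backup.lock}"

run_once() {                  # run_once COMMAND [ARGS...]
  # -n: fail immediately instead of queueing behind the running job
  flock -n "$LOCK_FILE" "$@"
}
```

With `-n`, an overlapping invocation exits nonzero right away, which also shows up in the cron log as a skipped run rather than a hung one.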
4.2 Create Monitoring for Backup Jobs:
# Create monitoring script
sudo nano /root/monitor_backup.sh
# Add monitoring logic:
#!/bin/bash
# Monitor backup job status
LOG_FILE="/var/log/backup/cron.log"
BACKUP_DIR="/data/backup" # On local server
# Check if last backup was successful
if [ -f "$LOG_FILE" ]; then
LAST_BACKUP=$(grep "completed successfully" "$LOG_FILE" | tail -1)
if [ -n "$LAST_BACKUP" ]; then
echo "Last successful backup: $LAST_BACKUP"
else
echo "WARNING: No successful backup found in log"
exit 1
fi
else
echo "ERROR: Backup log file not found"
exit 1
fi
# Check backup directory size
if REMOTE_SIZE=$(ssh backup@local-server "du -sh $BACKUP_DIR" 2>/dev/null | cut -f1); then
echo "Remote backup size: $REMOTE_SIZE"
else
echo "ERROR: Cannot connect to backup server"
exit 1
fi
echo "Backup monitoring completed"
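Beyond grepping the log, it helps to measure how old the newest dump actually is. Since rsync may not preserve creation times, the timestamp embedded in the filename (`database_YYYYMMDD_HHMMSS.sql.gz`) is the more reliable source; a sketch:

```shell
#!/bin/bash
# Sketch: age, in whole hours, of the newest dump in a directory, derived
# from the filename timestamp rather than the file's mtime.
backup_age_hours() {          # backup_age_hours DIR
  local newest ts epoch
  newest=$(find "$1" -name 'database_*.sql.gz' -printf '%f\n' | sort | tail -1)
  [ -n "$newest" ] || return 1
  ts=${newest#database_}; ts=${ts%.sql.gz}                 # YYYYMMDD_HHMMSS
  epoch=$(date -d "${ts:0:8} ${ts:9:2}:${ts:11:2}:${ts:13:2}" +%s)
  echo $(( ($(date +%s) - epoch) / 3600 ))
}
# Example alert for a 6-hour schedule:
# [ "$(backup_age_hours /data/backup)" -gt 7 ] && echo "WARNING: backup stale"
```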
Phase 2: Prepare New Server
Step 1: Setup Target Server
1.1 Initial Server Setup:
# Update system
sudo apt update && sudo apt upgrade -y
# Install essential packages
sudo apt install -y curl wget git unzip software-properties-common ufw
# Configure firewall
sudo ufw allow ssh
sudo ufw allow 'Nginx Full'
sudo ufw --force enable
1.2 Install Required Software:
# Add PHP repository
sudo add-apt-repository ppa:ondrej/php -y
sudo apt update
# Install Nginx, PHP, MySQL
sudo apt install -y nginx
sudo apt install -y php8.1-fpm php8.1-cli php8.1-mysql php8.1-xml php8.1-curl php8.1-gd php8.1-mbstring php8.1-zip
sudo apt install -y mysql-server
# Secure MySQL
sudo mysql_secure_installation
1.3 Configure Nginx and PHP:
# Create site configuration
sudo nano /etc/nginx/sites-available/klinik-gunung
# Add server configuration (similar to main migration guide)
# ... (Nginx configuration here)
# Enable site
sudo ln -s /etc/nginx/sites-available/klinik-gunung /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl restart nginx
Step 2: Test New Server Setup
2.1 Basic Functionality Tests:
# Test PHP
php -v
echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/test.php
# (requires the Nginx site from step 1.3 to route .php requests to PHP-FPM)
curl http://localhost/test.php && sudo rm /var/www/html/test.php
# Test MySQL
sudo mysql -u root -p -e "SELECT VERSION();"
# Test Nginx
curl -I http://localhost
Phase 3: Execute Migration
Step 1: Final Backup Before Migration
1.1 Run Emergency Backup:
# On old server
/root/backup.sh
# Verify backup completed (manual runs write to a timestamped log, not cron.log)
tail -n 50 /var/log/backup/backup_*.log
1.2 Create Migration Lock:
# Enable maintenance mode (run in the app directory; the --message flag
# was removed in Laravel 8, use --render or --secret if needed)
php artisan down
# Create migration flag
touch /tmp/migration_in_progress
Step 2: Transfer Latest Backup to New Server
2.1 Identify Latest Backup:
# On local server, find latest backup
ls -la /data/backup/
LATEST_BACKUP=$(ls -t /data/backup/database_*.sql.gz | head -1)
echo "Latest backup: $LATEST_BACKUP"
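Parsing `ls -t` output breaks on unusual filenames; `find` with a sortable mtime prefix is a safer way to pick the newest dump. A sketch:

```shell
#!/bin/bash
# Sketch: newest database dump in a directory without parsing `ls`.
latest_backup() {             # latest_backup DIR
  find "$1" -maxdepth 1 -name 'database_*.sql.gz' -printf '%T@ %p\n' \
    | sort -n | tail -1 | cut -d' ' -f2-
}
# LATEST_BACKUP=$(latest_backup /data/backup)
```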
2.2 Transfer Backup to New Server:
# From local server to new server
rsync -avz --progress /data/backup/ root@new-server-ip:/tmp/migration_backup/
Step 3: Restore Application on New Server
3.1 Restore Database:
# On new server
# Create database
sudo mysql -u root -p -e "CREATE DATABASE klinik_gunung;"
# Restore from backup (pick the newest dump explicitly in case several
# were transferred)
gunzip /tmp/migration_backup/database_*.sql.gz
mysql -u root -p klinik_gunung < "$(ls -t /tmp/migration_backup/database_*.sql | head -1)"
3.2 Restore Application Files:
# Extract application (archive paths are relative to /, so extract from
# the filesystem root, not from inside /var/www/html)
sudo tar -xzf /tmp/migration_backup/application_*.tar.gz -C /
cd /var/www/html
# Install dependencies
composer install --no-dev --optimize-autoloader
npm install && npm run build # If using frontend build
3.3 Restore Configuration:
# Restore configuration files
sudo tar -xzf /tmp/migration_backup/config_*.tar.gz -C /
# Update the environment file; the backup already contains the production
# .env, so only start from the example if it is missing
[ -f .env ] || cp .env.example .env
nano .env # Update database credentials, app URL, etc.
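Editing `.env` interactively is error-prone during a timed cutover; a small helper can apply the needed values non-interactively. A sketch (the keys shown are standard Laravel settings; the values are placeholders for your environment):

```shell
#!/bin/bash
# Sketch: idempotently set KEY=VALUE pairs in a dotenv-style file.
set_env() {                   # set_env FILE KEY VALUE
  local file="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$file"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    echo "${key}=${value}" >> "$file"
  fi
}
# Example:
# set_env .env DB_DATABASE klinik_gunung
# set_env .env APP_URL https://yourdomain.com
```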
3.4 Laravel Setup:
# Only generate an APP_KEY if the restored .env lacks one; a new key
# invalidates data encrypted with the old key
grep -q '^APP_KEY=base64' .env || php artisan key:generate
php artisan storage:link
php artisan config:cache
php artisan route:cache
php artisan view:cache
php artisan migrate --force # Run any pending migrations
Step 4: Pre-Launch Testing
4.1 Comprehensive Testing:
# Test application functionality
php artisan tinker --execute="echo 'Laravel working'; DB::connection()->getPdo();"
# Test critical endpoints
curl -I http://localhost
curl http://localhost/api/health # If you have health check
# Test database
php artisan tinker --execute="User::count(); Patient::count();"
4.2 Load Testing:
# Basic load test
ab -n 100 -c 10 http://localhost/
# Check resource usage
htop
free -h
df -h
Step 5: DNS Cutover and Go-Live
5.1 Update DNS:
# Update DNS A record to point to new server IP
# This should be done through your DNS provider
echo "Update DNS: yourdomain.com -> NEW_SERVER_IP"
5.2 Monitor Transition:
# Monitor logs during transition
tail -f /var/log/nginx/access.log
tail -f /var/log/nginx/error.log
tail -f storage/logs/laravel.log
5.3 Health Monitoring:
# Add a health check endpoint to the app if one does not exist (e.g. a
# route that returns {"status":"ok"} after a DB ping)
# Monitor for 24-48 hours after cutover
watch -n 60 'curl -s http://localhost/api/health | jq .status'
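A single failed probe during cutover is often transient (DNS propagation, a restarting service); retrying with a delay before alerting cuts the noise. A sketch (the `/api/health` URL is an assumption carried over from this guide):

```shell
#!/bin/bash
# Sketch: retry a command up to MAX times, DELAY seconds apart; return 0
# on the first success, 1 if every attempt fails.
retry() {                     # retry MAX DELAY COMMAND [ARGS...]
  local max="$1" delay="$2" i; shift 2
  for (( i = 1; i <= max; i++ )); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}
# Example: retry 5 10 curl -fsS http://localhost/api/health || send_alert
```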
Phase 4: Post-Migration Tasks
Step 1: Continue Automated Backups
1.1 Setup Backup on New Server:
# Copy backup script to new server
scp /root/backup.sh root@new-server-ip:/root/
scp /root/monitor_backup.sh root@new-server-ip:/root/
# Update configuration for new server
ssh root@new-server-ip 'nano /root/backup.sh' # Update LOCAL_SERVER if needed
# Setup cron job on new server
ssh root@new-server-ip 'crontab -e' # Add backup schedule
Step 2: Update Monitoring and Alerts
2.1 Configure Monitoring:
# Install monitoring tools
sudo apt install -y htop iotop ncdu monit
# Setup basic monitoring (write a drop-in file with sudo tee rather than
# overwriting the stock /etc/monit/monitrc)
sudo tee /etc/monit/conf.d/laravel-app > /dev/null << EOF
# Monit configuration for Laravel app
check process nginx with pidfile /var/run/nginx.pid
start program = "/usr/sbin/service nginx start"
stop program = "/usr/sbin/service nginx stop"
check process mysql with pidfile /var/run/mysqld/mysqld.pid
start program = "/usr/sbin/service mysql start"
stop program = "/usr/sbin/service mysql stop"
check process php8.1-fpm with pidfile /var/run/php/php8.1-fpm.pid
start program = "/usr/sbin/service php8.1-fpm start"
stop program = "/usr/sbin/service php8.1-fpm stop"
EOF
sudo systemctl restart monit
Step 3: Security Hardening
3.1 Implement Security Measures:
# Update packages
sudo apt update && sudo apt upgrade -y
# Configure fail2ban
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
# Setup automatic security updates
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades
Step 4: Performance Optimization
4.1 Database Optimization:
# Analyze and optimize tables in the application database
mysql -u root -p klinik_gunung -e "ANALYZE TABLE users, patients, screenings;"
mysql -u root -p klinik_gunung -e "OPTIMIZE TABLE users, patients, screenings;"
# Tune MySQL configuration
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
# Add performance settings based on server specs
4.2 Application Optimization:
# Enable caching
php artisan config:cache
php artisan route:cache
php artisan view:cache
# Setup a queue worker if using queues (run it under Supervisor or a
# systemd unit in production; the old --daemon flag has been removed)
php artisan queue:work
Phase 5: Disaster Recovery and Rollback
Step 1: Setup Rollback Procedures
1.1 Create Rollback Script:
# Create rollback script on old server
cat > /root/rollback.sh << 'EOF'
#!/bin/bash
# Rollback script - restore from local backup
set -e
BACKUP_SERVER="local-server-ip"
BACKUP_USER="backup"
BACKUP_PATH="/data/backup"
echo "Starting rollback procedure..."
# Find latest backup
LATEST_DB=$(ssh $BACKUP_USER@$BACKUP_SERVER "ls -t $BACKUP_PATH/database_*.sql.gz" | head -1)
LATEST_APP=$(ssh $BACKUP_USER@$BACKUP_SERVER "ls -t $BACKUP_PATH/application_*.tar.gz" | head -1)
echo "Restoring from: $LATEST_DB"
# Download latest backup
rsync -avz $BACKUP_USER@$BACKUP_SERVER:$LATEST_DB /tmp/
rsync -avz $BACKUP_USER@$BACKUP_SERVER:$LATEST_APP /tmp/
# Restore database
echo "Restoring database..."
gunzip /tmp/database_*.sql.gz
mysql -u root -p klinik_gunung < "$(ls -t /tmp/database_*.sql | head -1)"
# Restore application
echo "Restoring application..."
# Archive paths are relative to /, so extract from the filesystem root
tar -xzf /tmp/application_*.tar.gz -C /
cd /var/www/html
# Clear caches and restart services
php artisan cache:clear
php artisan config:clear
sudo systemctl restart nginx
sudo systemctl restart php8.1-fpm
echo "Rollback completed successfully!"
EOF
chmod +x /root/rollback.sh
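Before restoring, it is worth confirming that the downloaded dump is intact: `gunzip -t` validates the gzip stream, and a complete mysqldump (made without `--skip-comments`) ends with a `-- Dump completed` trailer. A sketch:

```shell
#!/bin/bash
# Sketch: sanity-check a compressed SQL dump before restoring from it.
verify_dump() {               # verify_dump FILE.sql.gz
  local dump="$1"
  gunzip -t "$dump" 2>/dev/null \
    || { echo "Corrupt archive: $dump" >&2; return 1; }
  # mysqldump writes this trailer unless --skip-comments was used
  gunzip -c "$dump" | tail -n 1 | grep -q 'Dump completed' \
    || { echo "Dump appears truncated: $dump" >&2; return 1; }
}
# Usage inside rollback.sh, before the restore step:
# verify_dump /tmp/database_*.sql.gz || exit 1
```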
Step 2: Test Disaster Recovery
2.1 Regular DR Testing:
# Schedule quarterly DR tests
# 1. Create test environment
# 2. Restore from backup
# 3. Verify functionality
# 4. Document results
Monitoring and Maintenance
Automated Monitoring
1. Backup Health Monitoring:
# Create daily monitoring script
cat > /etc/cron.daily/backup_check << 'EOF'
#!/bin/bash
BACKUP_SERVER="local-server-ip"
BACKUP_USER="backup"
BACKUP_PATH="/data/backup"
LOG_FILE="/var/log/backup/health_check.log"
# Check backup freshness
LATEST_BACKUP=$(ssh $BACKUP_USER@$BACKUP_SERVER "find $BACKUP_PATH -name 'database_*.sql.gz' -mtime -1 | wc -l")
if [ "$LATEST_BACKUP" -eq 0 ]; then
echo "$(date): WARNING - No recent backup found!" >> $LOG_FILE
# Send alert (configure email/slack/etc)
else
echo "$(date): Backup health check passed" >> $LOG_FILE
fi
EOF
chmod +x /etc/cron.daily/backup_check
Performance Monitoring
1. Setup Performance Baselines:
# Monitor key metrics
# - Response times
# - Database query performance
# - Server resource usage
# - Backup completion times
Troubleshooting
Common Issues and Solutions
Issue: Backup Script Fails
# Check script permissions
ls -la /root/backup.sh
# Test manual execution
sudo -u backup /root/backup.sh
# Check SSH connectivity
sudo -u backup ssh backup@local-server "echo 'Connection OK'"
Issue: Database Backup Too Large
# Pipe the dump straight through gzip (note: mysqldump's --compress only
# compresses the client-server protocol, not the output file)
mysqldump --single-transaction klinik_gunung | gzip > backup.sql.gz
# Exclude rebuildable data; consider binlog-based incremental backups
Issue: Transfer Speed Slow
# Enable in-transit compression (-z is the same flag as --compress)
rsync -avz source/ user@host:dest/
# Schedule during off-peak hours
# Consider using different transfer method
Issue: SSH Connection Refused
# Check SSH service
sudo systemctl status ssh
# Verify SSH keys
sudo -u backup ssh -v backup@local-server
# Check firewall rules
sudo ufw status
Best Practices
Backup Strategy
- Multiple Backup Types: Full + Incremental
- Offsite Storage: Local server + Cloud storage
- Retention Policy: 30 days for dailies, 1 year for monthlies
- Encryption: Encrypt sensitive backups
- Testing: Regular restore testing
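The retention policy above (30 days of dailies, monthlies for a year) can be implemented with a filename-aware prune instead of the flat `-mtime +30 -delete` used in the backup script. A sketch, assuming the `database_YYYYMMDD_HHMMSS.sql.gz` naming from this guide:

```shell
#!/bin/bash
# Sketch: keep all dumps for 30 days, and first-of-month dumps for ~1 year.
prune_backups() {             # prune_backups DIR
  local f day
  find "$1" -name 'database_*.sql.gz' -mtime +30 | while read -r f; do
    day=$(basename "$f" | cut -c16-17)    # DD from database_YYYYMMDD_...
    [ "$day" = "01" ] && continue         # keep the monthly anchor
    rm -f "$f"
  done
  # drop monthly anchors older than a year
  find "$1" -name 'database_??????01_*.sql.gz' -mtime +365 -delete
}
```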
Security Considerations
- SSH Key Management: Regular key rotation
- Access Control: Least privilege principle
- Encryption: Use encrypted connections
- Monitoring: Log all backup activities
Performance Optimization
- Compression: Use appropriate compression levels
- Scheduling: Run backups during low-usage periods
- Parallel Processing: Backup database and files simultaneously
- Resource Management: Monitor backup impact on production
This comprehensive migration guide with automatic backup ensures maximum safety and minimal downtime during your Laravel application migration. The multi-phase approach provides multiple safety nets and rollback capabilities.