Disaster Backup Guide
To be honest, these days, with the constant sound of explosions echoing from all corners of the country, I don’t feel well—and writing a new blog post is the last thing on my mind.
But given the terrible situation we’re in and the urgent need to create backups of data stored on servers—or even keep them locally—I decided to publish this post. I hope it can be a small help in these circumstances.
In this post, I’ll cover the following topics. Let me know if you need anything else so I can add it:
- PostgreSQL
- MySQL
- MongoDB
- Elasticsearch
- File transfer using rsync
These are just examples for each case, and I’ve tried to avoid unnecessary explanations. You certainly already know, for instance, never to hard-code sensitive information like usernames and passwords in your scripts.
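As a small sketch of one alternative (the file path below is just a placeholder), you can keep the credentials in a separate environment file that only the backup user can read, and source it at the top of each script instead of hard-coding the values:
# Example contents of /etc/backup/backup.env (placeholder path; chmod 600)
#   DB_USER="real_username"
#   DB_PASS="real_password"

# At the top of each backup script:
set -a                          # export every variable sourced below
source /etc/backup/backup.env
set +a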
PostgreSQL Backup using pg_dump
#!/bin/bash
# PostgreSQL backup script
DB_HOST="localhost"
DB_USER="your_username"
DB_PASS="your_password"
DB_NAME="your_database"
BACKUP_DIR="/backup/postgres"
mkdir -p "$BACKUP_DIR"
export PGPASSWORD="$DB_PASS"
# Single database backup
pg_dump -h "$DB_HOST" -U "$DB_USER" -F c -b -v -f "$BACKUP_DIR/${DB_NAME}_$(date +%Y%m%d).dump" "$DB_NAME"
# All databases backup (alternative)
pg_dumpall -h "$DB_HOST" -U "$DB_USER" | gzip > "$BACKUP_DIR/all_databases_$(date +%Y%m%d).sql.gz"
unset PGPASSWORD
echo "PostgreSQL backup completed successfully."
MySQL Backup using mysqldump
#!/bin/bash
# MySQL backup script
DB_HOST="localhost"
DB_USER="your_username"
DB_PASS="your_password"
DB_NAME="your_database"
BACKUP_DIR="/backup/mysql"
mkdir -p "$BACKUP_DIR"
# Single database backup
mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" --single-transaction --routines --triggers "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_$(date +%Y%m%d).sql.gz"
# All databases backup (alternative)
mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" --all-databases --single-transaction --routines --triggers | gzip > "$BACKUP_DIR/all_databases_$(date +%Y%m%d).sql.gz"
echo "MySQL backup completed successfully."
MongoDB Backup
#!/bin/bash
# MongoDB backup script
DB_HOST="localhost"
DB_USER="your_username"
DB_PASS="your_password"
DB_NAME="your_database"
BACKUP_DIR="/backup/mongodb"
mkdir -p "$BACKUP_DIR"
# Full dump of a single database (mongodump writes BSON files plus metadata)
mongodump --host "$DB_HOST" --username "$DB_USER" --password "$DB_PASS" --db "$DB_NAME" --out "$BACKUP_DIR/mongodb_$(date +%Y%m%d)"
echo "MongoDB backup completed successfully."
Elasticsearch Backup
#!/bin/bash
# Config
ES_HOST="localhost"
ES_USER="your_username"
ES_PASS="your_password"
BACKUP_DIR="/backup/elasticsearch"
SCROLL_TIMEOUT="1m"
BATCH_SIZE=1000
# Get list of all indices
INDICES=$(curl -s -u "$ES_USER:$ES_PASS" "$ES_HOST:9200/_cat/indices?h=index")
mkdir -p "$BACKUP_DIR"
for INDEX in $INDICES; do
  echo "Backing up $INDEX..."
  INDEX_BACKUP_FILE="$BACKUP_DIR/${INDEX}_data.json"
  > "$INDEX_BACKUP_FILE" # empty or create

  # Start scroll
  RESPONSE=$(curl -s -u "$ES_USER:$ES_PASS" -X POST "$ES_HOST:9200/$INDEX/_search?scroll=$SCROLL_TIMEOUT&size=$BATCH_SIZE" \
    -H 'Content-Type: application/json' \
    -d '{"query": {"match_all": {}}}')
  echo "$RESPONSE" > "$BACKUP_DIR/${INDEX}_scroll_init.json"
  SCROLL_ID=$(echo "$RESPONSE" | jq -r '._scroll_id')
  HITS=$(echo "$RESPONSE" | jq '.hits.hits')
  echo "$HITS" | jq -c '.[]' >> "$INDEX_BACKUP_FILE"

  # Continue scrolling until no more hits
  while true; do
    RESPONSE=$(curl -s -u "$ES_USER:$ES_PASS" -X GET "$ES_HOST:9200/_search/scroll" \
      -H 'Content-Type: application/json' \
      -d "{\"scroll\": \"$SCROLL_TIMEOUT\", \"scroll_id\": \"$SCROLL_ID\"}")
    HITS=$(echo "$RESPONSE" | jq '.hits.hits')
    COUNT=$(echo "$HITS" | jq 'length')
    if [[ "$COUNT" -eq 0 ]]; then
      break
    fi
    echo "$HITS" | jq -c '.[]' >> "$INDEX_BACKUP_FILE"
    SCROLL_ID=$(echo "$RESPONSE" | jq -r '._scroll_id')
  done

  echo "Finished backing up $INDEX"
done
echo "hElasticsearch backup completed successfully."
For parsing JSON, I’ve used the jq tool. If you’re using Ubuntu, you can install it with:
sudo apt install jq
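If you ever need to push one of these JSON dumps back into a cluster, the saved hits can be reshaped into _bulk actions with jq. This is only a rough sketch: it assumes the target index already exists with the right mappings and that the file is small enough to send in a single request; for large indices you would need to split it into chunks:
INDEX="your_index"   # the index you want to restore
jq -c '{"index": {"_index": ._index, "_id": ._id}}, ._source' "$BACKUP_DIR/${INDEX}_data.json" \
  | curl -s -u "$ES_USER:$ES_PASS" -X POST "$ES_HOST:9200/_bulk" \
      -H 'Content-Type: application/x-ndjson' --data-binary @-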
Transferring Files with rsync
# Keep retrying until the transfer succeeds; --partial keeps partially
# transferred files so an interrupted transfer can resume where it stopped.
until rsync -avzP --partial source_dir/ user@ip:destination_dir/; do
  echo "Retrying in 60 seconds..."
  sleep 60
done
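Since part of the goal is to keep a copy locally as well, the same retry loop works in the other direction, pulling the backup directory from the server down to your own machine (the host and paths are placeholders):
until rsync -avzP --partial user@ip:/backup/ /local/backup/; do
  echo "Retrying in 60 seconds..."
  sleep 60
done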