The Way to Your Own Cloud (Part 5) - DIY Secure Backups
For a cloud system, not only smooth and secure accessibility is crucial, but above all the question: What happens in the worst case? For example, if a hard drive fails, you must be certain that your data is not lost—and that you can reliably restore it.
Fortunately, with Bash scripts and automated, scheduled execution, the topic of backups becomes very manageable.
Today, we’ll go step by step through building a modular backup system—using the already installed PostgreSQL database as an example.
Modular Backup System with Bash
Bash scripts have established themselves as all-rounders in the Linux world.
For your own cloud, you can use them, among other things, to implement regular backups.
Since this series covers several services that should be backed up, it makes sense to design the system in a modular way.
Before you get started, make sure you’re working as the system administrator (root):
sudo su
First, create the main folder for your backups:
mkdir -p /opt/backup
For the actual backups, log files, and plugin scripts, you’ll need separate subfolders:
mkdir -p /opt/backup/{backups,logs,plugins}
The heart of the system will be the run.sh script in the /opt/backup directory. Create it as follows:
touch /opt/backup/run.sh
chmod +x /opt/backup/run.sh
nano /opt/backup/run.sh
Then enter the following content into the script:
#!/bin/bash
NOW=$(date '+%Y-%m-%d_%H-%M-%S')
BACKUP_DIR="/opt/backup/backups"
PLUGIN_DIR="/opt/backup/plugins"
LOG_DIR="/opt/backup/logs"
LOG_FILE="$LOG_DIR/backup_$NOW.log"
OUTPUT_DIR="$BACKUP_DIR/$NOW"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S.%3N')] $*" | tee -a "$LOG_FILE"
}
makeDirectory() {
local dir="$1"
mkdir -p "$dir" || { log "[ERROR] Could not create directory $dir"; exit 1; }
}
# optional: environment variables stored in .env file
ENV_FILE="/opt/backup/.env"
if [ -f "$ENV_FILE" ]; then
log "Loading .env file from $ENV_FILE ..."
set -a
source "$ENV_FILE" || { log "[ERROR] Could not read .env file from $ENV_FILE"; exit 1; }
set +a
fi
log "[INFO] Starting backup ..."
log "[INFO] Setting up output folder in $OUTPUT_DIR ..."
makeDirectory "$OUTPUT_DIR"
for plugin in "$PLUGIN_DIR"/*.sh; do
if [ -x "$plugin" ]; then
PLUGIN_NAME="$(basename "$plugin" .sh)"
PLUGIN_OUTPUT_DIR="$OUTPUT_DIR/$PLUGIN_NAME"
log "[INFO] Running $(basename "$plugin") ..."
log "[INFO] Will output all backups of $PLUGIN_NAME into $PLUGIN_OUTPUT_DIR ..."
makeDirectory "$PLUGIN_OUTPUT_DIR"
"$plugin" "$PLUGIN_OUTPUT_DIR" | tee -a "$LOG_FILE"
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
log "[ERROR] Error in plugin $(basename "$plugin")"
fi
else
log "[WARN] Cannot run plugin: $(basename "$plugin")"
fi
done
log "[INFO] Cleaning up old backups in $BACKUP_DIR ..."
find "$BACKUP_DIR/" -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;
log "[INFO] Cleaning up log files in $LOG_DIR ..."
find "$LOG_DIR/" -type f -mtime +14 -exec rm -f {} \;
log "[INFO] Finished backup"
Each time it starts, the script creates a unique timestamp and sets up both a dedicated log file and a fresh backup directory for the current run. Every activity—from start to finish—is logged with a precise timestamp down to milliseconds, so every step and any errors can be traced later.
The script first checks whether an optional .env file exists and loads any environment variables it contains, such as backup paths or special settings. This way, the behavior stays flexible and you don’t need to change the code itself.
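Such a .env file might, for example, export connection settings that a plugin picks up. A hypothetical sketch (none of these variables are required by the scripts in this article; PGHOST and PGPORT are standard libpq variables that psql and pg_dump honor):

```shell
# /opt/backup/.env — hypothetical example; the backup scripts work without it.
PGHOST=localhost
PGPORT=5432
```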
Next, the script runs through all backup plugins found in the plugin directory. Each plugin is called and can place its backups right where they belong.
After all plugins have run, the script automatically removes old backup directories and log files older than 14 days. This keeps the system low-maintenance and conserves storage space.
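The retention relies on find’s -mtime +14 filter, which matches entries last modified more than 14 days ago. A quick throwaway demo of the rule (all names and dates here are made up):

```shell
# Simulate one old and one fresh backup directory in a temp location
tmp=$(mktemp -d)
mkdir -p "$tmp/old_run" "$tmp/recent_run"
touch -d '20 days ago' "$tmp/old_run"      # backdate the first directory

# Same cleanup command as in run.sh, pointed at the temp location
find "$tmp" -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;

remaining=$(ls "$tmp")
echo "$remaining"                          # prints: recent_run
rm -rf "$tmp"
```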
As announced, the first plugin is written to automatically back up the already installed PostgreSQL database.
To do this, create a file called postgres.sh in the /opt/backup/plugins directory and make it executable—run.sh skips any plugin that isn’t:
touch /opt/backup/plugins/postgres.sh
chmod +x /opt/backup/plugins/postgres.sh
nano /opt/backup/plugins/postgres.sh
Then insert the following content:
#!/bin/bash
PLUGIN_OUTPUT_DIR="$1"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S.%3N')] $*"
}
DBS=$(sudo -u postgres psql -Atc "SELECT datname FROM pg_database WHERE datistemplate = false;")
for DB in $DBS; do
BACKUP_FILE="${PLUGIN_OUTPUT_DIR}/${DB}.sql.gz"
sudo -u postgres pg_dump "$DB" | gzip > "$BACKUP_FILE"
# Check pg_dump's exit code, not gzip's ($? would only reflect gzip)
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
log "[INFO] Backup $DB was successful"
else
log "[ERROR] Backup $DB failed"
fi
done
This plugin ensures that all PostgreSQL databases on the server are automatically backed up one by one. The directory where backups are stored is passed to the script as an argument by the central run.sh script (and used in the plugin as PLUGIN_OUTPUT_DIR).
The script first uses an SQL query to find all relevant databases. Then it creates a backup for each database, which is compressed and saved as a .sql.gz file in the target directory.
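Backups are only worth as much as a working restore. Since each dump is a plain SQL stream compressed with gzip, restoring means reversing the pipeline. The demo below uses a dummy dump; the commented-out line shows the real restore, where the timestamp and the database name "demo" are placeholders and the target database must already exist:

```shell
# Create a dummy compressed dump and decompress it again
echo 'CREATE TABLE demo (id int);' | gzip > /tmp/demo.sql.gz
sql=$(gunzip -c /tmp/demo.sql.gz)
echo "$sql"    # prints the SQL statement that psql would receive

# Real restore (placeholders: <timestamp> and the database name "demo"):
# gunzip -c /opt/backup/backups/<timestamp>/postgres/demo.sql.gz | sudo -u postgres psql demo
```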
To automate the backup process, it’s best to set up a cron job that runs the script every night at 3 a.m.
Open the crontab for the current user (root):
crontab -e
Add the following line at the end of the file:
0 3 * * * /opt/backup/run.sh
Save the file and close the editor.
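For reference, the five cron fields are minute, hour, day of month, month, and day of week. A couple of hypothetical variants of the schedule:

```shell
# m h dom mon dow  command
0 3 * * *  /opt/backup/run.sh   # every day at 03:00 (as used above)
0 3 * * 0  /opt/backup/run.sh   # alternative: only on Sundays at 03:00
```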
As a precaution, you can restart the cron service once so the new job is picked up:
service cron restart
Conclusion
With the modular backup system, you’ve laid an important foundation for your own cloud.
Thanks to well-designed scripts, clear structures, and automation, you reliably protect your data from loss—without having to rely on big cloud providers or opaque black-box solutions.
The system grows with your requirements: For every new service, you just need to add another plugin, and you’re ready for the future.
Quick & Dirty
# Become root
sudo su
# Create directories
mkdir -p /opt/backup/{backups,logs,plugins}
# Create main script
cat <<'EOF' > /opt/backup/run.sh
#!/bin/bash
NOW=$(date '+%Y-%m-%d_%H-%M-%S')
BACKUP_DIR="/opt/backup/backups"
PLUGIN_DIR="/opt/backup/plugins"
LOG_DIR="/opt/backup/logs"
LOG_FILE="$LOG_DIR/backup_$NOW.log"
OUTPUT_DIR="$BACKUP_DIR/$NOW"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S.%3N')] $*" | tee -a "$LOG_FILE"
}
makeDirectory() {
local dir="$1"
mkdir -p "$dir" || { log "[ERROR] Could not create directory $dir"; exit 1; }
}
# Optional: load environment variables from .env file
ENV_FILE="/opt/backup/.env"
if [ -f "$ENV_FILE" ]; then
log "Loading .env file from $ENV_FILE ..."
set -a
source "$ENV_FILE" || { log "[ERROR] Could not read .env file from $ENV_FILE"; exit 1; }
set +a
fi
log "[INFO] Starting backup ..."
log "[INFO] Setting up output folder in $OUTPUT_DIR ..."
makeDirectory "$OUTPUT_DIR"
for plugin in "$PLUGIN_DIR"/*.sh; do
if [ -x "$plugin" ]; then
PLUGIN_NAME="$(basename "$plugin" .sh)"
PLUGIN_OUTPUT_DIR="$OUTPUT_DIR/$PLUGIN_NAME"
log "[INFO] Running $(basename "$plugin") ..."
log "[INFO] Will output all backups of $PLUGIN_NAME into $PLUGIN_OUTPUT_DIR ..."
makeDirectory "$PLUGIN_OUTPUT_DIR"
"$plugin" "$PLUGIN_OUTPUT_DIR" | tee -a "$LOG_FILE"
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
log "[ERROR] Error in plugin $(basename "$plugin")"
fi
else
log "[WARN] Cannot run plugin: $(basename "$plugin")"
fi
done
log "[INFO] Cleaning up old backups in $BACKUP_DIR ..."
find "$BACKUP_DIR/" -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;
log "[INFO] Cleaning up log files in $LOG_DIR ..."
find "$LOG_DIR/" -type f -mtime +14 -exec rm -f {} \;
log "[INFO] Finished backup"
EOF
# Make main script executable
chmod +x /opt/backup/run.sh
# Create Postgres plugin
cat <<'EOF' > /opt/backup/plugins/postgres.sh
#!/bin/bash
PLUGIN_OUTPUT_DIR="$1"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S.%3N')] $*"
}
DBS=$(sudo -u postgres psql -Atc "SELECT datname FROM pg_database WHERE datistemplate = false;")
for DB in $DBS; do
BACKUP_FILE="${PLUGIN_OUTPUT_DIR}/${DB}.sql.gz"
sudo -u postgres pg_dump "$DB" | gzip > "$BACKUP_FILE"
# Check pg_dump's exit code, not gzip's ($? would only reflect gzip)
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
log "[INFO] Backup $DB was successful"
else
log "[ERROR] Backup $DB failed"
fi
done
EOF
# Make plugin executable
chmod +x /opt/backup/plugins/postgres.sh
# Create a cronjob to run backups every day at 3:00 AM
(crontab -l 2>/dev/null; echo "0 3 * * * /opt/backup/run.sh") | crontab -
# (Optional) Restart cron service
service cron restart