Comment on Question about backup

fuser@quex.cc 1 year ago

For the database, consider a script that does a mysqldump of the entire database, scheduled to run on the system daily or weekly. In the same script, use gpg to encrypt the dump and delete the plain-text original, so you never leave an unencrypted copy of the data anywhere outside the database. You can then either copy the encrypted file into a local folder that you're already backing up, or back it up directly to the remote if you've set that up - bringing it local gives you a staged copy, outside the borg archive and off the original host, in case you need an immediately available backup of your database.
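Roughly like this minimal sketch - the database credentials setup, GPG recipient and paths are placeholders you'd swap for your own:

```sh
#!/bin/sh
# Daily dump-and-encrypt job for cron. Assumes a MySQL account with read
# access configured in ~/.my.cnf and a GPG public key already imported
# for "backups@example.com" (both placeholders).
set -eu

STAMP=$(date +%F)
DUMP="/var/backups/db-${STAMP}.sql"

# Dump everything to a plain-text SQL file.
mysqldump --all-databases --single-transaction > "$DUMP"

# Encrypt it, then remove the unencrypted dump so no plain-text copy
# is left lying around outside the database itself.
gpg --encrypt --recipient backups@example.com "$DUMP"
rm "$DUMP"

# Stage the encrypted file in a folder your regular backup already covers
# (or scp it straight to the remote instead).
cp "${DUMP}.gpg" /srv/backup-staging/
```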

With respect to the 3 separate repos, I would keep them separate unless you have a large amount of duplicated data - borg does not deduplicate across different repos as far as I'm aware. The downside of a single repo is that it's locked during backups, and if you're running separate scripts from each host, the lock files borg creates can go stale when a script doesn't complete. Then one day (probably the day you're trying to restore) you'll find borg hasn't been backing your stuff up, because a stale lock from a failed backup - say one killed by an untimely reboot months ago - has been holding the archive open ever since. I don't recall now why this happens and doesn't self-correct, but I do remember concluding that if deduplication isn't a major factor, it's easier and safer to keep the borg repos separate by host. Deduplication is the only reason to combine them as far as I can tell.
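Separate repos are also simple to script per host - something along these lines (the ssh target, repo path and include paths are made up):

```sh
#!/bin/sh
# One repo per host on the backup box, so a stale lock in one host's repo
# can never block another host's backups. All names/paths are placeholders.
set -eu

export BORG_REPO="ssh://backup@backuphost/./borg/$(hostname)"

# Create a dated archive; {hostname} and {now} are borg's own placeholders.
borg create --stats --compression lz4 \
    ::'{hostname}-{now:%Y-%m-%d}' \
    /etc /home /srv/backup-staging

# Thin out old archives so the repo doesn't grow forever.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```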

When it comes to backup scripts, try to keep everything foolproof and add checks wherever you can: that the script is seeing the expected data, that each step completed successfully, and so on - a couple of examples below. Setting up automatic backups isn't a trivial task, although tools like rclone and borgmatic may simplify it - I haven't used those, just borg on the command line and scp/gpg in shell scripts. Have fun!
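For instance, a few cheap checks you could drop into the sketches above (variable names match those scripts):

```sh
# Bail out on any failed command or unset variable instead of ploughing on.
set -eu

# Refuse to encrypt/ship an empty or missing dump.
[ -s "$DUMP" ] || { echo "dump missing or empty: $DUMP" >&2; exit 1; }

# After the borg run, verify the repository and get told if it fails.
borg check || echo "borg check failed - do not trust last night's backup" >&2
```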
