Comment on Docker Backup Strategy

lazynooblet@lazysoci.al 15 hours ago

I’m lucky enough to run a business that needs a datacenter presence, so most of my home-lab (including Lemmy) is actually hosted on a Dell PowerEdge R740xd in the DC. I can then use the small rack I have at home for off-site backups and some local services.

I treat the entirety of /var/lib/docker as expendable. When creating containers, I make sure any persistent data is bind-mounted from a host directory created just to hold that container’s persistent data. It means docker compose down --rmi all --volumes isn’t destructive.
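In compose terms it looks roughly like this (the myapp service name and /srv/persist paths are made up for the example):

```bash
# Hypothetical layout: each stack keeps its persistent data under /srv/persist/<stack>.
mkdir -p /srv/persist/myapp/data

# Bind mounts (host paths) instead of named volumes, so compose has no
# volumes of its own to destroy.
cat > docker-compose.yml <<'EOF'
services:
  myapp:
    image: nginx:alpine
    volumes:
      - /srv/persist/myapp/data:/usr/share/nginx/html
EOF

docker compose up -d

# This removes containers, images and named volumes, but the bind-mounted
# host directory (and everything in it) survives:
docker compose down --rmi all --volumes
```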

When a container needs a database, I make sure to add an extra read-only user for the backups. And all databases have their container and persistent-volume directory named to a convention so scripts can identify them.
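Assuming Postgres, that extra user looks something like this (the myapp-db container name, backup_ro role and grants are illustrative, not my exact setup):

```bash
docker exec -i myapp-db psql -U postgres -d myapp <<'EOF'
-- read-only account used only by the backup scripts
CREATE ROLE backup_ro WITH LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE myapp TO backup_ro;
GRANT USAGE ON SCHEMA public TO backup_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_ro;
-- cover tables created later, too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_ro;
EOF
```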

The backup strategy is then to back up all non-database persistent directories and dump all SQL databases, including permissions and user accounts. This runs 4 times a day, and the backup target is an NFS share elsewhere.
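The gist of it is a sketch like this (again assuming Postgres, a -db container-name suffix, and made-up paths; not my literal script):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Timestamped target directory on the NFS mount.
DEST="/mnt/backup-nfs/$(date +%Y%m%d-%H%M)"
mkdir -p "$DEST"

# Dump every database container found by the naming convention.
# pg_dumpall includes roles, permissions and user accounts.
for c in $(docker ps --format '{{.Names}}' | grep -- '-db$'); do
    docker exec "$c" pg_dumpall -U postgres | gzip > "$DEST/$c.sql.gz"
done

# Copy the non-database persistent directories.
rsync -a --exclude '*-db/' /srv/persist/ "$DEST/persist/"
```

Cron then fires it every 6 hours, e.g. 0 */6 * * * root /usr/local/bin/docker-backup.sh.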

This is on top of daily BackupPC backups of critical folders, automated Proxmox snapshots of the docker hosts every 20 minutes, daily VM backups via Proxmox Backup Server, and replication to another PBS at home.
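Something as small as this cron entry on the PVE node would do the 20-minute snapshots (VMID 101 is a placeholder, qm snapshot is the stock Proxmox CLI, and pruning old snapshots is left to a separate job):

```bash
# /etc/cron.d/docker-host-snapshots  (note: % must be escaped in cron)
*/20 * * * * root qm snapshot 101 auto-$(date +\%Y\%m\%d\%H\%M) --description "rolling"
```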

I also try to use S3 where possible (Seafile and Lemmy are the two main uses), which is hosted in a container on a Synology RS2423RP+. Synology Hyper Backup then performs a backup overnight to the Synology RS822+ I have at home.
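A containerized S3 endpoint of that sort can be as simple as a MinIO container (assuming MinIO as the server; the name, ports and /volume1/s3 path here are placeholders):

```bash
# Hypothetical MinIO container standing in for the S3 endpoint.
# /volume1/s3 is a made-up Synology volume path; change the credentials.
docker run -d --name s3 \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD='use-a-long-random-secret' \
  -v /volume1/s3:/data \
  minio/minio server /data --console-address ':9001'
```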

Years ago I fucked up, didn’t have backups, and lost all the photos of my son’s early years. Backups are super important.
