Comment on Where to start with backups?
fizzle@quokk.au 2 days ago
My docker files, configs, and volumes are all kept in a structure like:
/srv
- /docker
- - /syncthing
- - - /compose.yml
- - - /sync-volume
- - /traefik
- - - /compose.yml
[...]
I just back up /srv/docker, but I exclude some subfolders, e.g. for databases, where regular dumps are created instead. Currently the compressed, deduplicated repos consume ~350 GB.
I use borgmatic because you do one full backup and thereafter everything is incremental, so minimal bandwidth.
I keep one backup repo on the server itself in /srv/backup - yes this will be prone to failure of that server but it’s super handy to be able to restore from a local repo if you just mess up a configuration or version upgrade or something.
I keep two other backup repos in two other physical locations, and one repo air gapped.
For example I rent a server from OVH in a Sydney data centre, there’s one repo in /srv/backup on that server, one on OVH’s storage service, one kept on my home server, and one on a removable drive I update periodically.
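For reference, the borgmatic side of a setup like this can be sketched roughly as below. All paths, hostnames, labels, and retention numbers are invented for illustration, not the actual setup; recent borgmatic releases accept this flat config layout, and `borgmatic config generate` will produce a fully commented template to start from.

```yaml
# Hypothetical /etc/borgmatic/config.yaml, loosely matching the setup above
source_directories:
    - /srv/docker

repositories:
    - path: /srv/backup/borg                              # local repo on the server itself
      label: local
    - path: ssh://user@home.example.net/./backups/borg    # home server
      label: home
    - path: ssh://user@storage.example.net/./backups/borg # provider storage service
      label: offsite

# Skip live database files; their regular dumps get backed up instead
exclude_patterns:
    - /srv/docker/*/db-data

keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

With multiple entries under `repositories`, a single `borgmatic create` run backs up to every repo in turn, which is how the "several physical locations" part stays low-effort.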
All repos are encrypted except the air-gapped one. That one has instructions intended for someone to use if I die or am incapacitated, so it has my master password for my password database, ssh keys, everything. We have a physical safe at home, so that’s where it lives.
wabasso@lemmy.ca 2 days ago
Doesn’t rsync do incremental? I keep hearing about borg but not sure I want to commit to learning a new app
fizzle@quokk.au 2 days ago
Yes but rsync isn’t a “backup”.
My spouse and I inadvertently deleted a heap of stuff last month. Rsync would happily reflect that change on the remote. Borg will store the change, but you can still restore from an earlier point in time.
wabasso@lemmy.ca 1 day ago
Right makes sense. I’ve been using rdiff-backup for that. I should compare how the two perform. Do you get the impression borg is good at getting the diffs of a lot of different file types?
fizzle@quokk.au 20 hours ago
Deduplication based on content-defined chunking is used to reduce the number of bytes stored: each file is split into a number of variable length chunks and only chunks that have never been seen before are added to the repository. A chunk is considered duplicate if its id_hash value is identical. A cryptographically strong hash or MAC function is used as id_hash, e.g. (hmac-)sha256.
To deduplicate, all the chunks in the same repository are considered, no matter whether they come from different machines, from previous backups, from the same backup or even from the same single file.
Compared to other deduplication approaches, this method does NOT depend on:
- file/directory names staying the same: so you can move your stuff around without killing the deduplication, even between machines sharing a repo.
- complete files or time stamps staying the same: if a big file changes a little, only a few new chunks need to be stored - this is great for VMs or raw disks.
- the absolute position of a data chunk inside a file: stuff may get shifted and will still be found by the deduplication algorithm.

This is what their docs say. Not sure what you mean about different file types, but this seems fairly agnostic?
I actually didn’t realise that first point, as in you can move folders and the chunks will still be deduplicated.
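The content-defined chunking idea is easy to sketch. Below is a toy Python version - not borg’s actual buzhash-based chunker; the window size, cut condition, and min/max chunk sizes are all made up - showing why inserting data near the start of a file only creates a couple of new chunks instead of re-storing everything:

```python
import hashlib
import random

def split(data: bytes, window: int = 16,
          min_size: int = 64, max_size: int = 4096) -> list[bytes]:
    """Cut data into variable-length chunks at content-defined boundaries.

    A boundary is declared wherever a hash of the previous `window` bytes
    matches a fixed pattern, so cut points depend on local content rather
    than absolute file offsets.
    """
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        at_boundary = False
        if size >= min_size and i + 1 >= window:
            h = hashlib.sha256(data[i + 1 - window:i + 1]).digest()
            at_boundary = h[-1] == 0  # hits at roughly 1 in 256 positions
        if at_boundary or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def chunk_ids(chunks: list[bytes]) -> list[str]:
    # borg uses a keyed MAC (e.g. HMAC-SHA256) as id_hash;
    # plain SHA-256 stands in for it here.
    return [hashlib.sha256(c).hexdigest() for c in chunks]

rng = random.Random(0)
original = bytes(rng.getrandbits(8) for _ in range(50_000))
shifted = b"ten bytes!" + original  # simulate inserting data at the front

ids_a = chunk_ids(split(original))
ids_b = chunk_ids(split(shifted))
new_chunks = set(ids_b) - set(ids_a)  # only these need to be stored again
```

Because the cut condition only looks at a sliding window of content, the boundaries re-synchronise right after the insertion point, so almost every chunk id is already in the repo. The real chunker uses a rolling hash so it doesn’t rehash a full window per byte, but the dedup behaviour is the same idea.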
mapleseedfall@lemmy.world 2 days ago
Do you recommend moving an existing volume to this new structure?
fizzle@quokk.au 2 days ago
A docker volume?
I only use bind mounts, and in that case you can put them where you like and move them while they’re not mounted by a running container.
Docker volume locations are managed by docker, and I don’t use those, so they’re not part of the above plan.
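To illustrate the difference (image name and container path are just examples here, not a recommendation): a bind mount names a host path explicitly, while a named volume leaves the location up to docker.

```yaml
services:
  syncthing:
    image: syncthing/syncthing
    volumes:
      # bind mount: host path is explicit, sits under /srv/docker,
      # so it's covered by the backup of that tree
      - /srv/docker/syncthing/sync-volume:/var/syncthing
      # named volume alternative: docker stores it under its own
      # data root (typically /var/lib/docker/volumes), outside /srv/docker
      # - sync-data:/var/syncthing

# volumes:
#   sync-data:
```

That’s why the /srv/docker layout works for backups: everything a container needs to persist lives under one tree you control.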
irmadlad@lemmy.world 2 days ago
Over the years, I have gravitated to keeping docker compose files, configs, et al. in structured directories, in lieu of docker just splattering the HDD willy-nilly with configs anywhere and everywhere. It sure makes problem solving much easier when you can go directly to where each component is, instead of spending 30 minutes trying to locate where docker put everything.