Comment on Windows users keep losing files to OneDrive, and many don't know why
b_tr3e@feddit.org 11 hours ago
…and don’t forget re-testing your backups regularly. I had a really good backup strategy on my Loonix machine. Automatic (or it won’t be done), tested, foolproof. When I somehow crashed a somewhat complex encrypted LVM array while swapping HDDs for SSDs, I had to recover from backup. Unfortunately I had become a better fool than I was when I set up backup4l. I had changed the compression algo, made a tiny mistake in the config, and failed to realize that for six months I had been storing empty backups every day. Ouch.
suicidaleggroll@lemmy.world 9 hours ago
Notifications will go a long way toward helping with that. Check all assumptions, check all exit codes, notify and stop if anything is amiss. I also have my backup script notify on success, with the time it took to back up and the size and delta size (versus the previous backup) of the resulting backup. 99% of errors get caught by the checks and I get a failure notification. But just in case something silently goes wrong, the size of the backup (too big or too small) is another obvious indicator that something went wrong.
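Something along these lines, sketched in Python rather than my actual shell script; the backup command, paths, and notify() target are all placeholders:

```python
#!/usr/bin/env python3
"""Sketch of a backup wrapper: check exit codes, report duration, size, and delta."""
import json
import subprocess
import sys
from pathlib import Path
from time import monotonic

ARCHIVE = Path("/backups/daily.tar.xz")   # hypothetical archive path
STATE = Path("/backups/.last_size.json")  # hypothetical state file for the previous size

def notify(subject: str, body: str) -> None:
    # Stand-in for mail/ntfy/whatever you actually use.
    print(f"[{subject}] {body}")

start = monotonic()
result = subprocess.run(
    ["tar", "-cJf", str(ARCHIVE), "/etc", "/home"],  # hypothetical backup command
    capture_output=True, text=True,
)
if result.returncode != 0:
    # Any non-zero exit: notify and stop, don't pretend it worked.
    notify("backup FAILED", result.stderr.strip())
    sys.exit(1)

size = ARCHIVE.stat().st_size
prev = json.loads(STATE.read_text())["size"] if STATE.exists() else size
STATE.write_text(json.dumps({"size": size}))

elapsed = monotonic() - start
notify("backup OK", f"{elapsed:.0f}s, {size} bytes, delta {size - prev:+d} bytes")
```

The delta is the part that catches silent failures: a backup that suddenly shrinks to nothing (or balloons) stands out in the notification even when every exit code was zero.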
b_tr3e@feddit.org 8 hours ago
I know. I had just become lazy enough to take the daily notification’s subject line (“backup4l has run successfully”) as evidence that everything was OK. If I had looked inside the bloody mail I’d have noticed that the backup’s size was 0 B the whole time, because my hand-rolled XZ compression script failed to add any data to the archives it “successfully” created. That’s what I meant by “re-testing”; I should have written “re-validate”. My unforgivable fault was not looking directly at the generated archives after changing the compression from bz2 to xz. Which was pretty pointless anyway, as it turned out.
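For anyone wanting to avoid my mistake: the check that would have caught it is only a few lines. A rough Python sketch, assuming a tar.xz archive at a made-up path:

```python
#!/usr/bin/env python3
"""Sketch: verify a freshly written backup archive actually contains data."""
import sys
import tarfile
from pathlib import Path

archive = Path("/backups/daily.tar.xz")  # hypothetical path

# A 0-byte (or implausibly small) file means the compressor wrote nothing.
if not archive.exists() or archive.stat().st_size < 1024:
    sys.exit(f"{archive}: missing or suspiciously small")

# Actually open and list it; an empty or truncated xz stream raises here.
try:
    with tarfile.open(archive, mode="r:xz") as tar:
        members = tar.getmembers()
except tarfile.TarError as err:
    sys.exit(f"{archive}: unreadable ({err})")

if not members:
    sys.exit(f"{archive}: readable but contains no files")

print(f"{archive}: OK, {len(members)} entries")
```

Run that right after the backup (and especially after changing the compression setup) and an archive full of nothing never survives six months unnoticed.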