pete@social.cyano.at 1 year ago
Ironically, if I had had more services running in Docker I might not have experienced such a fundamental outage. Since Docker services usually spin up their own exclusive database engine, you kind of "roll the dice" on data corruption with each service individually. Thing is, I don't really believe in burning CPU cycles on redundant database services. And since many of my services are very long-serving, they were set up from source and all funneled into a single, central and busy database server - so if that one suffers a sudden outage (for instance a power failure), all kinds of corruption and despair can follow. ;-)
Guess I should really look into a small UPS and automated shutdown. On top of better backup management, of course! Always the backups.
TCB13@lemmy.world 1 year ago
Why so much? A simple daily timer that runs mysqldump, plus a backup of the resulting dump, would be enough for most people. Using a solid OS (Debian) and a filesystem such as BTRFS, ZFS or XFS will also save you from power-loss-related corruption. Why do people go SO overkill with everything?
pete@social.cyano.at 1 year ago
At least weekly mysqlcheck + mysqldump, and some form of periodic off-machine storage of the dumps, is something I'll surely take to heart after this lil' fiasco ;-) Sound advice, thank you!
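For anyone wanting to follow the daily-dump advice above, here's a minimal sketch using a systemd timer. It assumes a Debian-style host where credentials live in root's ~/.my.cnf and /var/backups/mysql exists; all unit names and paths are illustrative, not a prescribed layout.

```ini
# /etc/systemd/system/db-backup.service  (name and paths are illustrative)
[Unit]
Description=Nightly mysqldump of all databases

[Service]
Type=oneshot
# --single-transaction takes a consistent InnoDB snapshot without long table locks;
# %% is the systemd escape for a literal % in the date format
ExecStart=/bin/sh -c 'mysqldump --single-transaction --all-databases | gzip > /var/backups/mysql/all-$(date +%%F).sql.gz'
```

```ini
# /etc/systemd/system/db-backup.timer
[Unit]
Description=Run db-backup daily

[Timer]
OnCalendar=daily
# Catch up on a missed run if the machine was off at the scheduled time
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now db-backup.timer`, then handle the off-machine part separately, e.g. an rsync or rclone job that copies /var/backups/mysql elsewhere.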
ThorrJo@lemmy.sdf.org 1 year ago
Personally I’d go for as big a UPS as I could afford, but I serve some public-facing stuff from my homelab and I live in an area with outdated infrastructure and occasional ice storms. I currently have a small UPS and have been too tired/overwhelmed to set up automated shutdown yet. It’s not too hard though, I’ve done it before. And even without that in place, my small UPS has kept things going through a bunch of sub-10-minute outages.
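The automated-shutdown piece mentioned above can be done with NUT (Network UPS Tools, the `nut` package on Debian). A rough sketch, assuming a USB-connected UPS that speaks the common HID protocol - the UPS name and password here are placeholders, and details vary by model and NUT version:

```ini
# /etc/nut/ups.conf  ("myups" is an arbitrary name; usbhid-ups covers many consumer UPSes)
[myups]
    driver = usbhid-ups
    port = auto
```

```ini
# /etc/nut/upsmon.conf (excerpt)
# When the UPS reports "on battery + low battery", upsmon runs SHUTDOWNCMD
MONITOR myups@localhost 1 upsmon somepass primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

You'd also set MODE=standalone in /etc/nut/nut.conf for a single-machine setup; `upsc myups` is handy for checking that the driver actually sees the UPS.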
emuspawn@orbiting.observer 1 year ago
And if the power in your area sucks, the power conditioning even a good small UPS provides is invaluable.