Comment on Reverse Proxy: a single point of failure in my lab

Decipher0771@lemmy.ca ⁨1⁩ ⁨week⁩ ago

You’re talking high availability design. As someone else said, there’s almost always a single point of failure, but there are ways to mitigate depending on the failures you want to protect against and how much tolerance you have for recovery time. Instant/transparent recovery IS possible, you just have to think through your failure and recovery tree.

Proxy failures are kinda the simplest to handle if you assume all the backends for storage/compute/network connectivity are out of scope. You set up two (or more) separate VMs with the same configuration and float a virtual IP between them, and point your port forwards at that VIP. If one VM goes down, the VIP migrates to whichever VM is still up and your clients never know the difference. Look up Keepalived, that’s the standard way to do it on Linux.
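As a rough sketch of what that looks like, here’s a minimal Keepalived config for the primary node (interface name, router ID, password, and VIP are all placeholders you’d swap for your own):

```
# /etc/keepalived/keepalived.conf on the primary proxy VM
vrrp_instance VI_1 {
    state MASTER            # the standby VM uses "state BACKUP"
    interface eth0          # NIC the VIP should live on
    virtual_router_id 51    # must match on both VMs
    priority 100            # standby uses a lower value, e.g. 50
    advert_int 1            # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass changeme  # same shared secret on both VMs
    }
    virtual_ipaddress {
        192.168.1.100/24    # the floating VIP your port forwards target
    }
}
```

The standby VM gets the same file with `state BACKUP` and a lower `priority`; when the master stops sending VRRP heartbeats, the backup claims the VIP within a few seconds.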

But then you start down a rabbit hole. Is your storage redundant? Your network connectivity? Power? All of those can be made redundant too, but it will cost you time, and likely money for hardware. It’s all doable, you just have to decide how much it’s worth to you.

Most home labbers, I suspect, will just accept the five minutes it takes to reboot a VM and call it a day. Short downtime is easier to handle, but there are definitely ways to make your home setup fully redundant and highly available. At least until a meteor hits your house, anyway.
