Comment on Reverse Proxy: a single point of failure in my lab
Decipher0771@lemmy.ca 1 week ago
You’re talking high availability design. As someone else said, there’s almost always a single point of failure somewhere, but there are ways to mitigate it depending on which failures you want to protect against and how much recovery time you can tolerate. Instant/transparent recovery IS possible, you just have to think through your failure and recovery tree.
Proxy failures are kinda the simplest to handle if you’re assuming all the backends for storage/compute/network connectivity are out of scope. You set up two (or more) separate VMs with the same configuration and float a virtual IP between them that your port forwards point at. If one VM goes down, the VIP migrates to whichever VM is still up and your clients never know the difference. Look up Keepalived, that’s the standard way to do it on Linux (sketch below).
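To make that concrete, here’s a minimal Keepalived sketch of the two-VM VIP setup. The interface name, VIP, and password are placeholders, swap in whatever your LAN actually uses:

```
# /etc/keepalived/keepalived.conf on the primary proxy VM
vrrp_instance PROXY_VIP {
    state MASTER               # use BACKUP on the second VM
    interface eth0             # LAN-facing NIC (placeholder)
    virtual_router_id 51       # must match on both VMs
    priority 150               # set lower (e.g. 100) on the backup
    advert_int 1               # VRRP heartbeat every second
    authentication {
        auth_type PASS
        auth_pass changeme     # same shared secret on both VMs
    }
    virtual_ipaddress {
        192.168.1.200/24       # the VIP your port forwards point at
    }
}
```

Forward 80/443 to 192.168.1.200 instead of either VM’s real address; when the MASTER stops sending adverts, the backup claims the VIP within a few seconds.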
But then you start down a rabbit hole. Is your storage redundant? The network connectivity? Power? All of those can be made redundant too, but it will cost you time and likely money for hardware. It’s all doable, you just have to decide how much it’s worth to you.
Most home labbers, I suspect, will just accept the five minutes it takes to reboot a VM and call it a day. Short downtime is easier to handle, but there are definitely ways to make your home setup fully redundant and highly available. At least until a meteor hits your house, anyway.
thisisnotausername@lemmy.dbzer0.com 1 week ago
The deeper I go down this rabbit hole, the more I understand this, and I realise now that I went in with practically zero knowledge of the topic. It was so frustrating to get my “HA” proxy working on LAN with replicated containers, DNS and shared storage, hours sunk into getting permissions to work, only to realise “oh god, this only works on LAN” when my certs failed to renew (with an HTTP challenge, Let’s Encrypt has to reach the proxy from the internet).
I do not think I need this. Truth is, the lab is in a state where I have most things I want[need] working very well, and this is a fun nice-to-have to learn some new things.
Dempf@lemmy.zip 1 week ago
IIRC there are a couple of different ways with Caddy to replicate the letsencrypt config between instances, but I never quite got that working. I didn’t find a ton of value in an HA reverse proxy config anyway, since almost all of my services run on the same machine, and usually the proxy is offline because that machine is offline. The more important thing was HA DNS, and I got that working pretty well with keepalived (sketch below). The redundant DNS server just runs on a $100 mini PC. Works well enough for me.
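Not my exact config, but the HA DNS piece looks roughly like this with keepalived: a health check demotes a node whose resolver stops answering, so the DNS VIP moves to the box that still works. The interface, VIP, and check command are placeholders:

```
# Mark this node unhealthy if the local resolver stops answering
vrrp_script chk_dns {
    script "/usr/bin/dig +short +time=2 +tries=1 @127.0.0.1 example.com"
    interval 5            # run the check every 5 seconds
    fall 2                # 2 consecutive failures -> node goes FAULT
    rise 2                # 2 consecutive successes -> node recovers
}

vrrp_instance DNS_VIP {
    state MASTER          # BACKUP on the mini PC
    interface eth0        # placeholder NIC name
    virtual_router_id 52  # different ID from the proxy VIP
    priority 150
    virtual_ipaddress {
        192.168.1.53/24   # the address LAN clients use as their DNS server
    }
    track_script {
        chk_dns           # failover is driven by the health check above
    }
}
```

Hand out the VIP (not either box’s real IP) as the DNS server in DHCP, and clients never notice a failover.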