Comment on Filesystem and virtualization decisions for homeserver build
InvertedParallax@lemm.ee 6 days ago
ZFS, hands down. It doesn’t even begin to hurt the SSDs and it’s basically the best choice; just try not to fill the volumes all the way or it starts thrashing like crazy.
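Keeping an eye on that is easy to script; a minimal sketch (the pool name `tank` and the 80% threshold are placeholders, though ~80% is the commonly cited ceiling):

```python
#!/usr/bin/env python3
"""Sketch: warn when a ZFS pool is getting full enough to hurt performance.
Pool name and threshold are placeholder assumptions."""
import subprocess
import sys

POOL = "tank"    # assumption: your pool name here
THRESHOLD = 80   # percent; staying under ~80% is the usual advice

# `zpool list -H -o capacity tank` prints something like "42%"
out = subprocess.run(
    ["zpool", "list", "-H", "-o", "capacity", POOL],
    capture_output=True, text=True, check=True,
).stdout.strip()

used = int(out.rstrip("%"))
if used >= THRESHOLD:
    print(f"WARNING: {POOL} is {used}% full; expect fragmentation and "
          f"thrashing as it fills further", file=sys.stderr)
    sys.exit(1)
print(f"{POOL} at {used}%, fine")
```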
ZFS has encryption, but LUKS is fine too.
I’ve run raidz2 for well over a decade and never had data loss that wasn’t extremely my fault, and even then I recovered almost immediately from a backed-up snapshot.
thecoffeehobbit@sopuli.xyz 6 days ago
Thanks! Can I ask what your setup is like? ZFS on bare metal? Do you have VMs?
InvertedParallax@lemm.ee 6 days ago
ZFS on Debian on bare metal, with an NFS server.
VLAN for services with a routed subnet.
SR-IOV ConnectX-4 with one primary VM running FreeBSD and basically all my major services in their own jails. Won’t go into details, but it has like 20 jails and runs almost everything.
One VM for external nginx and named, on Debian, on an isolated subnet/VLAN and DMZ for exposed services.
One VM for Mail-in-a-Box on the DMZ subnet/VLAN.
One Debian VM on the services VLAN/net for apps that don’t play well with FreeBSD, mostly Docker stuff. I do not like this VM; it’s basically unclean, so it’s mostly isolated.
A few other VMs for stuff.
It’s a Dell R730 with two 2697s (or 2698s? 20c/40t each) and 512GB of RAM.
12x16TB HGST H530s, plus 2 NVMe drives and 2 SATA SSDs; somewhere in there is a SLOG and an L2ARC.
Can’t figure out how to fit a decent GPU in there, so currently that’s living on my dual-Rome workstation. This system is due for an upgrade; I’m thinking about swapping the workstation for a much lighter one and pushing the work to the server, while moving the storage to a dedicated system, but I’m not there yet.
Love FreeBSD though. I don’t use it as my daily driver; I tried for a bit and it worked, but there was just enough friction to make it not stick. FreeBSD has moved on and so have I, so it’s worth another shot.
Decent I/O, but nothing to write home about; I think it saturates the 10G, but only just. I have gear for full 100G (I do a LOT of chip startups, and worked at a major networking chip firm for a while), but it takes a lot more power, and I have PG&E, so I can’t justify it until I can seriously saturate it.
Also, I’m in the process of moving to Europe. I built a weak network here and linked it via WireGuard, but shit is expensive here and I’m not sure how to finish the move just yet, so I’m basically 50/50, including time at work in the Valley.
thecoffeehobbit@lemmy.world 5 days ago
Nice. Thanks a lot! Similar in architecture to what I had in mind, so I’m inspired :)
A couple more clarifications, if you will! I’m asking dumb questions as that is the way I learn :D
I just found out about virtiofs, and I’m piecing it together now. I haven’t done actual self-hosting for long, so the conventions are a bit blurry; I’m basically going by what others seem to be doing and trying to understand why. I ended up realising I needed a much higher-level discussion around this than “which fs should I use”. If you know of any resources that do NOT talk about specific technologies, but rather the principles behind them, I’d gladly bookmark them!
So the changes I’m planning to my setup…
InvertedParallax@lemm.ee 5 days ago
NFS; it’s good enough, and it’s how everyone accesses it. I’m toying with Ceph or some kind of object storage, but that’s a big leap and I’m not comfortable with it yet.
ZFS snapshots get sent to another machine with much less horsepower but a similar storage array.
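The replication job is roughly this shape (a minimal sketch, not my actual script; dataset and host names are placeholders, and a real job would do incremental sends after the first full one):

```python
#!/usr/bin/env python3
"""Sketch: snapshot a dataset and stream it to a backup box over ssh.
Dataset, remote host, and naming scheme are all placeholder assumptions."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"      # hypothetical dataset
REMOTE = "backup-host"     # hypothetical ssh alias for the second machine
REMOTE_DATASET = "backup/data"

snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

# 1. Take the snapshot locally.
subprocess.run(["zfs", "snapshot", snap], check=True)

# 2. Stream it to the other machine. After the first full send, a real
#    setup would use incremental sends (`zfs send -i previous new`).
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", REMOTE, "zfs", "receive", "-F", REMOTE_DATASET],
    stdin=send.stdout, check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```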
Debian boots off like a 128GB SATA SSD or something, just something mindless that makes it more stable; I don’t want to f with ZFS root.
My pool isn’t encrypted; I don’t consider it necessary, though I’ve toyed with it in the past. Anything sensitive I keep on separate USB keys and duplicate them, and those use LUKS.
I considered virtiofs; it’s not ready for what I need, it’s not meant for this use case, and it causes both security and other issues. Mostly it breaks the demarcation, so I can’t migrate or retarget to a different storage server cleanly.
These are good ideas and would work. I use zvols for most of this; in fact, I think I pass through an NVMe drive to FreeBSD for its jails.
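For reference, carving out a zvol for a VM disk is about this simple (just a sketch; the names and size are placeholders, not my actual layout):

```python
#!/usr/bin/env python3
"""Sketch: create a zvol to hand to a VM as its block device.
Pool/volume names and size are placeholders."""
import subprocess

ZVOL = "tank/vms/freebsd-disk0"  # hypothetical pool/volume name

# -V makes a volume (a block device) instead of a filesystem dataset;
# volblocksize is tunable, 16K is just a reasonable starting point.
subprocess.run(
    ["zfs", "create", "-V", "100G", "-o", "volblocksize=16K", ZVOL],
    check=True,
)

# The zvol shows up as /dev/zvol/tank/vms/freebsd-disk0 and can be
# attached to the hypervisor (e.g. as a virtio-blk disk in QEMU).
print(f"created /dev/zvol/{ZVOL}")
```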
Docker fucks me here; the volume system is horrible. I made an LXC-based system with Python automation to bypass this, but it doesn’t help when everyone releases as Docker images.
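The automation boils down to something like this (a rough sketch using the LXD `lxc` CLI, not my actual code; container and path names are invented):

```python
#!/usr/bin/env python3
"""Rough idea of LXC automation that sidesteps Docker's volume system:
launch a container and bind-mount a host ZFS dataset straight into it.
Uses the LXD `lxc` CLI; all names and paths here are invented."""
import subprocess

NAME = "myservice"                      # hypothetical container name
IMAGE = "images:debian/12"
HOST_DIR = "/tank/services/myservice"   # hypothetical dataset mountpoint

def lxc(*args: str) -> None:
    subprocess.run(["lxc", *args], check=True)

# Launch the container from a stock image.
lxc("launch", IMAGE, NAME)

# Bind-mount the host directory into the container: the data stays on the
# ZFS pool where snapshots/replication already cover it, with no opaque
# Docker-managed volume in the way.
lxc("config", "device", "add", NAME, "data", "disk",
    f"source={HOST_DIR}", "path=/srv/data")
```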
I have a simple boot drive for one reason: I want nothing to go wrong with booting, ever. Everything after that is negotiable, but the machine absolutely has to show up.
It has a decent UPS, but as I mentioned earlier, I live in San Jose and have fucking PG&E, so weeks without power aren’t fucking unheard of.