Comment on Filesystem and virtualization decisions for homeserver build
thecoffeehobbit@lemmy.world 4 weeks ago
Nice. Thanks a lot! Similar in architecture to what I had in mind, so I’m inspired :)
A couple more clarifications, if you will! I’m asking dumb questions as that is the way I learn :D
- If your VMs need to access the data, do you then connect it via the NFS share?
- I suppose you have separate backup schemes for the data vs. the VMs?
- Does your bare-metal Debian OS indeed run on the ZFS pool too, or does it have a separate boot disk? If it’s on the pool, what’s that setup like? Is there a LUKS-encrypted keystore partition to use with GRUB, or do you use the ZFS boot menu? (I assume your pool is encrypted.) I’m trying to gauge how difficult this install is going to be if I want the OS on the ZFS pool…
I just found out about virtiofs, and I’m piecing it together now. I haven’t done actual self-hosting for long, so the conventions are a bit blurry; I’m basically going by what others seem to be doing and trying to understand why. I ended up realising I needed a much higher-level discussion around this than “which fs should I use”. If you know of any resources that do NOT talk about specific technologies but rather the principles behind them, I’d gladly bookmark them!
So the changes I’m planning to my setup…
- encrypt the 2x960GB ZFS pool and share it with [samba|virtiofs|nfs] from the host OS (checking later which one is the way to go; rough sketch of the encryption part after this list)
- migrate all meaningful data (like application DBs) to reside on the pool rather than in the VM images, and keep this separation of data & application layers to enable different backup schemes for them
- later / if I have the energy: try installing the host OS on the pool as well to get rid of the small SSD and make space for the HDDs.
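Something like this is what I’m picturing for the encryption part; the pool name `tank` and dataset name are just placeholders, not a final layout:

```sh
# Create a passphrase-encrypted dataset on an existing pool (names are examples).
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/appdata

# After a reboot, load the key and mount the dataset before the VMs start.
zfs load-key tank/appdata
zfs mount tank/appdata
```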
InvertedParallax@lemm.ee 4 weeks ago
NFS, it’s good enough, and it’s how everyone accesses it. I’m toying with Ceph or some kind of object storage, but that’s a big leap and I’m not comfortable with it yet.
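The plumbing is roughly this; the dataset path, subnet and hostname below are just examples, not my actual layout:

```sh
# Host: export the data directory to the VM network, e.g. in /etc/exports:
#   /tank/appdata  192.168.122.0/24(rw,sync,no_subtree_check)
exportfs -ra

# Guest VM: mount it over the virtual network.
mount -t nfs storagehost:/tank/appdata /srv/appdata
```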
ZFS snapshots sent to another machine with much less horsepower but a similar storage array.
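That’s basically `zfs send`/`zfs receive` over SSH; a sketch with made-up pool, dataset and host names:

```sh
# First run: full replication of a snapshot to the backup box.
zfs snapshot -r tank/appdata@2024-05-01
zfs send -R tank/appdata@2024-05-01 | ssh backupbox zfs receive -F backup/appdata

# Later runs: incremental send between two snapshots.
zfs snapshot -r tank/appdata@2024-06-01
zfs send -R -i tank/appdata@2024-05-01 tank/appdata@2024-06-01 | \
    ssh backupbox zfs receive -F backup/appdata
```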
Debian boots off something like a 128GB SATA SSD, just something mindless that keeps it stable; I don’t want to f with ZFS root.
My pool isn’t encrypted; I don’t consider it necessary, though I’ve toyed with it in the past. Anything sensitive I keep on separate USB keys (duplicated), and those use LUKS.
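The USB-key part is plain LUKS, something like this; the device path is a placeholder and the format step wipes the key:

```sh
# Format the USB key with LUKS (destructive; /dev/sdX1 is a placeholder).
cryptsetup luksFormat /dev/sdX1

# Open it, put a filesystem on it once, then mount/unmount as needed.
cryptsetup open /dev/sdX1 sensitive
mkfs.ext4 /dev/mapper/sensitive   # first time only
mount /dev/mapper/sensitive /mnt/sensitive
umount /mnt/sensitive
cryptsetup close sensitive
```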
I considered virtiofs; it’s not ready for what I need, it’s not meant for this use case, and it causes security and other issues. Mostly it breaks the demarcation, so I can’t migrate or retarget to a different storage server cleanly.
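To show what I mean about demarcation: a virtiofs mount in the guest points at a share tag defined in that VM’s config on that particular host, while an NFS mount just names a server. Tag and paths below are examples only:

```sh
# Guest mount with virtiofs: "appdata" is a tag from this VM's config,
# so the data has to live on the host that runs the VM.
mount -t virtiofs appdata /srv/appdata

# Guest mount with NFS: any machine answering to the hostname can serve it.
mount -t nfs storagehost:/tank/appdata /srv/appdata
```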
These are good ideas, and they would work. I use zvols for most of this; in fact I think I pass through an NVMe drive to FreeBSD for its jails.
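A zvol is just a block device carved out of the pool that the VM gets whole; roughly like this, with example names and sizes rather than my exact setup:

```sh
# Create a 100G zvol and attach it to a VM as a raw virtual disk.
zfs create -V 100G tank/vm/freebsd-disk0
virsh attach-disk freebsd /dev/zvol/tank/vm/freebsd-disk0 vdb --persistent
```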
Docker fucks me here, the volume system is horrible. I made an LXC-based system with Python automation to bypass this, but it doesn’t help when everyone releases as Docker images.
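If you stay on Docker for the data-on-the-pool idea, a plain bind mount instead of a named volume at least keeps the data on a dataset you can snapshot; image name and paths here are only examples:

```sh
# Bind-mount a dataset directory into the container instead of a named volume.
docker run -d --name nextcloud \
    -v /tank/appdata/nextcloud:/var/www/html \
    nextcloud
```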
I have a simple boot drive for one reason: I want nothing to go wrong with booting, ever. Everything after that is negotiable, but the machine absolutely has to show up.
It has a decent UPS, but as I mentioned earlier, I live in San Jose and have fucking PG&E, so weeks without power aren’t fucking unheard of.
thecoffeehobbit@sopuli.xyz 3 weeks ago
Aight, thank you so much, this confirms I’m on the right path! It clarifies a lot; I’ll keep the ext4 boot drive :)
InvertedParallax@lemm.ee 3 weeks ago
FYI, ZFS is pretty fucking fragile and breaks a lot, especially if you like to keep your kernel up to date. The kernel ABI is just unstable, and ZFS can take months to catch up.
Which is part of why I don’t trust ZFS on root.
Worst case you can sometimes recover with zfs-fuse.
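If you hit that, a couple of stopgaps: hold the kernel until the module catches up, and import read-only to get at the data in the meantime (Debian-ish package names, shown as examples):

```sh
# Hold the kernel packages until zfs-dkms supports the new version.
apt-mark hold linux-image-amd64 linux-headers-amd64

# Import the pool read-only to reach the data while the module is broken.
zpool import -o readonly=on tank
```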
thecoffeehobbit@sopuli.xyz 3 weeks ago
Right, thanks for the heads up! On the desktops I have simply installed ZFS on root via the Ubuntu 24.04 installer. Then, as the option was not available in the server variant, I started to think maybe that’s not something that should be done :p