Ha, I went down the whole Ceph and Longhorn path as well, then ended up with hostPath and btrfs. Glad I’m not the only one who considers the former options too much of a headache after fully evaluating them.
squinky@sh.itjust.works 3 weeks ago
Just btrfs.
melfie@lemy.lol 2 weeks ago
MrModest@lemmy.world 2 weeks ago
Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS, with people ending up with unrecoverable data.
unit327@lemmy.zip 2 weeks ago
Btrfs used to be easier to install because it's part of the kernel, while ZFS required out-of-tree module shenanigans, though I think that has changed now.
Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more later is easy. This used to be impossible with ZFS pools, but I think it's a feature now?
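For example, growing an existing btrfs filesystem with a new, differently-sized drive is basically two commands (device path and mount point here are just placeholders):

```shell
# Add the new drive to the mounted filesystem (any size works)
sudo btrfs device add /dev/sdX /mnt/pool

# Rebalance so existing data and metadata spread across all drives
# (add -dconvert=raid1 -mconvert=raid1 if you're also changing profiles)
sudo btrfs balance start /mnt/pool
```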
ikidd@lemmy.world 2 weeks ago
Just the raid5/6 modes are shit. And its weird willingness to let you boot a degraded raid without telling you a drive is borked.
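Worth noting you can at least poll for this yourself instead of trusting the boot to warn you; a sketch of what I'd stick in a cron job (mount point is a placeholder):

```shell
# Per-device error counters (write/read/flush/corruption/generation errors)
sudo btrfs device stats /mnt/pool

# A missing or failing device also shows up here
sudo btrfs filesystem show /mnt/pool

# Periodic scrub catches latent corruption before a second drive dies
sudo btrfs scrub start /mnt/pool
```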
non_burglar@lemmy.world 2 weeks ago
That is apparently not the case anymore, but ZFS is certainly richer in features and more battle-tested.
squinky@sh.itjust.works 2 weeks ago
All I know about ZFS is that there are weird patent or closed source encumbrances or something. I hear it’s good, and it seems popular, I just avoid proprietary Oracle products.
As for btrfs, the only thing that's claimed to be unstable is raid5/6, and people who use it in production say those claims are overblown. I don't; I use it in raid1 mode. But raid1 in btrfs doesn't require a bunch of matching drives: it lets you glom together a number of mismatched disks and just puts every block on more than one of them. So it's a nice cross between a raid and LVM or JBOD.
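Creating that kind of pool from a pile of mismatched drives is a one-liner (device paths are just examples):

```shell
# raid1 keeps two copies of every block on any two of the drives;
# the drives can be completely different sizes
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

# After mounting, this shows how much usable space the mix actually gives you
sudo btrfs filesystem usage /mnt/pool
```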