You end up wasting a ton of space though because each vdev has its own parity drives.
Comment on How dumb would it be to run single-node ceph?
avidamoeba@lemmy.ca 8 months ago

Adding new disks to an existing ZFS pool is as easy as figuring out what redundancy scheme you want for the new disks, then adding them to the pool with that scheme. E.g. you have an existing pool with a RAIDz1 vdev of 3 4TB disks. You found some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy 4 16TB disks, create a RAIDz2 vdev from them and add it to the existing pool. The pool grows by whatever space the new vdev provides.

Critically, pools are JBODs of vdevs. You can add any number and type of vdevs to a pool. Redundancy is handled at the vdev level, so you can have a pool with a mix of any RAIDzN and/or mirrors.
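A sketch of that expansion. The pool name `tank` and the `/dev/sd*` paths are placeholders; substitute your own:

```shell
# Existing pool: one RAIDz1 vdev of three 4TB disks
zpool status tank

# Add a new RAIDz2 vdev built from the four 16TB disks.
# -n does a dry run so you can check the resulting layout first.
zpool add -n tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# The pool now stripes writes across both vdevs; the new capacity is usable immediately.
zpool list tank
```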
Finally if you have some even stranger hardware lying around, you can combine it in appropriately sized volumes via LVM and give that to ZFS, as someone already suggested.
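For the LVM route, a hypothetical example: two spare 2TB disks combined into a single 4TB logical volume, which ZFS then mirrors against a real 4TB disk (all device paths and the `sparevg` name are made up):

```shell
# Combine the two small disks into one volume group
pvcreate /dev/sdi /dev/sdj
vgcreate sparevg /dev/sdi /dev/sdj

# One logical volume spanning both disks, sized to match the real disk
lvcreate -l 100%FREE -n combined sparevg

# Give ZFS the LV and a real disk as a mirror vdev
zpool add tank mirror /dev/sdk /dev/sparevg/combined
```

Note the LV itself has no redundancy, so mirroring it against a separate physical disk is what keeps the vdev safe.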
MangoPenguin@lemmy.blahaj.zone 8 months ago
avidamoeba@lemmy.ca 8 months ago
What you lose in space, you gain in redundancy.
SidewaysHighways@lemmy.world 8 months ago
Yep I feel this way.
No point in pricing a single HDD because I’m shooting for parity on every vdev I spin up.
MangoPenguin@lemmy.blahaj.zone 8 months ago
Fair enough, it does add a good chunk of power usage though, as HDDs are pretty power-hungry at 5-7W or so each.
bastion@feddit.nl 8 months ago
No matter what setup you use, if you want redundancy, it’ll cost space. In a perfect world, spending 30% of your capacity on parity would let you lose up to 30% of your disks and still be OK.
…but that extra percentage of used space is the intrinsic cost.
jkrtn@lemmy.ml 8 months ago
“As easy as buying four same-sized disks all at once” is kinda missing the point.
How do I migrate data from the existing z1 to the z2? And then how can I re-add the disks that were in z1 after I have moved the data? Buy yet another disk and add a z2 vdev with my now 4 disks, I guess. Unless it is possible to format and add them to the new z2?
If the vdevs are not all the same redundancy level am I right that there’s no guarantee which level of redundancy any particular file is getting?
avidamoeba@lemmy.ca 8 months ago
You don’t migrate the data from the existing z1. It keeps running and stays in use. You add another z1 or z2 to the pool.
This is a problem: you don’t know which file ends up on which vdev. If you only use mirror vdevs, you can remove a vdev you no longer want and ZFS will transfer its data to the remaining vdevs, assuming there’s space. As far as I know you can’t remove vdevs from pools that contain RAIDz vdevs; you can only add vdevs.

So if you want guaranteed 2-drive failure tolerance for every file, then yes, you’d have to create a new pool with RAIDz2 and move the data to it. Then you could add your existing drives to it as another RAIDz2 vdev.
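A sketch of that migration, with hypothetical pool names (`oldtank`, `newtank`) and placeholder device paths:

```shell
# Create the new pool as a RAIDz2 vdev of the four new disks
zpool create newtank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Snapshot everything on the old pool and replicate it to the new one
zfs snapshot -r oldtank@migrate
zfs send -R oldtank@migrate | zfs receive -F newtank

# After verifying the copy, destroy the old pool to free its disks...
zpool destroy oldtank

# ...then add the three old disks plus one extra purchased disk
# as a second 4-disk RAIDz2 vdev
zpool add newtank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

The `zfs send -R` / `zfs receive` pair preserves datasets, snapshots, and properties, which is why it's preferred over a plain file copy for this kind of move.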