Comment on RAID-Z2 help
greyfox@lemmy.world 3 days ago
Well, I am not claiming to be a ZFS expert, but I have been using it since 2008ish, both personally and professionally.
"Both lz4 and zstd have almost no performance impact on modern hardware." So "almost" is zero in your mind? Why waste the CPU cycles compressing data that is already compressed? I recognize that you might not care, but I sure do, and I wouldn't say it is wrong to think that way.
compression acts on blocks in ZFS, therefore it is enabled at the pool level
This is incorrect. You can run zfs set compression=lz4 dataset (or off) on a per-dataset basis. You can see your compression efficiency per dataset by running zfs get compressratio dataset; if your blocks were written to a dataset with compression=off, you will see no compression for that dataset. You can absolutely mix compressed and uncompressed datasets in the same pool.
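For anyone following along, a quick sketch of what per-dataset compression looks like (tank/docs and tank/photos are hypothetical dataset names):

```shell
# Enable lz4 on one dataset and disable compression on a sibling
zfs set compression=lz4 tank/docs
zfs set compression=off tank/photos

# Check the achieved ratio per dataset; a dataset whose blocks
# were written with compression=off reports 1.00x
zfs get compressratio tank/docs tank/photos
```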
OP added a -O option to set compression when he created the pool, but that is not a pool-level setting. If you look at the documentation for zpool-create you will see that -O options are just properties passed to the root dataset, versus -o options, which are actual pool-level parameters.
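To illustrate the distinction, a hypothetical pool creation (pool name and devices are made up):

```shell
# -o sets a pool-level property (ashift), while -O sets a property
# on the root dataset (compression), which child datasets inherit.
zpool create -o ashift=12 -O compression=lz4 \
    tank raidz2 sda sdb sdc sdd
```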
You might be confusing compression and deduplication. Deduplication is more pool wide.
ZFS does indeed need to allocate some space at the front and end of a pool for slop, metaslab, and metadata. I think you are confusing filesystem and datasets.
Well, yes and no here. You are right, I should have been calling them datasets. Dataset is a generic term, and there are different dataset types: file systems, volumes, and snapshots.
So yeah, maybe I should have been more generic and called them datasets, but unless OP is using volumes we are probably talking about ZFS file systems here. Go to, say, the zfsprops man page and you will see "file system" mentioned about 60 times when discussing properties that can be set on file-system-type datasets.
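A quick way to see the three dataset types side by side (assuming a pool exists on the system):

```shell
# List each dataset type separately; -t filters by type
zfs list -t filesystem
zfs list -t volume
zfs list -t snapshot
```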
I’m not sure what you’re trying to say about NFS and ZFS here, but this is completely false, even if you mean datasets.
It sounds like you are unaware of the native NFS/SMB integrations that ZFS has.
It is totally optional, but instead of using your normal /etc/exports to configure NFS, ZFS can dynamically load export settings when your dataset is mounted.
This is done with the sharenfs property: zfs set sharenfs=<export options> dataset. Doing this keeps your export settings with your pool instead of with the system it is mounted on; that way, if you replicate your pool to another system, those export settings automatically come with it.
There is also a sharesmb property for Samba.
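A hypothetical example of both properties (dataset name and subnet are made up): ZFS adds and removes the share automatically when the dataset is mounted or unmounted.

```shell
# Export tank/media read-only to one subnet via NFS
zfs set sharenfs="ro=@192.168.1.0/24" tank/media

# And the SMB equivalent
zfs set sharesmb=on tank/media

# Confirm what is currently configured to be shared
zfs get sharenfs,sharesmb tank/media
```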
My point, then, was that you should lay out your dataset hierarchy based on your expected permissions for NFS/SMB. You could also handle these exports manually yourself and not worry about it.
My comment was less about compression and more about suggesting that you split your datasets based on what is in them, because the more separate they are, the more control you have. You gain a lot of control and lose very little, since it all comes from the same pool.
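As a concrete (hypothetical) layout, splitting by content type lets each dataset carry its own settings:

```shell
# Text-heavy data: compress aggressively
zfs create -o compression=zstd tank/documents
# Already-compressed media: skip compression entirely
zfs create -o compression=off tank/media
# Everything else inherits the parent's setting
zfs create tank/misc
```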
Some of the reasons I have for that setup are less important than they were a decade ago. For example, doing a 20 TB zfs send back before resumable sends were possible sucked: any little problem and you had to start over. Having more file systems meant more, smaller sends. And yeah, I was using ZFS before lz4 was even an option, and CPU was more precious back then, but I still don’t see any reason to waste CPU cycles when you can create a separate file system for your media and set compression to off on it.
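For reference, resumable sends are what made the "start over after any little problem" pain go away. A sketch of the flow (pool and dataset names are hypothetical):

```shell
# Initial attempt: -s on the receive side saves partial state
# if the stream is interrupted
zfs send tank/media@snap1 | ssh backup zfs receive -s pool/media

# After an interruption, fetch the resume token from the receiver...
TOKEN=$(ssh backup zfs get -H -o value receive_resume_token pool/media)

# ...and resume the send from where it left off
zfs send -t "$TOKEN" | ssh backup zfs receive -s pool/media
```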
non_burglar@lemmy.world 2 days ago
For context, I’ve also been using ZFS since Solaris.
I was wrong about compression on datasets vs pools, my apologies.
By “almost no impact” (for compression), I meant well under 1% penalty for zstd, and almost unmeasurable for lz4 fast, with compression efficiency being roughly the same for both lz4 and zstd. Here is some data on that.
lz4 compression on modern (post-Haswell) CPUs is actually so fast that lz4 can beat uncompressed writes in some workloads (see this). And that is from 2015.
Today, there is no reason to turn off compression.
I will definitely look into the NFS integrations for ZFS, I use NFS (exports and mounts) extensively, I wonder what I’ve been missing.
Anyway, thanks for this.