You might look at TechHut's previous tutorial on setting this all up from around a year ago, where he used Cockpit to manage his ZFS pool shares rather than TrueNAS. I followed that one a few months ago with only a little Linux experience and got everything set up on Proxmox quite easily. I do recall some people complaining about issues with permissions or some such, which is why he created this new tutorial, but I didn't run into those issues for whatever reason.
This new Proxmox build has been rock solid after running everything on flaky laptops, mini PCs, and a Windows-based server build for the past 12+ years, and I've also used it to run things like Jellyseerr, Immich, Frigate, and more, which is awesome. I did spend a good chunk of money on a lot of new hardware, redundant SSDs, RAM, etc., though, so you may be better off starting with something more basic to tinker and learn with.
Lemming007@lemmy.dbzer0.com 16 hours ago
I will definitely check that out, thanks! Out of curiosity, since I don't have the hardware to play with yet: do you know if you are able to use different-sized drives with ZFS pools? I've also seen that there have been some updates over the past year that should make expanding a ZFS pool doable now; do you know if that is the case? Thanks again for the insight :)
CmdrShepard42@lemm.ee 11 hours ago
AFAIK no, you can't use different-sized drives. I have read about the update that allows you to expand existing pools, but it hasn't made its way into the version of ZFS that Proxmox ships yet. I hope it does soon.
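For what it's worth, my understanding is that the feature (raidz expansion in newer OpenZFS) is supposed to make growing a raidz vdev a single attach, roughly like this once it's available (the pool, vdev, and disk names here are just placeholders, and I haven't been able to try it myself):

zpool attach tank raidz1-0 /dev/sdf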
Previously, I was using SnapRAID, which does allow you to use any size drive provided your parity drives are equal to or larger than the rest of the drives in the pool, so you may want to check that out. It worked well for me on Windows.
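If it helps, SnapRAID is driven by a single config file that just lists your parity and data disks, roughly like this (the paths and disk names are made-up examples, not my actual setup):

parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/

Then you run snapraid sync on a schedule to update the parity.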
I would caution that if you plan to build a big library over time, just bite the bullet and get matching drives from the start. I tried mismatched drives purchased over several years (whatever was a good deal when I needed to expand the pool), and it got to the point of becoming unmanageable once I hit about 8 drives, as SATA ports became limited and HDD capacities on the market kept increasing (why waste a port on a 6TB drive when you could have a 14TB-20TB drive instead?). With this new server build, I just bought several matching 14TB drives from serverpartdeals.com and had to transfer everything from the old SnapRAID pool to my ZFS pool, which took about a week with rsync.
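For reference, a basic rsync invocation for that kind of disk-to-pool copy looks something like this (the flags and paths here are just an illustration, not my exact command), left running per disk in a screen/tmux session:

rsync -avh --progress /mnt/old-disk1/ /tank/media/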
Lemming007@lemmy.dbzer0.com 10 hours ago
Gotcha, thanks! In doing some more research, it looks like you could potentially make mirrored vdevs of your smaller disks and another of your larger drives, and add them to the same pool. That may be a workaround to still be able to use ZFS, but I hadn't heard of SnapRAID, so I will definitely check that out, thanks!
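From what I read, the commands for that would look something like this (pool and disk names are just examples I'm making up): create the pool as a mirror of the two smaller disks, then add a second mirror of the two larger ones:

zpool create tank mirror /dev/sda /dev/sdb
zpool add tank mirror /dev/sdc /dev/sdd

Each mirror is limited by its smaller disk, but the pool's total capacity is the sum of the two mirrors.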
You make a good point about just sucking it up and getting all new drives at a much larger capacity, though. I am definitely starting to think about that option more closely. It's just obviously expensive as hell when you are also getting a new, much better spec'd NAS box, haha. Serverpartdeals does have pretty good prices all things considered, though, so maybe I will bite the bullet and just go for it. Not sure what I'll end up doing with the 4 other perfectly good drives that total 24TB, though! haha. Although I'm sure I will find something to do with them, haha. Thanks again for the input and insight. Very much appreciated.