Yes, of course 3-2-1 all the way! I was thinking 3 copies: two in RAID, one in B2 or Hetzner. I could also keep daily ZFS snapshots for 1 month in case I mess something up. I know it’s a bit of a waste, but for me it’s OK since what I waste is definitely cheaper than buying a 4-bay or making a custom build. An alternative would be using two different clouds for backups and using both HDDs for space.
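The “daily snapshots, keep one month” policy is just date arithmetic over snapshot names. A minimal sketch in Python (the `tank/data@daily-YYYY-MM-DD` naming scheme and the 30-day window are my assumptions; tools like sanoid or zfs-auto-snapshot implement this kind of retention for you):

```python
from datetime import date, timedelta

def snapshots_to_destroy(snapshot_names, today, keep_days=30):
    """Given snapshot names like 'tank/data@daily-2024-05-01',
    return the ones older than the retention window."""
    cutoff = today - timedelta(days=keep_days)
    expired = []
    for name in snapshot_names:
        # Date is encoded after 'daily-' in the snapshot name (assumed scheme).
        date_str = name.split("@daily-")[1]
        snap_date = date.fromisoformat(date_str)
        if snap_date < cutoff:
            expired.append(name)
    return expired

snaps = [f"tank/data@daily-2024-04-{d:02d}" for d in range(1, 31)]
old = snapshots_to_destroy(snaps, today=date(2024, 5, 15))
# Everything dated before 2024-04-15 is past the 30-day window.
```

In practice you’d feed this the output of `zfs list -t snapshot` and `zfs destroy` whatever it returns, but letting sanoid handle the policy is less error-prone.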
Nice idea, a used PC, but I’m concerned that something could fail in under 4 years if the machine is too old. Used office PCs are typically from 2015-2018, so they’re already 7 years old in the best case.
Thanks for the benchmark! I think 16 GB will be more than enough for me then. I don’t plan on using Jellyfin; the only transcoding would be Immich, but again it’s OK if those tasks are slow.
Comment on Hardware recommendation for new selfhoster
pleksi@sopuli.xyz 2 days ago
Two 4TB disks in RAID 1 is a waste of money for most selfhosters, unless you really want to avoid downtime due to disk failure (and even then you could get a power outage or a network failure). A second disk will protect you from disk failure, but not from other forms of data loss (like you fucking something up and erasing all of your family photos).
Do you also plan to buy some cold storage medium, cloud storage, a remote backup server, or something similar (for 3-2-1 backups)? That’s way more important.
bordam@feddit.it 1 day ago
Onomatopoeia@lemmy.cafe 1 day ago
“Two in RAID” only counts as 2 copies when the arrays are on different systems and the replication isn’t instant. Otherwise it only protects against hardware failures and not against you fucking up (ask me how I know…).
If the arrays are on 2 separate systems in the same place, they’ll protect against independent hardware failures without a common cause (a drive dies, etc), but not against common threats like fire or electrical spikes.
Also, how long does it take to return one of those systems to full function with all the data from the other? This is a risk all of us seem to overlook at times.
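Restore time is roughly data size divided by the slowest link in the path, which is worth working out before you need it. A back-of-envelope sketch (the 4 TB dataset, 100 Mbit/s line, and 150 MB/s local copy rate are illustrative figures, not anyone’s actual setup):

```python
def restore_hours(data_tb, throughput_mb_s):
    """Hours to move data_tb terabytes at throughput_mb_s MB/s (decimal units)."""
    total_mb = data_tb * 1_000_000  # 1 TB = 1e6 MB
    return total_mb / throughput_mb_s / 3600

# Pulling 4 TB back from cloud storage over a 100 Mbit/s line (~12.5 MB/s):
cloud = restore_hours(4, 12.5)   # ~89 hours, close to 4 days
# Copying 4 TB locally from a second array at ~150 MB/s:
local = restore_hours(4, 150)    # ~7.4 hours
```

That gap between days and hours is the practical argument for keeping one copy local even when you also have a cloud copy.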
pleksi@sopuli.xyz 1 day ago
I’m using Debian btw and a non-ZFS system, so mileage may of course vary.
bluGill@fedia.io 1 day ago
ZFS snapshots are easy to set up. If you don’t notice within a month that you deleted something, you never will.
You still should have offsite backups in case of a fire, but the notion that RAID isn’t backup is not really correct: for most people, the situations where RAID with snapshots isn’t enough protection will never occur, so the risk is acceptable. Plus, RAID is a lot easier to get right. For that matter, if you have a backup but don’t have the password after the fire, you don’t have a backup.
Though if you rely on RAID alone, I’d want 3-disk redundancy.
Onomatopoeia@lemmy.cafe 1 day ago
One drive failure means an array is degraded until resilvering finishes.
Resilvering is an intensive process that can push other drives to fail.
I have a ZFS system that takes the better part of a day (24 hours) to resilver a 4TB drive in an 8TB five-drive array (single parity) that’s about 70% full. While it’s resilvering I have to be confident my other data stores don’t fail (I have the data locally on 2 other drives and in a cloud backup).
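Those numbers imply an effective resilver rate you can sanity-check: 4 TB in about 24 hours works out to well under typical sequential disk speed, which is why a live, fragmented pool resilvers so slowly. A quick sketch using the figures from the comment above:

```python
def resilver_rate_mb_s(data_tb, hours):
    """Effective throughput (MB/s) for resilvering data_tb terabytes in `hours`."""
    return data_tb * 1_000_000 / (hours * 3600)

rate = resilver_rate_mb_s(4, 24)  # ~46 MB/s effective
```

Compare that to the ~150-250 MB/s a modern HDD can do sequentially, and the gap shows how much seek overhead and competing pool traffic stretch out a resilver.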