Comment on This new 40TB hard drive from Seagate is just the beginning—50TB is coming fast!
thisbenzingring@lemmy.sdf.org 4 days ago
I deal with large data chunks, and 40TB drives are an interesting idea… until you consider one failing.
RAIDs and arrays for these large data sets still make more sense than putting all the eggs in smaller baskets.
grue@lemmy.world 4 days ago
The main issue I see is that the gulf between capacity and transfer speed is now so vast with mechanical drives that rebuilding the array after a drive failure and replacement takes unreasonably long. I feel like you’d need at least two parity drives, not just one, because leaving the array in a degraded state for multiple days while the data copies back over would be an unacceptable risk.
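A rough sketch of that arithmetic (the ~270 MB/s sustained throughput is an assumed figure, not a spec for any particular drive):

```python
# Back-of-the-envelope rebuild-time estimate for a single replaced drive.
# Assumes the rebuild is limited by one drive's sustained sequential
# throughput (~270 MB/s here is an assumption); real rebuilds on a busy
# array are usually slower.

def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Hours needed to write capacity_tb terabytes at throughput_mb_s MB/s."""
    seconds = (capacity_tb * 1e12) / (throughput_mb_s * 1e6)
    return seconds / 3600

for capacity in (14, 40):
    print(f"{capacity} TB at 270 MB/s: ~{rebuild_hours(capacity, 270):.0f} h best case")
```

With normal I/O running alongside the rebuild, that best case of roughly 41 hours for a 40TB drive can easily stretch into days, which is the window a second parity drive is meant to cover.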
Cenzorrll@lemmy.world 4 days ago
I upgraded my 7-year-old 4TB drives to 14TB drives (both setups RAID 1). A week later, one of the 14TB drives failed. It was a tense time waiting for a new drive and then the 24 hours or so of resilvering. No issues since, but boy was that an experience. I’ve since added some automated backup processes.
BakedCatboy@lemmy.ml 4 days ago
Yes, this, and also scrubs and SMART tests. I have six 14TB spinning drives, and a long SMART test takes roughly a week, so running two at a time takes close to a month to cover all six, and then it all starts over again. For half to 75% of the time, two of my drives are running SMART tests. Then there are scrubs, which I do monthly. I would consider larger drives if it didn’t mean my SMART/scrub schedule would take more than a month. Rebuilds aren’t too bad, and I have double redundancy for extra peace of mind, but I also wouldn’t want that taking much longer either.
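A minimal sketch of that maintenance math, assuming roughly a week per long SMART test, two drives tested at a time, and a rotation that restarts monthly:

```python
# Rough maintenance-cycle estimate. The per-test duration and the monthly
# restart are assumptions taken from the comment above, not general figures.

drives = 6
concurrent = 2
days_per_long_test = 7      # assumed: a long SMART test takes about a week
cycle_days = 30             # assumed: the rotation restarts every month

testing_days = (drives / concurrent) * days_per_long_test
share_of_cycle = testing_days / cycle_days

print(f"Days to long-test all {drives} drives: ~{testing_days:.0f}")
print(f"Share of the month with {concurrent} drives under test: {share_of_cycle:.0%}")
```

That works out to about 21 days of testing per 30-day cycle, roughly 70%, which lines up with the “half to 75% of the time” figure, and it scales directly with per-drive capacity.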
floofloof@lemmy.ca 4 days ago
I guess the idea is you’d still do that, but have more data in each array. It does raise the risk of losing a lot of data, but that can be mitigated by sensible RAID design and backups. And then you save power for the same amount of storage.
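A quick sketch of the power side of that argument, where the per-drive idle wattage and the 120 TB usable target are assumptions for illustration:

```python
import math

# Rough idle-power comparison for the same usable capacity with two drives
# of redundancy. The 5.5 W idle figure and the 120 TB target are assumptions.

WATTS_PER_DRIVE = 5.5   # assumed idle draw for a 3.5" spinning drive

def drives_needed(usable_tb: float, drive_tb: float, redundancy: int) -> int:
    return math.ceil(usable_tb / drive_tb) + redundancy

for drive_tb in (10, 40):
    n = drives_needed(120, drive_tb, redundancy=2)
    print(f"{drive_tb} TB drives: {n} spindles, ~{n * WATTS_PER_DRIVE:.0f} W idle")
```

Fewer, larger spindles for the same usable space is where the power saving comes from; the trade-off is that each lost spindle takes a bigger share of the data with it.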
Jimmycakes@lemmy.world 4 days ago
These are literally only sold by the rack to data centers.
What are you going on about?
remon@ani.social 4 days ago
You’d still put the 40TB drives in a RAID? It certainly will save you NAS bays.
givesomefucks@lemmy.world 4 days ago
They’re also ignoring how many times this conversation has been had…
We never stopped using RAID at any previous increase in drive density; there’s no reason to pick this one as the time to stop.
catloaf@lemm.ee 4 days ago
Of course, because you don’t want to lose the data if one of the drives dies. And backing up that much data is painful.
acosmichippo@lemmy.world 4 days ago
Depends on a lot of factors. If you only need ~30TB of storage plus two disks’ worth of RAID redundancy, 3x 40TB disks will be much more costly than 6x 10TB disks.
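To make that concrete, a sketch with hypothetical per-drive prices (the dollar figures are placeholders, not current market prices):

```python
# Hypothetical cost comparison for ~30+ TB usable with two drives of
# redundancy. Prices per drive are placeholders for illustration only.

configs = [
    # (drive count, TB per drive, assumed price per drive in USD)
    (6, 10, 200),
    (3, 40, 800),
]

for count, tb, price in configs:
    usable_tb = (count - 2) * tb   # two drives' worth of capacity lost to redundancy
    total_cost = count * price
    print(f"{count} x {tb} TB: ~{usable_tb} TB usable, ~${total_cost} total")
```

Both layouts land at roughly 40 TB usable; whether the bigger drives ever win on cost comes down to where per-TB pricing settles at the top of the capacity range.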