It doesn’t really matter; the current limitation isn’t data density at rest so much as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.
SATA will no longer be improved; we now need U.2 designs for data transport that are built for storage. This exists, but needs to filter down from industrial applications to us plebs.
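For a rough sense of the gap, here’s a back-of-envelope sketch. The link speeds are nominal sequential ceilings I’m assuming for illustration, not measured drive specs:

```python
# Back-of-envelope: time to read a full 140 TB drive at the nominal
# sequential ceiling of each link (assumed numbers, not drive specs).
CAPACITY_TB = 140

links_mb_s = {
    "SATA III (~550 MB/s)": 550,
    "U.2 / PCIe 4.0 x4 (~7,000 MB/s)": 7_000,
}

for name, mb_s in links_mb_s.items():
    hours = CAPACITY_TB * 1_000_000 / mb_s / 3600  # TB -> MB, then s -> h
    print(f"{name}: ~{hours:.0f} h ({hours / 24:.1f} days)")
```

Roughly three days over SATA versus an afternoon over a U.2/NVMe link, and that’s the best case with pure sequential reads.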
irmadlad@lemmy.world 13 hours ago
I was thinking the same. I would hate to toast a 140 TB drive. I think I’d just sit right down and cry. I’ll stick with my 10 TB drives.
rtxn@lemmy.world 13 hours ago
This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them mirrored with hot spares.
MonkeMischief@lemmy.today 5 hours ago
This is for like, Smaug but if he hoarded classic anime and the entirety of Steam or something. Lol
thejml@sh.itjust.works 12 hours ago
Rebuild time is the big problem with this in a RAID array. The interface is too slow, and you risk losing more drives in the array before the rebuild completes.
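A quick sketch of why. The rebuild rates below are assumptions; real arrays under production load are often at the low end:

```python
# Sketch: rebuilding one failed 140 TB drive means rewriting all 140 TB,
# throttled to whatever throughput the array can spare (assumed rates).
CAPACITY_TB = 140

for rebuild_mb_s in (100, 250, 550):
    days = CAPACITY_TB * 1_000_000 / rebuild_mb_s / 86_400  # TB -> MB, s -> days
    print(f"at {rebuild_mb_s} MB/s: ~{days:.1f} days exposed to a second failure")
```

Two weeks of degraded operation is a long window for a second drive to die in.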
rtxn@lemmy.world 12 hours ago
Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
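As a toy illustration (the per-array loss probability here is a made-up number, just to show how independent copies multiply out):

```python
# Toy durability math: with k independent copies, all are lost
# with probability p**k. p is an assumed annual loss rate per array.
p = 0.01

for k in (1, 2, 3):
    print(f"{k} independent copies: ~{p ** k:.0e} chance of losing them all")
```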
gravitas_deficiency@sh.itjust.works 11 hours ago
Yeah I’m running 16s and that’s pushing it imo