Comment on Western Digital details 14-platter 3.5-inch HAMR HDD designs with 140 TB and beyond
thejml@sh.itjust.works 15 hours ago
Rebuild time is the big problem with this in a RAID array. The interface is too slow, and you risk losing more drives in the array before the rebuild completes.
rtxn@lemmy.world 15 hours ago
Realistically, is that a factor for a Microsoft-sized company, though? I’d be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
brygphilomena@lemmy.dbzer0.com 51 minutes ago
I’d imagine they are using ceph or similar.
You have disk level protection for servers. Server level protection for racks. Rack level protection for locations. Location level protection for datacenters. Probably datacenter level protections for geographic regions.
It’s fucking wild when you get to that scale.
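For illustration, a minimal Python sketch of that kind of failure-domain-aware placement; the topology and the `place_replicas` helper are made up for the example and don't reflect how Ceph's CRUSH actually works internally:

```python
# (datacenter, rack, host) for every disk -- a made-up topology, not a real cluster
osds = [
    ("dc1", "rack1", "host1"), ("dc1", "rack1", "host2"),
    ("dc1", "rack2", "host3"), ("dc2", "rack3", "host4"),
    ("dc2", "rack4", "host5"), ("dc3", "rack5", "host6"),
]

def place_replicas(osds, copies=3, level=0):
    """Pick one disk per distinct failure domain at the given level
    (0 = datacenter, 1 = rack, 2 = host)."""
    chosen, used = [], set()
    for osd in osds:
        if osd[level] not in used:
            chosen.append(osd)
            used.add(osd[level])
        if len(chosen) == copies:
            break
    return chosen

# Three copies, no two in the same datacenter
print(place_replicas(osds, copies=3, level=0))
```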
enumerator4829@sh.itjust.works 7 hours ago
Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between drive size, array width, rebuild time, and the risk of further failures during a rebuild.
Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.
Say you want 16TB of usable space and you want to be able to lose 2 drives from your array (a fairly common requirement in small systems). With dual parity (RAID6-style), these are some options:

- 4 x 8TB drives: 32TB raw, 50% lost to redundancy
- 6 x 4TB drives: 24TB raw, 33% lost
- 10 x 2TB drives: 20TB raw, 20% lost
- 18 x 1TB drives: 18TB raw, 11% lost
The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
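For concreteness, a quick Python sketch of that arithmetic, assuming a dual-parity (RAID6-style) layout; the drive sizes and the `array_options` helper are just illustrative:

```python
import math

def array_options(usable_tb=16, parity_drives=2, drive_sizes_tb=(8, 4, 2, 1)):
    # For each candidate drive size, work out how many drives are needed
    # and how much raw capacity is lost to parity.
    for size in drive_sizes_tb:
        data_drives = math.ceil(usable_tb / size)
        total_drives = data_drives + parity_drives
        raw_tb = total_drives * size
        overhead = 1 - usable_tb / raw_tb
        print(f"{total_drives:>2} x {size}TB -> {raw_tb}TB raw, "
              f"{overhead:.0%} lost to parity")

array_options()
```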
This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB) with low performance requirements (archives), but that niche is already dominated by tape robots.
The other interesting use case is huge systems, many petabytes up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.
tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.
thejml@sh.itjust.works 15 hours ago
True, but that’s really going to push your network links just to recover. Realistically, something like ZFS or RAID-6 with extra hot spares would help reduce the risk, but a rebuild still takes a non-trivial amount of time. Not to mention the impact on normal usage during that period.
frongt@lemmy.zip 11 hours ago
Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you’d be resilvering from a drive on the same backplane anyway. SAS-4 tops out at 24 Gbps, and there’s no way you’re going to hit that write speed on a single drive: the fastest retail drives don’t do more than ~2 Gbps, and even the Seagate Mach.2 only does around twice that thanks to its two head actuators.
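For a sense of scale, a back-of-the-envelope sketch of what that means for rebuilding a 140TB drive, assuming ~250 MB/s (~2 Gbps) sustained writes; both rates are rough assumptions, not measured figures:

```python
def rebuild_hours(capacity_tb, write_mb_per_s):
    # Time to write one full drive's worth of data at a sustained rate.
    return capacity_tb * 1e6 / write_mb_per_s / 3600

sas4_mb_per_s = 24_000 / 8      # SAS-4 link limit, ~3000 MB/s (never reached by one HDD)
drive_mb_per_s = 250            # ~2 Gbps sustained write, generous for a single HDD

print(f"link-limited:  {rebuild_hours(140, sas4_mb_per_s):.0f} h")   # ~13 h in theory
print(f"drive-limited: {rebuild_hours(140, drive_mb_per_s):.0f} h")  # ~156 h, close to a week
```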