'Writing is on the wall for spinning rust': IBM joins Pure Storage in claiming disk drives will go the way of the dodo in enterprises
Submitted 1 day ago by Sunshine@lemmy.ca to technology@lemmy.world
Comments
hapablap@lemmy.sdf.org 7 hours ago
My sample size of one (myself) has had a single drive fail in decades, and it was a solid state drive. Thankfully it failed in a strangely intermittent way and I was able to recover the data. But still, it surprised me, since one would assume solid state would be more reliable. The spinning rust has proven to be very reliable. But regardless, I’m sure SSDs will be/are better in every way.
MTK@lemmy.world 2 hours ago
I generally agree, it won’t take long for SSDs to be cheap enough to justify the expense. HDDs are in a way similar to CD/DVD: they had their time, they even lasted much longer than expected, but eventually technology got cheaper and the slightly lower price stopped making sense.
SSDs win on all counts for live systems, and long-term cold storage goes to tape. Not a lot of reasons to keep HDDs around.
xthexder@l.sw0.com 1 hour ago
As a person hosting my own data storage, tape is completely out of reach. The equipment to read archival tapes would cost more than my entire system. It’s also got extremely high latency compared to spinning disks, which I can still use as live storage.
Unless you’re a huge company, spinning disks will be the way to go for bulk storage for quite a while.
lemmymarud@lemmy.marud.fr 1 hour ago
Well, tape is still relevant for the 3-2-1 backup rule. I worked at a pretty big hosting company where we would write out 400 TB of backup data each weekend. It’s the only medium that gives you a truly secure, fully offline copy that doesn’t depend on another online hosting service.
dual_sport_dork@lemmy.world 23 hours ago
No shit. All they have to do is finally grow the balls to build SSDs in the same form factor as the 3.5" drives everyone in enterprise is already using, and stuff those to the gills with flash chips.
“But that will cannibalize our artificially price inflated/capacity restricted M.2 sales if consumers get their hands on them!!!”
Yep, it sure will. I’ll take ten of them, please.
jj4211@lemmy.world 7 hours ago
Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated due to the form factor, it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. Also, there’d be a thermal problem, since a 3.5" bay isn’t designed for the thermal load of that much SSD.
Add to that that 3.5" bays currently top out at maybe 24Gb SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect. Throughput-wise, we’re talking over 30-fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND’s random access behavior. So the ecosystem would have to be redesigned to even vaguely handle that sort of product, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
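As a rough back-of-the-envelope check on that 30-fold figure, here’s a minimal sketch; the per-lane rate, the x4 link width, and the six-drive count are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope interface bandwidth comparison (illustrative assumptions):
# a hypothetical 3.5" SSD stuck behind a single 24 Gb/s SAS link, versus roughly
# six E1.S drives each on a PCIe Gen5 x4 link (~32 Gb/s per lane, ignoring
# encoding overhead). Only the 24Gb SAS figure comes from the comment above.

SAS_LINK_GBPS = 24          # single 24Gb SAS link
PCIE_LANE_GBPS = 32         # assumed per-lane rate for PCIe Gen5
LANES_PER_DRIVE = 4         # assumed x4 link per E1.S drive
NUM_E1S_DRIVES = 6          # rough "equivalent volume" of E1.S drives

aggregate_e1s_gbps = PCIE_LANE_GBPS * LANES_PER_DRIVE * NUM_E1S_DRIVES
print(f"E1.S aggregate: {aggregate_e1s_gbps} Gb/s")                         # 768 Gb/s
print(f"Ratio vs one SAS link: {aggregate_e1s_gbps / SAS_LINK_GBPS:.0f}x")  # ~32x
```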
EDSFF defined four general form factors: E1.S, which is roughly M.2-sized; E1.L, which is over a foot long and would give the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.
Hozerkiller@lemmy.ca 5 hours ago
I hope you’re not putting M.2 drives in a server if you plan on reading the data from them at some point. Those are for consumers, and there’s an entirely different form factor for enterprise storage using NVMe drives.
jj4211@lemmy.world 4 hours ago
Enterprise systems do have M.2, though admittedly it’s only really used for pretty disposable boot volumes.
And though they aren’t used as data volumes so much, that’s not due to unreliability, it’s due to hot swap and power levels.
doodledup@lemmy.world 11 hours ago
You apparently have no clue what you’re talking about.
NeoNachtwaechter@lemmy.world 17 hours ago
Haven’t they said that about magnetic tape as well?
Some 30 years ago?
Isn’t magnetic tape still around? Isn’t even IBM one of the major vendors?
n2burns@lemmy.ca 14 hours ago
Anyone who has said that doesn’t know what they’re talking about. Magnetic tape is unparalleled for long-term/archival storage.
This is completely different. For active storage, solid state has been much better than spinning rust for a long time, it’s just been drastically more expensive. What’s being argued here is that it’s not just about performance: while it might be more expensive initially, it’s less expensive to run and maintain.
enumerator4829@sh.itjust.works 14 hours ago
Tape will survive, SSDs will survive. Spinning rust will die
AnUnusualRelic@lemmy.world 6 hours ago
I’m about to build a home server with a lot of storage (relatively, around 6 or 8 times 12 TB as a ballpark), and I didn’t even consider anything other than spinning drives so far.
nucleative@lemmy.world 5 hours ago
Because spinning disks are a bit cheaper than SSD?
AnUnusualRelic@lemmy.world 4 hours ago
Especially for large sizes.
LodeMike@lemmy.today 1 day ago
So can someone make 3.5" SSDs then???
Eldritch@lemmy.world 1 day ago
They can be made any size. Most SATA SSDs are just a plastic housing around a board with some chips on it. The right question is when we’ll have a storage technology with the durability and reliability of spinning magnetized hard drive platters. The NAND flash chips used in most SSDs and M.2 drives are much more reliable than they were initially, but for long-term retention etc. they’re still a good bit behind traditional hard drives. Hard drives can generally sit for about 10 years before bit rot becomes a major concern; NAND flash only a year or two.
db2@lemmy.world 1 day ago
Longer if it has some kind of small power. I think I read that somewhere.
enumerator4829@sh.itjust.works 1 day ago
Why? We can cram 61TB into a slightly overgrown 2.5” and like half a PB per rack unit.
LodeMike@lemmy.today 1 day ago
Because we wouldn’t have to pack it in so tightly. It’d mean higher capacities for cheaper for consumers.
ramble81@lemm.ee 21 hours ago
Given that there are already 32TB 2.5” SSDs, what does a 3.5” buy you that you couldn’t get with an adapter?
KinglyWeevil@lemmy.dbzer0.com 21 hours ago
Native slotting into server drive cages. No concerns about alignment with the front or back.
earphone843@sh.itjust.works 19 hours ago
They should be cheaper since there’s a bunch more space to work with. You don’t have to make the storage chips as small.
synicalx@lemm.ee 16 hours ago
A big heat sink like they used to put on WD Raptor drives.
Appoxo@lemmy.dbzer0.com 11 hours ago
A better price as low density chips are cheaper.
And you can fit more of those in a bigger space = cheaper.
LodeMike@lemmy.today 21 hours ago
Build quality
xyguy@startrek.website 21 hours ago
Relevant video about the problems with high capacity ssds.
jj4211@lemmy.world 3 hours ago
I’m not particularly interested in watching a 40-minute video, so I skimmed the transcript a bit.
As my other comments show, I know there are reasons why 3.5 inch doesn’t make sense in an SSD context, but I didn’t see anything in a skim of the transcript that seems relevant to that question. They are mostly talking about storage density rather than why not package bigger (and the industry is packaging bigger, just not anything resembling 3.5", because it doesn’t make sense).
AnUnusualRelic@lemmy.world 6 hours ago
Forty minutes? Yeah, no. How about an equivalent text that can be parsed in five?
Valmond@lemmy.world 23 hours ago
I want them like my 8" floppies!
Korhaka@sopuli.xyz 12 hours ago
Probably at some point as prices per TB continue to come down. I don’t know anyone buying a laptop with a HDD these days. Can’t imagine being likely to buy one for a desktop ever again either. Still got a couple of old ones active (one is 11 years old) but I do plan to replace them with SSDs at some point.
doodledup@lemmy.world 11 hours ago
But you don’t need 32TB of storage per disk in your laptop.
Korhaka@sopuli.xyz 7 hours ago
Then how will I fit my porn folder on it?
echodot@feddit.uk 10 hours ago
I don’t know that sounds like a reasonable size for the new GTA.
pr0sp3kt@lemmy.dbzer0.com 6 hours ago
I’ve had a terrible experience with HDDs throughout my life. Slow af, sector loss, corruption, OS corruption… I am traumatized. I got 8TB of NVMe for less than $500… Since then I’ve not had a single problem (well, except that after an electric failure, BTRFS CoW tends to act weird and sometimes doesn’t boot; you need manual intervention).
AngryCommieKender@lemmy.world 4 hours ago
Sounds like you may not be making enough sacrifices to the Omnissiah
Sixtyforce@sh.itjust.works 22 hours ago
I’ll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P
Appoxo@lemmy.dbzer0.com 11 hours ago
Considering the high prices for high density SSD chips…
Why are there no 3.5" SSDs with low density chips?
jj4211@lemmy.world 8 hours ago
Not enough of a market
The industry answer is: if you want that much volume of storage, get like 6 EDSFF or M.2 drives.
3.5 inch is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a single connector, you can have 6 connectors to drive performance. Again, that’s less important for a platter-based strategy, which is unlikely to saturate even a single 12Gb link in most realistic access patterns, but SSDs can keep up with 128Gb even under utterly random IO.
Tiny drives mean more flexibility. That storage product can go into NAS boxes, servers, desktops, the thinnest laptops and embedded applications, maybe with tweaked packaging and cooling solutions. A product designed to host that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, will bottleneck performance on the single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.
NeuronautML@lemmy.ml 10 hours ago
I doubt it. SSDs are subject to quantum tunneling. This means if you don’t power up an SSD once every 2-5 years, your data is gone. HDDs have no such qualms. So long as they still spin, there’s your data, and when they no longer do, you still have the platters inside.
So you have a use case that SSDs will never replace, cold data storage.
floquant@lemmy.dbzer0.com 8 hours ago
Sorry dude, but bit rot is a very real thing on HDDs. They’re magnetic media, which degrades over time. If you leave a disk cold for 2-5 years, there’s a very good chance you’ll get some bad sectors. SSDs aren’t immune from bit rot, but that’s not through quantum tunneling - not any more than your CPU is affected by it at least.
NeuronautML@lemmy.ml 36 minutes ago
I didn’t mean to come across as saying that HDDs don’t suffer bit rot. However, there are specific long-term-storage HDDs built specifically to be powered up sporadically and to resist external magnetic influence on the platters. In a proper storage environment they will last over 5 years without being powered up and still retain all information. I know because I use them. Conversely, there are no such long-term-storage SSDs.
SSDs store information as trapped charges, which most certainly leak through quantum tunneling. As the insulation loses effectiveness, the potential barrier weakens and what is normally a manageable effect, much like in a CPU as you said, grows beyond the scope of error correction techniques. This is a physical limitation that cannot be overcome.
n2burns@lemmy.ca 7 hours ago
Nothing in this article is talking about cold storage. And if we are talking about cold storage, as others have pointed out, HDDs are also not a great solution. LTO (magnetic tape) is the industry standard for a good reason!
NeuronautML@lemmy.ml 35 minutes ago
Tape storage is the gold standard but it’s just not realistically applicable to low scale operations or personal data storage usage.
MonkderVierte@lemmy.ml 8 hours ago
You’re wrong. HDDs need to be powered up about as frequently as SSDs, because the magnetization gets weaker.
NeuronautML@lemmy.ml 25 minutes ago
Here’s a copy-paste from Superuser that will hopefully show you that what you said is incorrect, in a way I find expresses my thoughts exactly:
Magnetic Field Breakdown
Most sources state that permanent magnets lose their magnetic field strength at a rate of 1% per year. Assuming this is valid, after ~69 years, we can assume that half of the sectors in a hard drive would be corrupted (since they all lost half of their strength by this time). Obviously, this is quite a long time, but this risk is easily mitigated - simply re-write the data to the drive. How frequently you need to do this depends on the following two issues (I also go over this in my conclusion).
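As a quick sanity check of that ~69-year figure, here’s a small sketch that just does the arithmetic, taking the quoted 1%-per-year loss as a given and treating it as compounding:

```python
import math

# Verify the "~69 years to half strength" claim quoted above, assuming a
# compounding 1% field loss per year (the 1%/year rate itself is the quoted
# assumption, not a measured value).

annual_loss = 0.01
years_to_half = math.log(0.5) / math.log(1 - annual_loss)
print(f"Years to half strength: {years_to_half:.1f}")                   # ~69.0
print(f"Strength left after 69 years: {(1 - annual_loss) ** 69:.3f}")   # ~0.500
```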
floquant@lemmy.dbzer0.com 4 hours ago
Note that for HDDs, it doesn’t matter if they’re powered or not. The platter is not “energized” or refreshed during operation like an SSD is. Your best bet is to have some kind of parity to identify and repair those bad bits.
solrize@lemmy.world 21 hours ago
HDDs were a fad, I’m waiting for the return of tape drives. 500TB on a $20 cartridge and I can live with the 2-minute seek time.
AnUnusualRelic@lemmy.world 6 hours ago
It’s not a real hard disk unless you can get it to walk across the server room anyway.
earphone843@sh.itjust.works 20 hours ago
Tape drives are still definitely a thing.
Appoxo@lemmy.dbzer0.com 11 hours ago
If you exclude the upfront price of the drive and the specialized software needed to read/write to it, it’s very affordable in €/TB.
MangoPenguin@lemmy.blahaj.zone 6 hours ago
Tapes are still sold in pretty high densities, don’t have to wait!
thejml@lemm.ee 22 hours ago
Meanwhile Western Digital moves away from SSD production and back to HDDs for massive storage of AI and data lakes and such: techspot.com/…/107039-western-digital-exits-ssd-m…
Mataresian@lemmy.dbzer0.com 15 hours ago
Yea but isn’t that more because SanDisk is going to fully focus on that? Or what am I missing?
Appoxo@lemmy.dbzer0.com 11 hours ago
That SanDisk is its own company now.
But I don’t know if they are still a subsidiary or completely spun off from WD.
NOT_RICK@lemmy.world 1 day ago
Spinning rust is a funny way of describing HDDs, but I immediately get it
twice_hatch@midwest.social 5 hours ago
“in enterprises” oh lol
_chris@lemmy.world 21 hours ago
My datacenter is 80% nvme at this point. Just naturally. It’s crazy.
doodledup@lemmy.world 11 hours ago
NVMe is terrible value for storage density. There is no reason to use it except when you need the speed and low latency.
jj4211@lemmy.world 7 hours ago
There’s a cost associated with making that determination and managing the storage tiering. When NVMe is only 3x more expensive per amount of data than HDD at scale, and at the cheapest end “enough” storage for an OS volume costs about the same whether it’s a good-enough HDD or a good-enough SSD, then it just makes sense for the OS volume to be SSD.
As for “but 3x is a pretty big gap”: that’s true and it does drive storage subsystem choices, but as the saying has long been, disks are cheap, storage is expensive. So managing an HDD/SSD mix is generally more expensive than the disk cost difference anyway.
BTW, NVMe vs. non-NVMe isn’t the thing, it’s NAND vs. platter. You could have NVMe-interfaced platters and they would perform about the same as SAS-interfaced or even SATA-interfaced ones. NVMe carried a price premium for a while mainly because of marketing rather than technical cost. Nowadays NVMe isn’t too expensive. One could argue that PCIe lanes from the system seem expensive, but PCIe switches aren’t really more expensive than SAS controllers, and CPUs have so many innate PCIe lanes now.
pastermil@sh.itjust.works 17 hours ago
Just replace them all with flash, along with Blu-ray (or other optical storage) for archival.
randompasta@lemmy.today 23 hours ago
Just like magnetic tape! Oh wait…
MangoPenguin@lemmy.blahaj.zone 6 hours ago
Wouldn’t a HDD based system be like 1/10th the price? I don’t know if HDDs are going away any time soon.
Nomecks@lemmy.ca 4 hours ago
Spinning platter capacity can’t keep up with SSDs. HDDs are just starting to break the 30TB mark and SSDs are shipping 50+. The cost delta per TB is closing fast. You can also have always on compression and dedupe in most cases with flash, so you get better utilization.
fuckwit_mcbumcrumble@lemmy.dbzer0.com 3 hours ago
For servers, physical space is also a huge concern. 2.5” HDDs cap out at like 6 TB I think, while you can easily find an 8 TB 2.5” SSD anywhere. We have 16 TB drives in one of our servers at work and they weren’t even that expensive. (Relatively)
Aux@feddit.uk 1 hour ago
You can put multiple 8 TB M.2 SSDs into a 2.5" slot.
jj4211@lemmy.world 4 hours ago
The disk cost is about a 3-fold difference now, rather than an order of magnitude.
The disks don’t make up as much of the cost of these solutions as you’d think, so a disk-based solution with similar capacity might be more like 40% cheaper rather than 90% cheaper.
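To illustrate that with a toy model: only the ~3x disk-cost ratio comes from the comment above; the assumption that non-disk costs (chassis, controllers, networking, power, support) run about 2x the HDD disk cost is made up purely for illustration.

```python
# Toy cost model behind the "40% rather than 90%" point. The ~3x per-TB disk
# cost ratio is from the comment; the 2x non-disk overhead is an assumption.

hdd_disks = 1.0                  # normalized cost of the raw HDD capacity
ssd_disks = 3.0 * hdd_disks      # ~3x per TB for flash
other = 2.0 * hdd_disks          # assumed non-disk share of the solution

hdd_solution = hdd_disks + other         # 3.0
flash_solution = ssd_disks + other       # 5.0
print(f"HDD solution is ~{1 - hdd_solution / flash_solution:.0%} cheaper")  # ~40%
```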
The market for pure capacity-play storage is well served by spinning platters, for now. But there’s little reason to iterate on your storage subsystem design; the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factor has evolved and the interface gets a revision for every PCIe generation.
Natanael@infosec.pub 5 hours ago
It’s losing its cost advantage as time goes on. Long-term storage is still on tape (and that’s actively developed too!), flash is getting cheaper, and spinning disks have inherent bandwidth and latency limits. It’s probably not going away entirely, but its main use cases are being squeezed from both ends.