Comment on This long-term data storage will last 14 billion years
boring_bohr@feddit.org 1 day ago
In case you missed it in the article, the transfer speeds are mentioned just two paragraphs prior to the one you cited:
Over the next three to four years, Kazansky said, SPhotonix aims to improve the data transfer speed of its technology from a write time of 4 megabytes per second (MBps) and read time of 30 MBps to a read/write speed of 500 MBps, which would be competitive with archival tape backup systems.
ieatpwns@lemmy.world 21 hours ago
I was so blindsided by the fact that the tech isn’t for consumers that I forgot to mention the r/w speeds.
GamingChairModel@lemmy.world 18 hours ago
Writing 360 TB at 4 MB/s will take over 1,000 days, almost 3 years. Retrieving 360 TB at 30 MB/s takes about 139 days. That capacity-to-bitrate ratio is going to be really hard to use in a practical way, and it’ll be critical to get that speed up. Even their target of 500 MB/s still means more than 8 days to read or write the data on one storage platter.
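For reference, a quick sanity check of those figures (assuming decimal units, 1 TB = 1,000,000 MB, and the 360 TB per-platter capacity from the article):

```python
# Quick sanity check of the transfer times, assuming decimal units
# (1 TB = 1,000,000 MB) and the 360 TB per-platter figure from the article.

CAPACITY_MB = 360 * 1_000_000  # 360 TB expressed in MB

def days_to_transfer(rate_mb_per_s: float) -> float:
    """Days needed to move the full platter at a given sustained MB/s."""
    return CAPACITY_MB / rate_mb_per_s / 86_400  # 86,400 seconds per day

for label, rate in [("write @ 4 MB/s", 4), ("read @ 30 MB/s", 30), ("target @ 500 MB/s", 500)]:
    print(f"{label}: {days_to_transfer(rate):,.0f} days")

# write @ 4 MB/s: 1,042 days (~2.9 years)
# read @ 30 MB/s: 139 days
# target @ 500 MB/s: 8 days
```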
kuberoot@discuss.tchncs.de 18 hours ago
One counterpoint - even with a weak speed-to-capacity ratio, it could be very useful to have this much storage for incremental backup solutions, where you keep a small index to check what needs to be backed up, only write new or modified data, and when restoring only read the indexes plus the data you’re actually restoring. That saves time on writes and lets you keep access to historical versions.
There are two caveats here, of course, assuming the platters are not rewritable. One, you need to be able to seek quickly to the latest index, which can’t reliably be at the start of the medium; and two, you need a format that works without rewriting any data, possibly with a footer (ZIP does this, with its central directory at the end of the file), which introduces extra complexity. Though I foresee a potential trick where each index leaves an unallocated block into which the offset of the next index can be written later.
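A rough sketch of what that reserved-pointer trick could look like (purely hypothetical: the record layout, the field sizes, and the assumption that the medium lets you go back and fill in a block you deliberately skipped are all illustrative, not anything from the article):

```python
# Hypothetical append-only layout: each backup session appends its data blocks
# plus an index record. Every index record starts with an 8-byte slot, written
# as zeros, that the NEXT session later fills in with the offset of its own
# index. A reader can then hop from the first index straight to the newest one
# without scanning the bulk data in between.

import io
import json
import struct

SLOT = struct.Struct("<Q")  # 8-byte offset of the next index record; 0 = none yet


def append_session(buf: io.BytesIO, files: dict[str, bytes], prev_index_off: int | None) -> int:
    """Append one incremental session and return the offset of its index record."""
    entries = {}
    for name, data in files.items():
        off = buf.seek(0, io.SEEK_END)
        buf.write(data)
        entries[name] = {"offset": off, "size": len(data)}

    index_off = buf.seek(0, io.SEEK_END)
    body = json.dumps(entries).encode()
    buf.write(SLOT.pack(0))                  # reserved slot, filled in by the next session
    buf.write(struct.pack("<I", len(body)))  # length of the index body
    buf.write(body)

    if prev_index_off is not None:
        buf.seek(prev_index_off)             # backfill the previous session's slot
        buf.write(SLOT.pack(index_off))      # (the one step write-once media must permit)
    return index_off


def latest_index(buf: io.BytesIO, first_index_off: int) -> dict:
    """Follow the chain of slots from the first index to the most recent one."""
    off = first_index_off
    while True:
        buf.seek(off)
        nxt = SLOT.unpack(buf.read(SLOT.size))[0]
        if nxt == 0:  # no newer index: read and return this one
            length = struct.unpack("<I", buf.read(4))[0]
            return json.loads(buf.read(length))
        off = nxt


# Two sessions, then jump straight to the newest index.
medium = io.BytesIO()
first = append_session(medium, {"a.txt": b"hello"}, None)
append_session(medium, {"a.txt": b"hello v2"}, first)
print(latest_index(medium, first))  # {'a.txt': {'offset': 52, 'size': 8}}
```

Walking the chain costs one short hop per backup session, so finding the latest index stays cheap even when the bulk data in between runs to hundreds of terabytes.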