And additionally, isn’t there a way to exploit this so we can store more stuff on PCs?
You have a notebook. On the first page, you put a table of contents. As you fill in pages, you note them down in the table of contents at the start.
When you want to delete something, instead of erasing the whole page right away (there are still hundreds of free pages, why waste the effort), you erase its entry in the table of contents.
Now if someone finds your notebook, according to the table of contents there is no file at page X. But if they were to look through every single page, they would be able to find the page eventually.
This is loosely how file systems work. You can’t really use it to boost storage: the number of pages is finite, and if you need to write a new page, anything not listed in the contents is fair game to be overwritten.
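Here’s the same idea as a toy Python sketch, not a real filesystem. The names (`Notebook`, `delete`, `scavenge`) are made up for illustration; real filesystems are far more involved.

```python
PAGE_COUNT = 8

class Notebook:
    def __init__(self):
        self.pages = [""] * PAGE_COUNT   # the physical pages
        self.toc = {}                    # filename -> page number

    def write(self, name, text):
        # reuse any page not listed in the table of contents
        used = set(self.toc.values())
        page = next(i for i in range(PAGE_COUNT) if i not in used)
        self.pages[page] = text
        self.toc[name] = page

    def delete(self, name):
        # "quick delete": only the table-of-contents entry goes away,
        # the page itself is left alone until something overwrites it
        del self.toc[name]

    def scavenge(self):
        # what a recovery tool does: ignore the TOC and read every page
        return [p for p in self.pages if p]

nb = Notebook()
nb.write("diary.txt", "my secret plans")
nb.delete("diary.txt")
print(nb.toc)         # {} -- the table of contents says nothing is here
print(nb.scavenge())  # ['my secret plans'] -- but the page still holds it
```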
zeppo@lemmy.world 2 months ago
Because of how filesystems work. There’s basically an index that tells the OS which files are stored where on the disk. The quickest way to delete something simply removes the entry in that table. The data is still there, though. So a data recovery program would read the entire disk and try to rebuild the file allocation table or whatever by detecting the beginnings and ends of files. This worked better on mechanical drives than SSDs.
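Very rough sketch of that "scan the whole disk" approach (often called file carving), assuming you have a raw image file to read; `disk.img` is a placeholder path. JPEGs start with the bytes `FF D8 FF` and end with `FF D9`, which is roughly what carving tools key on, though real ones handle fragmentation and a lot more formats.

```python
def carve_jpegs(image_path):
    # ignore any file table: just scan raw bytes for JPEG start/end markers
    data = open(image_path, "rb").read()
    found = []
    start = data.find(b"\xff\xd8\xff")
    while start != -1:
        end = data.find(b"\xff\xd9", start)
        if end == -1:
            break
        found.append(data[start:end + 2])   # include the end marker
        start = data.find(b"\xff\xd8\xff", end)
    return found

# for i, blob in enumerate(carve_jpegs("disk.img")):
#     open(f"recovered_{i}.jpg", "wb").write(blob)
```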
pearsaltchocolatebar@discuss.online 2 months ago
Yup, and many security suites will include a tool that writes all 0s or garbage to those sectors so the data can’t be recovered as easily (you really need multiple passes for it to be gone for good).
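A minimal sketch of that "overwrite before delete" idea for a single file: write garbage over its bytes a few times, flush, then unlink it. This is naive on purpose; on SSDs, wear leveling can redirect the writes elsewhere, so whole-drive tools are the safer bet.

```python
import os

def wipe(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random garbage over the old data
            f.flush()
            os.fsync(f.fileno())        # push it past the OS cache
    os.remove(path)

# wipe("secrets.txt")
```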
zeppo@lemmy.world 2 months ago
right, i’m super out of date but you’d want to use shred or some `dd if=/dev/urandom of=<device>` thing to securely erase them.