RobotToaster@mander.xyz 2 days ago
Everyone seems to be ignoring the fact that he only did this in response to a malicious dox attempt.
dan@upvote.au 2 days ago
It wasn’t a dox attempt though. The blog just collected information that was already publicly available on other sites.
betterdeadthanreddit@lemmy.world 2 days ago
As they should since it doesn’t matter.
SnotFlickerman@lemmy.blahaj.zone 2 days ago
Yeah, someone being shitty to you doesn’t mean you get to be full-fledged shitty in return; responding that way kind of proves your lack of trustworthiness to begin with. It’s like Nazis saying “leftists were mean to me by explaining how my politics made me a Nazi, so I’m gonna show them by Nazi-ing even harder!” It kind of undercuts the argument that the reason you got that way was because leftists were mean to you.
Anon518@sh.itjust.works 2 days ago
Unfortunately, they shot themselves in the foot by responding the way they did. They basically did the job of anyone who wants the site taken down and distrusted. It was probably the worst way they could have reacted. Such a tragedy to lose such a valuable website.
deathbird@mander.xyz 1 day ago
Yeah, ESH. His response of editing an archived page showed the site to be unreliable as an archive, and DDoSing the blog from the site as a counter to the dox attempt caused the site serious reputational harm as well.
It sucks because his site was actually more reliable than The Internet Archive.
adespoton@lemmy.ca 2 days ago
He only modified archived pages in response to a dox attempt?
And the thing is, the discovery of the modified pages revealed that this wasn’t even the first time he’d modified pages. And he used a real person’s identity to try to shift the blame.
Irrespective of the doxxing allegations, if he’s done all this multiple times already, it means the page archives can’t be trusted AND there’s no guarantee that anything archived with the service will be available tomorrow.
Seems like we need to switch to URLs that contain the SHA256 of the page they’re linking to, so we can tell if anything has changed since the link was created.
deathbird@mander.xyz 1 day ago
Actually a pretty good idea.
adespoton@lemmy.ca 1 day ago
Only works for archived pages though, because for any regular page, a large portion of the page will be dynamically generated; hashing the HTML will only say the framework hasn’t changed.
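The hash-in-URL idea could be sketched roughly like this: embed the SHA-256 of the archived snapshot in the link’s fragment, then recompute it on fetch. This is a minimal illustration only; the archive URL, the `#sha256=` fragment convention, and the page bytes are all made-up placeholders, not an existing scheme:

```python
import hashlib

def make_verifiable_url(archive_url: str, page_bytes: bytes) -> str:
    """Append the SHA-256 of the archived page to the URL as a fragment."""
    digest = hashlib.sha256(page_bytes).hexdigest()
    return f"{archive_url}#sha256={digest}"

def verify(url: str, fetched_bytes: bytes) -> bool:
    """Check the fetched content against the hash carried in the URL fragment."""
    _, _, fragment = url.partition("#sha256=")
    return hashlib.sha256(fetched_bytes).hexdigest() == fragment

# An unmodified copy passes; any edit to the archived page breaks the hash.
link = make_verifiable_url("https://archive.example/abc123", b"<html>snapshot</html>")
```

As the caveat above notes, this only pins down static snapshots: for a live page, any dynamic content changes the bytes and therefore the digest on every fetch.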
conorab@lemmy.conorab.com 1 day ago
You would need a way of verifying that the SHA256 is a true copy of the site at the time though and not a faked page. You could do something like have a distributed network of archives that coordinate archival at the same time and then using the SHA256 then be able to see which archives fetched exactly the same page at the same time through some search functionality. I mean if addons are already being used for doing the crawling then we may be mostly there already since said addons would just need to certify their archive and after that they can discard the actual copy of the page. You need need a way to validate those workers though since a bad actor could just run a whole bunch at the same time to legitimise a fake archival.