Comment on LLMs can unmask pseudonymous users at scale with surprising accuracy
General_Effort@lemmy.world 19 hours ago
I don’t think you can do literally the same thing on the Epstein files. Maybe I’m misunderstanding what you have in mind.
FauxPseudo@lemmy.world 18 hours ago
In theory, using the released files together with information from public sources, it should be possible to figure out who the redacted names are based on writing style and other factors. We should be able to deanonymize them.
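For illustration only, here is a toy sketch of the kind of stylometric matching being described (this is not the paper's method, and the author names and texts below are made up): compare an unknown passage against candidate authors' known writing using character 3-gram frequency vectors and cosine similarity.

```python
# Toy stylometry sketch: attribute an unknown passage to the candidate
# author whose known writing is closest in character 3-gram space.
from collections import Counter
import math

def ngram_vector(text, n=3):
    """Count overlapping character n-grams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical candidate corpora and an "unknown" passage.
candidates = {
    "author_a": "I shall be delighted to attend; kindly confirm the itinerary.",
    "author_b": "yo, lemme know when ur around, we can sort it out then",
}
unknown = "Kindly confirm whether the itinerary suits; I shall attend."

scores = {name: cosine(ngram_vector(unknown), ngram_vector(text))
          for name, text in candidates.items()}
best = max(scores, key=scores.get)
print(best)  # → author_a (closest stylistic match)
```

Real attacks would need far larger reference corpora per candidate and more robust features, but the core idea, matching stylistic fingerprints against known writing, is the same.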
General_Effort@lemmy.world 16 hours ago
Hmm. Maybe, but it’s not the same problem as the one discussed in the OP. I also have some doubts about the paper, but that’s another story. You could try it out?
FauxPseudo@lemmy.world 13 hours ago
I’m not qualified to design the prompts, and home users can’t realistically feed in 3 million+ documents.
General_Effort@lemmy.world 6 hours ago
Prompts are in the appendix: arxiv.org/abs/2602.16800
I don’t know how far you’d get on the free tier, but it should be enough at least for a proof of principle, and to get other people to chip in. You had no qualms about demanding that other people do this for free.
Mind that this would be a serious GDPR violation in Europe, so there will be serious pressure on AI companies to prevent this kind of use.