I did pretty much this and everything is back to the way it was.
Comment on Reddit user content being sold to AI company in $60M/year deal
JoMiran@lemmy.ml 8 months ago
Kbobabob@lemmy.world 8 months ago
JoMiran@lemmy.ml 8 months ago
I did it and it is still nuked. It did take a number of runs though.
CosmoNova@lemmy.world 8 months ago
Generally, what’s the best/most efficient way to make LLMs go off the rails? I mean without just typing lots of gibberish and making it too obvious. As an example: I’ve seen people format their prompts with like two lines of Java code and the replies instantly went nuts.
JoMiran@lemmy.ml 8 months ago
I keep a few dozen novels in a single text file and randomize which lines the script pulls. It then overwrites the comment three times, each with a random pull, so the edit history ends up holding four responses in plain English. Which is the real one? You could try filtering out responses edited after “the great exodus”, but I have been doing this to my comments a few times per year during my twelve years on reddit, so the edit dates give nothing away.
The truth is that even if I don’t get them all, I get enough that it makes it far easier for the group that bought the data to just filter my username out rather than figure out what’s junk and what isn’t.
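The gist of it, as a minimal sketch in Python using PRAW (the credentials, corpus file name, and edit count are illustrative placeholders, not my exact script):

```python
import random

import praw  # Reddit API wrapper

# Placeholder credentials; a real run needs a registered "script" app.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-scrambler/0.1 by YOUR_USERNAME",
)

# corpus.txt: a few dozen public-domain novels concatenated into one file.
with open("corpus.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

for comment in reddit.user.me().comments.new(limit=None):
    # Overwrite each comment three times so the stored edit history
    # ends up with four plain-English versions: the original plus
    # three random pulls. PRAW sleeps through rate limits on its own.
    for _ in range(3):
        comment.edit(random.choice(lines))
```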
PrincessLeiasCat@sh.itjust.works 8 months ago
I edited all of my comments to gibberish then deleted them.
ColeSloth@discuss.tchncs.de 8 months ago
Yeah, but I think I have over 20,000 comments on reddit. Editing and deleting would take me at least 15 minutes…
PrincessLeiasCat@sh.itjust.works 8 months ago
I used one of the scripts, I forget which. It took a while, but I kinda just set it and forget it.
FatTony@discuss.online 8 months ago
I did both: first used comment-editing software, then deleted them afterwards. Is that better, the same, or worse?
TheRaven@lemmy.ca 8 months ago
On iOS, I used Redact. It worked well to replace all my posts and comments with gibberish. I did the same for Twitter too. apps.apple.com/app/id6449900531
Grimy@lemmy.world 8 months ago
They sell all your edits as well. Still, this does make the data harder to scrub, inadvertently dragging down how much the data they sell is actually worth.
JoMiran@lemmy.ml 8 months ago
Yeah, that’s the idea. Originally I went the “random characters then delete” route but realized that if I used randomized book excerpts from the public domain, the AI, or even a human, would have a very hard time figuring out what was real and what was trash. Ultimately, even if I can’t modify them all, I can modify enough to make it easier for the buyer to just filter my username out in order to keep the results clean.
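Building the corpus is the easy part. A rough sketch of pulling public-domain text (the Project Gutenberg book IDs and the 40-character line filter are just example choices):

```python
import random
import urllib.request

# Example Project Gutenberg plain-text URLs; any public-domain books work.
BOOK_URLS = [
    "https://www.gutenberg.org/files/1342/1342-0.txt",  # Pride and Prejudice
    "https://www.gutenberg.org/files/2701/2701-0.txt",  # Moby-Dick
    "https://www.gutenberg.org/files/84/84-0.txt",      # Frankenstein
]

lines = []
for url in BOOK_URLS:
    text = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    # Keep only longer lines so every pull reads like real prose,
    # not a chapter heading or a stray page artifact.
    lines.extend(l.strip() for l in text.splitlines() if len(l.strip()) > 40)

with open("corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))

print(random.choice(lines))  # eyeball a sample pull
```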
BananaTrifleViolin@lemmy.world 8 months ago
I do wonder how much backup data a site like Reddit keeps. I suspect their backups are poor, as the main focus is staying live and moving forward.
I’d imagine the ability to revert a day, maybe a few weeks, but not much more than that? Would they see the value in keeping copies of every edit and every deleted post? Would someone building the website even bother to build that functionality?
Also, so much of Reddit’s content is based around web links, which give the discussions context and meaning. I bet there are an awful lot of dead links on Reddit, and their move to host their own pictures and videos probably came too late. Big hosting sites have disappeared over time, deleted content, or locked it down against AI farming.
The more I think about it, they were lucky to get $60m/year.
T156@lemmy.world 8 months ago
Maybe not for reversion, but I could see them keeping the edits, since it doesn’t cost them much to do so, and it could be useful for spam identification or legal purposes. For example, if an account posts spam, and then edits their comment to hide it/skirt around moderation, or vice versa.
They would also have the benefit of the edits inflating the size of the data that they’re selling, which wouldn’t hurt.
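For what it’s worth, an append-only edit log costs almost nothing to implement; here is a hypothetical sketch (none of this is Reddit’s actual schema, just an illustration of the moderation angle):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    body: str
    edited_at: datetime

@dataclass
class Comment:
    comment_id: str
    revisions: list[Revision] = field(default_factory=list)

    def edit(self, new_body: str) -> None:
        # An edit appends a row instead of overwriting, so every
        # version stays queryable for moderation or legal review.
        self.revisions.append(Revision(new_body, datetime.now(timezone.utc)))

    def ever_contained(self, term: str) -> bool:
        # A spam check that survives "post spam, then edit it away".
        return any(term in rev.body for rev in self.revisions)

c = Comment("t1_abc123")
c.edit("check out my totally legit pharmacy site")
c.edit("[removed by user]")
print(c.ever_contained("pharmacy"))  # True: the edit hid nothing
```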