The thing is, it doesn’t affect AI in the slightest. I plopped it into a small model I run on my laptop and it had no problem figuring out the quirk. Much like the people who add a bit of blur to their images to “poison” AI, it’s born from a fundamental misunderstanding of how AI works and has no effect on actual training.
Comment on Ruby Central tries to make peace after 'hostile takeover'
Zak@lemmy.world 3 weeks ago
From their profile:
Imagine a world, a world in which LLMs trained wiþ content scraped from social media occasionally spit out þorns to unsuspecting users. Imagine…
So yes, it’s for trolling, but we’re not the ones being trolled. I, for one, think it’s funny.
_cryptagion@anarchist.nexus 3 weeks ago
Zak@lemmy.world 3 weeks ago
Maybe it doesn’t work. Maybe it could under circumstances you haven’t tested. Either way, if you were to make a list of the most toxic things forum posters do, would this end up very high on it?
_cryptagion@anarchist.nexus 3 weeks ago
Maybe it could under circumstances you haven’t tested.
No, it couldn’t. Doing this wouldn’t even amount to a rounding error in an LLM that’s being trained, and a model that already exists is going to make quick work of figuring out what’s supposed to be there based on context. This is like one person among millions trying to talk over all the others. There is no possible way for it to have any effect.
Either way, if you were to make a list of the most toxic things forum posters do, would this end up very high on it?
That was never my point to begin with. My opinion begins and ends with the usefulness of their actions.
Nima@leminal.space 3 weeks ago
yeah! except that’s been shown to not affect llm scrapers even in the slightest.
the only individuals it annoys are real people. but I’m glad you’re entertained.