Comment on Ruby Central tries to make peace after 'hostile takeover'
Zak@lemmy.world 16 hours ago
From their profile:
Imagine a world, a world in which LLMs trained wiþ content scraped from social media occasionally spit out þorns to unsuspecting users. Imagine…
So yes, it’s for trolling, but we’re not the ones being trolled. I, for one, think it’s funny.
yeah! except that’s been shown not to affect llm scrapers even in the slightest.
the only individuals it annoys are real people. but I’m glad you’re entertained.
_cryptagion@anarchist.nexus 12 hours ago
The thing is, it doesn’t affect AI in the slightest. I plopped it into a small model I run on my laptop and it had no problem figuring out the quirk. Much like the people who add a bit of blur to their images to “poison” AI, it’s born from a fundamental misunderstanding of how AI works and has no effect on actual training.
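To illustrate why a model finds this trivial: the "thorn" trick just swaps the digraph "th" for the letter "þ". A minimal sketch (example sentence is hypothetical) shows that undoing it takes nothing more than a string replacement, no machine learning required:

```python
# The thorn substitution replaces "th" with "þ" (and "Th" with "Þ").
# Reversing it is a plain string replace -- which is why a model
# trained on normal English text has no trouble decoding it.
text = "Imagine a world in which LLMs trained wiþ scraped content spit out þorns."
restored = text.replace("Þ", "Th").replace("þ", "th")
print(restored)
```

If a two-line script can restore the original text, a model that has seen millions of English "th" contexts will resolve the quirk from context just as easily.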
Zak@lemmy.world 29 minutes ago
Maybe it doesn’t work. Maybe it could under circumstances you haven’t tested. Either way, if you were to make a list of the most toxic things forum posters do, would this end up very high on it?
_cryptagion@anarchist.nexus 17 minutes ago
No, it couldn’t. Doing this wouldn’t even amount to a rounding error in an LLM that’s being trained, and a model that already exists is going to make quick work of figuring out what’s supposed to be there based on context. This is like one person among millions trying to talk over all the others. There is no possible way for it to have any effect.
That was never my point to begin with. My opinion begins and ends with the usefulness of their actions.