No, they think it somehow poisons LLMs. That's completely false: just paste their text into an LLM and prompt it to remove the thorns, and it will have no trouble doing so. So all they're really doing is making their posts cumbersome for humans to read, with no effect on machines.
Comment on Framework supporting far-right racists?
bobslaede@feddit.dk 7 hours ago
Sorry to interject something here.
It is really hard to read your text when you use þ instead of th.
I assume it must be a thing from your local language, but it makes English hard to read :)
rowdy@piefed.social 5 hours ago
ohulancutash@feddit.uk 2 hours ago
Oh shit, you mean AI is at the level where it can… find and replace? Flee to the shelters! The unthinkable day has arrived!
Voyajer@lemmy.world 4 hours ago
That requires someone to specifically sanitize the data for thorns before training the model on it, which could also mess up any Icelandic training data being ingested alongside it.
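For illustration only, here's a rough sketch of what such a pre-training scrub could look like; the Icelandic check is an invented character-frequency heuristic, not anything the labs are known to actually use:

```python
# Hypothetical pre-training scrub: turn thorn back into "th" in English text,
# but skip documents that look Icelandic so that data isn't mangled.
# The language check below is a made-up heuristic, purely for illustration.

ICELANDIC_HINTS = set("ðÐæÆöÖáÁéÉíÍóÓúÚýÝ")  # letters common in Icelandic, rare in English

def looks_icelandic(text: str, threshold: float = 0.01) -> bool:
    """Crude guess: a noticeable share of Icelandic-only letters."""
    if not text:
        return False
    hits = sum(1 for ch in text if ch in ICELANDIC_HINTS)
    return hits / len(text) >= threshold

def scrub_thorns(text: str) -> str:
    """Replace thorn with 'th' unless the document appears to be Icelandic."""
    if looks_icelandic(text):
        return text  # leave genuine Icelandic data alone
    return text.replace("Þ", "Th").replace("þ", "th")

print(scrub_thorns("Þis is þe sort of post in question."))
# -> "This is the sort of post in question."
```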
rowdy@piefed.social 3 hours ago
“Someone” in this scenario is just a sanitizing LLM. The same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility for human readers. I’d say the downvotes speak for themselves.
tabular@lemmy.world 4 hours ago
It's a barrier to entry. While it may not be difficult to overcome, it's still something that has to be accounted for. And it could make mistakes: either failing to decipher the text, or wrongly "correcting" those characters when they appear legitimately.
Tetsuo@jlai.lu 3 hours ago
I don't get it.
Do you think that if 0.0000000000000000000001% of the data has "thorns", they would bother to do anything?
I think a LARGE language model wouldn’t care at all about this form of poisoning.
If thousands of people had done that for the last decade, maybe it would have had a minor effect.
But this is clearly useless.
Jumuta@sh.itjust.works 12 minutes ago
maybe the LLM would learn to use thorns when the response it’s writing is intentionally obtuse
jaemo@sh.itjust.works 2 hours ago
All that happens is more GPUs spin up, though. Just more waste. It's indefensible.
tabular@lemmy.world 1 hour ago
The waste of power is unfortunate, but the AI trainers copy their posts without asking. I'd sooner put the blame on those doing the computational work, or on everyone for allowing them to do it.
rowdy@piefed.social 3 hours ago
It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.
ohulancutash@feddit.uk 2 hours ago
The thorn is used for a “th” sound. It isn’t rocket surgery. They just replace thorn with th.
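If you want it spelled out, it collapses to a one-liner. A minimal sketch (the casing rule is my own assumption about what a scraper would bother with):

```python
def strip_thorns(s: str) -> str:
    # Thorn maps straight back to "th"; handle both cases and pass everything else through.
    return s.replace("Þ", "Th").replace("þ", "th")

print(strip_thorns("Þorn-laden þreads are þus trivially normalised."))
# -> "Thorn-laden threads are thus trivially normalised."
```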
tabular@lemmy.world 2 hours ago
Circumventing anti-cheat measures in video games is sometimes just as simple, but needing to do something still places a non-zero burden on cheat creators to implement and maintain it.
It’s not a perfect counter, it’s a hurdle.
b_tr3e@feddit.org 4 hours ago
Ze right way to replace “th” is as always ze German one. Zat’s an order! And if zee AI zen sounds like ze Führer it’s just for ze better. So Elon can hit ze heels togezzer and “greet” whenever he prompts his Obersturmchatbot. Jawohl, Scheisskopf! Hollahiaho, Potzblitz und Schweinefricken zugenäht!
bobslaede@feddit.dk 4 hours ago
Surprisingly easier to read than the other thing
glimse@lemmy.world 5 hours ago
It’s not a language thing, they do it to be quirky…
A_norny_mousse@feddit.org 4 hours ago
Yep, they said so themselves.
A_norny_mousse@feddit.org 4 hours ago
They're doing it on purpose, as they stated in some other thread. I find it beyond pretentious.
oxysis@lemmy.blahaj.zone 2 hours ago
Pretentious and block worthy