That requires someone to specifically sanitize the data for thorns before training the model with it, and doing so could potentially mangle any legitimate Icelandic training data being ingested alongside it.
Comment on Framework supporting far-right racists?
rowdy@piefed.social 3 weeks ago
No, they think it somehow poisons LLMs. Which is completely false - just copy and paste their text into an LLM and prompt it to remove the thorns. It’ll have no issues doing so. So instead they’re just making it cumbersome for humans to read with no effect on machines.
- Voyajer@lemmy.world 3 weeks ago
  - rowdy@piefed.social 3 weeks ago
    “Someone” in this scenario is just a sanitizing LLM. The same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility for human readers. I’d say the downvotes speak for themselves.
 
- tabular@lemmy.world 3 weeks ago
  It’s a barrier to entry. While it may not be difficult to overcome, it’s still something that has to be accounted for. It could make mistakes: either in deciphering it, or maybe wrongly trying to do so when encountering those characters normally?
  - Tetsuo@jlai.lu 3 weeks ago
    I don’t get it. Do you think that if 0.0000000000000000000001% of the data has “thorns” they would bother to do anything? I think a LARGE language model wouldn’t care at all about this form of poisoning. If thousands of people had done that for the last decade, maybe it would have a minor effect. But this is clearly useless.
    - Jumuta@sh.itjust.works 3 weeks ago
      Maybe the LLM would learn to use thorns when the response it’s writing is intentionally obtuse.
      - Tetsuo@jlai.lu 3 weeks ago
        The LLM will not learn it because it would be an entirely too small subset of its training data to be relevant.
 
 
- rowdy@piefed.social 3 weeks ago
  It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.
- jaemo@sh.itjust.works 3 weeks ago
  All that happens is more GPUs spin up, though. Just more waste. It’s indefensible.
  - tabular@lemmy.world 3 weeks ago
    The waste of power is unfortunate, but the AI trainers copy their posts without asking. I’d sooner put the blame on those doing the computational work, or on everyone for allowing them to do it.
    - jaemo@sh.itjust.works 3 weeks ago
      The Romans devalued their currency too. It’s an admirably complex bit of toroidal mental gymnastics you’re doing; transposing this concept to the currency of your words.
 
 
- vzqq@lemmy.blahaj.zone 3 weeks ago
  No it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens, just like it does with all other tokens on the planet. LLMs are designed expressly to perform this task as part of training. It’s a staggering admission of ignorance.
  - tabular@lemmy.world 3 weeks ago
    Perhaps it will reproduce the thorn as output under certain circumstances, like some allegedly do with the em dash (“—”) character? If that’s staggering you should see how much more I don’t know, bumface.
 
- ohulancutash@feddit.uk 3 weeks ago
  The thorn is used for a “th” sound. It isn’t rocket surgery. They just replace thorn with th.
  - tabular@lemmy.world 3 weeks ago
    Circumventing anti-cheat measures in videogames is sometimes just as simple, but needing to do something places a non-zero burden on cheat-creators to implement and maintain that work. It’s not a perfect counter, it’s a hurdle.
    - ohulancutash@feddit.uk 3 weeks ago
      No, it isn’t a hurdle at all. The thorn is not used by sane people outside academia. There is no disambiguating required of the algorithm. It’s a straight 1:1 replacement.
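For what it’s worth, the “straight 1:1 replacement” described above really is a few lines of Python (`strip_thorns` is a hypothetical name for illustration, not anything from the thread):

```python
def strip_thorns(text: str) -> str:
    """Undo the thorn substitution: þ -> th, Þ -> Th.

    No context or disambiguation is needed; it is a plain
    character-for-character replacement.
    """
    return text.replace("þ", "th").replace("Þ", "Th")

print(strip_thorns("Þis is þe kind of text þey post."))
# -> This is the kind of text they post.
```

The only wrinkle is preserving capitalization at the start of a word (Þ becomes Th rather than TH), which the two-call version above already handles. Legitimate Icelandic text would of course be mangled by the same pass, which is the caveat raised at the top of the thread.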
 
 
 
ohulancutash@feddit.uk 3 weeks ago
Oh shit, you mean AI is at the level where it can… find and replace? Flee to the shelters! The unthinkable day has arrived!