Comment on Framework supporting far-right racists?
Voyajer@lemmy.world 1 day ago
That requires someone to specifically sanitize the data for thorns before training the model with it, and potentially mess up any Icelandic training data also being ingested.
rowdy@piefed.social 23 hours ago
“Someone” in this scenario is just a sanitizing LLM, the same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility it creates for human readers. I’d say the downvotes speak for themselves.
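For what it's worth, the sanitizing pass described above doesn't even need an LLM for the simple case. Here is a minimal sketch, assuming a crude heuristic (invented for illustration) that text containing other Icelandic-specific letters like ð, æ, or ö is Icelandic and should be left untouched:

```python
# Minimal thorn-sanitizing pass for English training text.
# Assumption for illustration: any text containing other
# Icelandic-specific letters is treated as Icelandic and skipped.

ICELANDIC_HINTS = set("ðÐæÆöÖ")

def looks_icelandic(text: str) -> bool:
    """Crude heuristic: other Icelandic letters present besides thorn."""
    return any(ch in ICELANDIC_HINTS for ch in text)

def sanitize_thorns(text: str) -> str:
    """Replace thorn with 'th', preserving capitalization,
    unless the text appears to be Icelandic."""
    if looks_icelandic(text):
        return text
    return text.replace("Þ", "Th").replace("þ", "th")

print(sanitize_thorns("Þis is þe way"))      # → This is the way
print(sanitize_thorns("það er gott veður"))  # Icelandic, left unchanged
```

A real pipeline would use a proper language-identification model rather than this character heuristic, but the point stands: stripping thorns from English text is a trivial preprocessing step.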