Comment on How AI and Wikipedia have sent vulnerable languages into a doom spiral
HereIAm@lemmy.world 3 days ago
I understand you’re trying to be nice to minority languages, but if you write research papers, you either limit your audience to your own country or you publish in English (I guess Spanish is pretty worldwide as well). If you set out to read a new paper in your field, I doubt you’d pick up something in Mongolian.
Even in Sweden I would write a serious paper in English, so that more of the world could read it. Yes, we have textbooks for our courses that are in Swedish, but I doubt many books covering LLMs are being published in Swedish right now, for example.
chloroken@lemmy.ml 3 days ago
I’m not “trying to be nice to minority languages”; I’m directly pushing back against the chauvinistic idea that Wikipedia is so important that those without it are somehow inferior.
As for scientific papers, it’s called a translation. One can write academic literature in one’s native language and have it translated for more reach. That isn’t the case with Wikipedia, which is constantly being edited.
HereIAm@lemmy.world 3 days ago
No one is saying that those who can’t access or read the English Wikipedia are inferior. The issue here is when what’s in a non-English Wikipedia article is misleading or flat-out harmful (like the article says about growing crops), because amateurish attempts at machine translation got it very wrong. So what Greenland did was shut down its poorly translated and maintained wiki instead of letting it fester with misinformation. And the issue compounds when LLMs scrape Wikipedia as a source to learn new languages.
Alaknar@sopuli.xyz 3 days ago
I think you missed the problem described here.
The “doom spiral” is not because of the English Wiki; that has nothing to do with it.
The problem described is that people who don’t know a “niche” language try to contribute to a niche Wiki by using machine translation/LLMs.
As per the article:
Now, another problem is Model Collapse (or, well, a similar phenomenon strictly in terms of language itself).
We now have a bunch of “niche” languages’ Wikis containing such errors… and those are being used to train machine translators and LLMs to handle these languages. This contaminates their input data with errors and hallucinations, but since this is the training data, the models treat everything in it as the truth, propagating the errors/hallucinations forward.
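Just to make that loop concrete, here’s a toy sketch of how the error rate ratchets upward when each model generation trains on the previous one’s output (all numbers are made up for illustration):

```python
# Toy model of the feedback loop: each new model generation trains on text
# that includes the previous generation's mistakes, so the error rate in
# the corpus can only grow. All numbers are invented for illustration.

def corpus_error_after_training(err, fresh_error_rate=0.02):
    # Existing errors get reproduced verbatim; the model also mangles a
    # fraction of the text that was still correct.
    return err + (1 - err) * fresh_error_rate

err = 0.10  # assume 10% of a niche-language wiki starts out mistranslated
for generation in range(1, 6):
    err = corpus_error_after_training(err)
    print(f"generation {generation}: ~{err:.0%} of the training text is wrong")
```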
I honestly have no clue where you’re getting anything chauvinistic here.
AA5B@lemmy.world 3 days ago
Is it even getting misused? Spreading knowledge via machine translation where no human translators are available has to be better than not translating at all. As long as there is transparency so people can judge the results…
And AI training that trusts everything it reads is a larger systemic issue, not limited to this niche.
Perhaps part of the solution is machine-readable citations. Maybe a search engine or AI could provide better results if it knew what was human-generated vs machine-generated. But even then you have huge gaps: on one side, untrustworthy humans (like comedy), and on the other, machine-generated facts, such as from a database.
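For illustration only, a per-citation provenance tag could look something like this (the field names are hypothetical, not an existing standard):

```python
# Hypothetical provenance record a wiki could expose alongside each
# citation, so crawlers and search engines can weight sources by how
# they were produced. This schema is invented for illustration.

citation = {
    "url": "https://example.org/source",  # placeholder source URL
    "supports_claim": "population of Canada",
    "origin": "machine_translated",       # "human" | "machine_translated" | "generated"
    "human_reviewed": False,
}
```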
Alaknar@sopuli.xyz 3 days ago
Have you not read my entire comment…?
One of the Greenlandic Wiki articles “claimed Canada had only 41 inhabitants”. What use is a text like that? In what world is learning that Canada has 41 inhabitants better than going to the English version of the article and translating it yourself?
The contents of the citations are already used for training, as long as they’re publicly available. That’s not the problem. The problem is that LLMs do not understand context well; they are not, well, intelligent.
The “Chinese Room” thought experiment explains it best, I think: imagine you’re in a room with writing utensils and a manual. Every now and again a letter falls into the room through a slit in the wall. Your task is to take the letter and use the manual to write a response. If you see such and such shape, you’re supposed to write this and that shape on the reply paper, etc. Once you’re done, you throw the letter out through the slit. This goes back and forth.
To the person on the other side of the wall it seems like they’re having a conversation with someone fluent in Chinese whereas you’re just painting shapes based on what the manual tells you.
LLMs don’t understand the prompts; they generate responses based on the probability of certain characters, words, or sentences appearing next to each other when the prompt contains certain other characters, words, and sentences. That’s all there is.
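A toy sketch of that “probability of words being next to each other” idea, using a bigram model (real LLMs are vastly bigger, but the principle of predicting from co-occurrence statistics is the same):

```python
# Minimal bigram "language model": count which word follows which in the
# training text, then predict the most frequent successor. No understanding
# is involved, only co-occurrence counts - an LLM's principle in miniature.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word: str) -> str:
    # Return the word most often seen after `word` in the training text.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" (follows "the" twice, vs "mat" once)
```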
There was a famous botched experiment where scientists were training an AI model to detect tumours. It got really accurate on the training data, so they tested it on new cases gathered more recently. It reported 100% certainty that a tumour was present whenever the analysed photograph had a yellow ruler in it, because most photos of tumours in the training data included that ruler for scale.
“Machine generated facts” are not facts, they’re just hallucinations and falsehoods. It is 100% better to NOT have them at all and have to resort to the English wiki than to have them and learn bullshit.
Especially because, again, the contents of Wikipedia are absolutely being used to train further LLM models. The more errors there are, the worse the models become, eventually leading to a collapse of truth. We are already seeing this with whole “research” publications being generated, including “source” material invented on the spot to prove bogus results.
DoPeopleLookHere@sh.itjust.works 3 days ago
This assumes the AI is accurate, which is debatable.