Comment on AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles
Grimy@lemmy.world 3 weeks ago
All you have to do is ask for a direct translation and it does it fine. This is plain incompetence.
That being said, I’ve noticed there are wild differences between articles depending on the language. Mostly it’s extra content in the home language (so the French article about a French city will have much more info), but sometimes, especially when it comes to Hebrew and Israel, you get outright conflicting information.
They should have implemented checks for this a long time ago.
squaresinger@lemmy.world 3 weeks ago
“Just tell it to not make mistakes.”
Yeah, right.
Grimy@lemmy.world 3 weeks ago
I mean, you can test it yourself if you speak more than one language. If you ask for a direct translation and stress not to add content or change the text, it will do a very good job. Translation is a use case where LLMs really shine.
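To give a concrete idea of what I mean by instructing it properly, here’s a rough sketch of the kind of constrained prompt I’m talking about. I’m using the OpenAI Python client purely as an example; the model name is a placeholder, and any chat-style LLM API would work the same way:

```python
# Rough sketch of a "direct translation only" prompt.
# The model name is a placeholder; this is an illustration, not a recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a translator. Translate the user's text from French to English. "
    "Translate directly and completely. Do not add, remove, summarize, or "
    "reinterpret any content. Keep names, numbers, and citations exactly as given."
)

def translate(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",      # placeholder model name
        temperature=0,       # keep the output as literal as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(translate("La ville compte environ 50 000 habitants."))
```

The output still needs a human who speaks both languages to check it, but a constrained prompt like this keeps the model from “improving” the text on its own.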
tigeruppercut@lemmy.zip 3 weeks ago
Does it leave out hallucinations 100% of the time? Because otherwise, why not use non-LLM translation services (which, on their own, also don’t meet the standards for articles iirc)?
Grimy@lemmy.world 3 weeks ago
Whatever is used, I think nothing is going to be 100%, and everything should be verified by a native speaker. It is Wikipedia after all, not some blog.
Non-LLM services are worse in my opinion but it probably depends on the language (LLMs probably struggle with certain languages as well).
squaresinger@lemmy.world 3 weeks ago
There’s a huge difference between “Creates intelligible single-use text that’s good enough that I can understand what the text is roughly about” and “Creates text at a quality high enough to work as a quotable source”.
For the first use case, infrequent hallucinations are no problem. I read the text; if I know a bit about the topic I might catch an error, and if I don’t, it probably doesn’t matter much, especially for non-critical topics.
For the second use case, infrequent hallucinations are a massive problem. Most people who use Wikipedia use it like a primary source. Even though sources are linked, they don’t go hunting for them; they rely on the information in the article being accurate. Every article is read not just once by one person, but thousands or hundreds of thousands of times. That means every single line is read and believed. You can bet that if there’s a hallucination in there, someone will read it and believe it. That requires a completely different level of accuracy, and doing that kind of crap translation work on a scale as large as OKA is a massive disservice.
Grimy@lemmy.world 3 weeks ago
That’s why I specified in another comment that everything should be verified. My point is that LLMs, when properly guided, are better than other automatic translation services, and that hallucinations can easily be avoided with proper prompting.
Also worth mentioning that there’s already massive variation in user-generated translations; some of it is well-meaning, while some, like in Israel’s case, isn’t.
I translate a lot of stuff for my work, and I don’t have any problems when I instruct it properly. I’m also there to verify. I never have to deal with hallucinations; mostly I’m just changing a word or two because I don’t like how it sounds (it uses overly complex words at times).
This is more about certain users being shit and either not checking their work or doing work they have no place doing. They would exist no matter what tools they use; it’s not the tool’s fault.
Tbh, I work in research and we would never use Wikipedia for anything. We can’t cite it, and any time I find a good tidbit on it and try to track down the source, I usually get a dead link or something altogether different that doesn’t support what the editor wrote. It probably depends a lot on the subject, but the sourcing isn’t very rigorous.
Bless them though, it’s an amazing site and they are still doing a stellar job considering how big it is.