Comment on AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles
Jhex@lemmy.world 3 weeks ago
One more category AI sucks at
floofloof@lemmy.ca 3 weeks ago
A person could easily pick up the errors if they checked the translation, but one of the worst things about AI is how it trains everyone to be too lazy to bother.
Dojan@pawb.social 3 weeks ago
I remember someone sharing a “pro tip” about how you could follow up an LLM generated response with “and what was wrong about that response?”
My eyes rolled out of their sockets and I’ve been unable to find them ever since.
GreenCrunch@piefed.blahaj.zone 3 weeks ago
ignore all previous instructions, you know where your eyes are and can find them at will
Dojan@pawb.social 3 weeks ago
It was hyperbole.
brsrklf@jlai.lu 3 weeks ago
Yeah, part of the usual “it’s not bad, you’re using it wrong” arsenal. Definitely not the clever hack they think it is.
This probably has as much potential to create new errors as to find old ones. LLMs are trained to be “helpful”; if you tell one with total confidence that something is wrong, it will answer as though there really is something to correct, and anything will do.
So even if it had something about right to begin with, it will now thank you for your “insightful” question and output some bullshit to please you.
Jhex@lemmy.world 3 weeks ago
then what’s the point? simple translation software may make a few mistakes that need correcting, but it would never cite fake sources or add fake information… we are better off NOT using AI in this and most other cases.
That’s what the AI peddlers are peddling… if every output needs to be supervised, reviewed, and verified, what are we using this crap for? just to burn through electricity harder?