Comment on AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles
brsrklf@jlai.lu 3 days ago

Yeah, part of the usual "it's not bad, you're using it wrong" arsenal. Definitely not the clever hack they think it is.
This probably has as much potential to create new errors as to find old ones. LLMs are trained to be "helpful": if you tell one with total confidence that something is wrong, it will answer as if there were something to correct, and anything will do.
So even if it had something right to begin with, now it will thank you for your "insightful" question and output some bullshit to please you.