Comment on ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw
CosmoNova@lemmy.world 5 days ago
And it’s easy to figure out why, or at least I believe it is.
LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that’s the thing. Very few people go on the internet to answer a question with “I don’t know.” (Unless you look at Amazon Q&A sections.)
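The "word calculator" point can be made concrete with a toy sketch of next-token prediction. This is not any real model's code; the logits are made-up numbers purely to show the mechanics: the model always assigns probabilities over tokens and emits one, so there is no built-in "I don't know" outcome unless that phrase itself happens to be the most probable continuation.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for the next token after a question the model
# has no good answer for. Some token always wins anyway.
logits = {"Paris": 2.1, "Berlin": 1.9, "unknown": 0.3}
probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding picks "Paris"
```

Whatever the scores are, `max` (or sampling) always yields *some* token; silence is simply not in the output space.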
My guess is they act all-knowing because of how interactions work on the internet.
vxx@lemmy.world 5 days ago
The AI gets trained by a point system. Good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give any answer instead of no answer at all.
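The point-system idea above can be sketched in a few lines. This is a deliberately crude stand-in for a real reward model (actual RLHF reward models are learned networks, not hard-coded rules); the `toy_reward` function and its scores are invented here just to show why a flat positive reward for answering, versus zero for staying silent, pushes a model to always produce something.

```python
def toy_reward(answer: str) -> int:
    # Assumed scoring scheme from the comment: any answer earns points,
    # an empty (no-answer) response earns zero.
    if not answer.strip():
        return 0
    return 5  # flat positive reward just for producing *something*

# Hypothetical candidate responses to a nonsense question.
candidates = ["", "You can't lick a badger twice because its fur resets."]
best = max(candidates, key=toy_reward)  # the made-up answer wins
```

Under this scoring, confidently making something up strictly dominates declining to answer, which matches the behavior the comment describes.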