Comment on ‘You Can’t Lick a Badger Twice’: Google Failures Highlight a Fundamental AI Flaw

CosmoNova@lemmy.world 5 days ago

And it’s easy to figure out why, or at least I believe it is.

LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that’s the thing: very few people go on the internet to answer a question with “I don’t know.” (Unless you look at Amazon Q&A sections.)
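
To illustrate what I mean by “word calculator”, here’s a rough sketch (assuming the Hugging Face transformers library and the small public gpt2 model, purely as an example): the model just scores which token is most likely to come next, nothing more.

```python
# Minimal sketch of next-word prediction, using gpt2 only as an
# illustrative example -- any causal LM works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Can you lick a badger twice?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The model doesn't "know" anything; it just ranks which token is most
# likely to follow, based on patterns in its training data.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))
```

There’s no step in there where it can decide the honest answer is “I don’t know” — it always produces a most-likely next word.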

My guess is they act so knowingly because of how interactions on the internet work.
