Comment on We have to stop ignoring AI’s hallucination problem

Danksy@lemmy.world ⁨1⁩ ⁨month⁩ ago

It’s not a bug, it’s a natural consequence of the methodology. A language model can’t always be correct, because it doesn’t know what it is saying.
