Comment on We have to stop ignoring AI’s hallucination problem

5gruel@lemmy.world 5 months ago

I’m not convinced by the claim that “a human can say ‘that’s a little outside my area of expertise’, but an LLM cannot.” The training data set surely contains plenty of examples of qualified answers and expressions of uncertainty, so why would the model be unable to generate that kind of output? I don’t see why that specifically would require “understanding.” I suspect better human reinforcement would make such answers possible.
