Comment on We have to stop ignoring AI’s hallucination problem

mindlesscrollyparrot@discuss.tchncs.de ⁨5⁩ ⁨months⁩ ago

This seems to be a really long way of saying that you agree that current LLMs hallucinate all the time.

I’m not sure that the ability to update in response to new data would be enough on its own. They cannot form hypotheses and, even if they could, they have no way to test them.
