ipkpjersi@lemmy.ml 2 days ago

“the ability to satisfy goals in a wide range of environments”

That was not the definition of AGI even back before LLMs were a thing.

“Whether we’ll ever have thinking, reasoning and possibly conscious AGI is beyond the question. But I do think current AI is similar to existing brains today.”

That’s doing a disservice to AGI.

“Do you not agree that animal brains are just prediction machines?”

That’s doing a disservice to human brains. Humans are sentient; LLMs are not.

I don’t really agree with you.

LLMs are damn impressive, but they are very clearly not AGI, and I think that’s always worth pointing out.
