Comment on Have you know???.

Rhaedas@fedia.io ⁨13⁩ ⁨hours⁩ ago

Exhibit A would be how we label LLM successes as evidence of how "smart" it is, yet it's suddenly not so smart when it fails badly. That's totally expected for a pattern-matching algorithm, but surprising for something that supposedly has a process underneath considering its output in some way.

And when I say pattern matching, I'm not downplaying the complexity in the black box like many do. This is far more than autocomplete. But at its core it is still probability, not anything pondering the subject.
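To make the "probability at the core" point concrete, here's a toy sketch of the basic step every LLM repeats: score each token in the vocabulary, turn the scores into probabilities, and draw the next token. The vocabulary, scores, and names here are made up for illustration; real models do this over tens of thousands of tokens with learned scores.

```python
import math
import random

# Hypothetical tiny vocabulary and scores (logits) for the next token.
vocab = ["cat", "dog", "pizza"]
logits = [2.0, 1.0, 0.1]

# Softmax turns the scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The "output" is just a weighted random draw from that distribution --
# no step where the system ponders what it said.
next_token = random.choices(vocab, weights=probs)[0]
print(next_token)
```

However sophisticated the scoring function is, the final act is still sampling from a distribution, which is the sense in which it's "probability at the core."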

I think our brains are more than that. Probably? There is absolutely pattern matching going on; that's how we associate things, learn stuff, or anthropomorphize objects. There are some hard-wired pattern preferences in there. But where do new thoughts come from? Is it, as some older sci-fi imagined, emergence from enough complexity, or is there something else? I'm sure current LLMs aren't comprehending what they spit out, judging from what we see of them, both good and bad results. Clearly it's not at the level of human thought, and I don't have to remotely understand the brain to realize that.
