Comment on I'm looking for an article showing that LLMs don't know how they work internally

Excrubulent@slrpnk.net 2 days ago

You’re definitely overselling how AI works and underselling how human brains work here, but there is a kernel of truth to what you’re saying.

Neural networks are a biomimicry technology. They explicitly work by mimicking how our own neurons work, and surprise surprise, they create eerily humanlike responses.
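To be concrete about what “mimicking neurons” means at the computational level, a single artificial neuron is just a weighted sum of inputs pushed through a nonlinearity. A toy Python sketch (the numbers are made up, and real models stack millions of these rather than one):

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs, loosely analogous to dendrites feeding a cell body
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # squash through a sigmoid, loosely analogous to a firing threshold
    return 1 / (1 + math.exp(-activation))

print(neuron([0.5, 0.2], [0.8, -0.3], 0.1))  # a single scalar "firing" strength
```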

The thing is, LLMs don’t do anything close to reasoning the way human brains reason. We are actually capable of understanding and creating meaning; LLMs are not.

So how are they human-like? Our brains are made up of many subsystems, each doing extremely focussed, specific tasks.

We have so many, including sound recognition, speech recognition, and language recognition. Then on the flip side we have language planning, speech planning, and motor centres dedicated to producing the speech sounds we’ve planned to make. The first three get sound into your brain and turn it into ideas; the last three take ideas and turn them into speech.

We have made neural network versions of each of these systems, and even tied them together. An LLM is analogous to our brain’s language planning centre. That’s the part that decides how to put words in sequence.

That’s why LLMs sound like us: they sequence words in a very similar way.
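If it helps to see “deciding how to put words in sequence” spelled out, here is a toy sketch: pick each next word from a probability distribution conditioned on what came before. The little table and its probabilities are invented for the example; a real LLM learns distributions over tokens from enormous amounts of text.

```python
import random

# invented toy table: (previous two words) -> probabilities for the next word
next_word_probs = {
    ("mary", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.6, "a": 0.4},
    ("on", "the"): {"hill": 0.5, "mat": 0.5},
    ("on", "a"): {"hill": 0.7, "mat": 0.3},
}

def generate(prompt, length=3):
    words = prompt.split()
    for _ in range(length):
        context = tuple(words[-2:])
        dist = next_word_probs.get(context)
        if dist is None:
            break  # nothing learned for this context, stop
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("mary sat"))  # e.g. "mary sat on the hill"
```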

However, each of these subsystems in our brains can loop back on itself to check its output. I can get my language planner to say “mary sat on the hill”, then loop that through my language recognition centre to see how my conscious brain likes it. My consciousness might notice that “the hill” is wrong and request new words until it gets “a hill”, which it decides is more fitting. It might even notice that “mary” is the wrong name and look for others, cycling through martha, marge, maths, maple, may, yes, that one. Okay, “may sat on a hill”, then send that to the speech planning centres to eventually come out of my mouth.
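The control flow being described is just generate, check, revise, repeat. A toy sketch of that loop, where both the “planner” and the “checker” are hard-coded stand-ins (the point is the loop, not the components):

```python
def language_planner(draft):
    # stand-in for the part that proposes a revised wording
    return draft.replace("the hill", "a hill").replace("mary", "may")

def recognition_check(sentence):
    # stand-in for the part that re-reads the output and flags problems
    problems = []
    if "the hill" in sentence:
        problems.append("'the hill' reads wrong here")
    if sentence.startswith("mary"):
        problems.append("'mary' is the wrong name")
    return problems

sentence = "mary sat on the hill"
while recognition_check(sentence):  # loop back until the check passes
    sentence = language_planner(sentence)
print(sentence)  # "may sat on a hill"
```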

Your brain does this so much you generally don’t notice it happening.

In the 80s there was a craze around so-called “automatic writing”, which was essentially zoning out and just writing whatever popped into your head without editing. You’d get fragments of ideas and really strange things, often very emotionally charged, and they seemed like they were coming from some mysterious place: maybe ghosts, demons, past lives, who knows? It was just our internal LLM being given free rein, but people got spooked into believing it was a real person, just like people think LLMs are people today.

In reality we have no idea how to even start constructing a consciousness. It’s such a complex task and requires so much more linking and understanding than just a probabilistic connection between words. I wouldn’t be surprised if we were more than a century away from AGI.
