What I wonder, though, is whether it isn’t possible to describe the human brain, and the nervous system as a whole, as a very large set of instructions for transforming inputs into outputs?
knightly@pawb.social 11 months ago
It could be described that way, but it wouldn’t be a very apt metaphor. We aren’t simple, stateful input-to-output algorithms; we’re a confluence of innate tendencies, learned experiences, acquired habits, and unconscious motivations, capable of modifying our own thought processes and behavior on the fly to suit whatever best fits the local context. Our brains encode a model of the world we live in, including models of ourselves and the other people we interact with, all built in real time from our observations without conscious effort.
orgrinrt@lemmy.world 11 months ago
I’m not disputing that our intelligence is more sophisticated, but rather suggesting that maybe the “intelligence” in LLMs is not necessarily all that different from ours, just based on different and limited inputs, and trained on vastly narrower data.
knightly@pawb.social 11 months ago
But it is, necessarily.
For example, when we make shit up, we’re aware that the shit we made up isn’t real. LLMs are structurally incapable of recognizing the distinction between facts they regurgitate and the ones they manufacture from whole cloth.
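To make that structural point concrete, here’s a minimal sketch of autoregressive generation, which is how LLMs produce text. The `model` object and its `logits()` method are hypothetical stand-ins, not any real library’s API. The point is that the loop only ever samples the next token by probability; nothing in it carries a flag for “recalled fact” versus “plausible invention.”

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution over tokens.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(model, prompt_tokens, length):
    # "model" is a hypothetical stand-in: logits() returns one raw score
    # per token in the vocabulary, given the sequence so far.
    tokens = list(prompt_tokens)
    for _ in range(length):
        probs = softmax(model.logits(tokens))
        # Sample the next token by probability alone. A memorized fact and
        # a confabulation are both just high-probability continuations;
        # there is no signal anywhere distinguishing one from the other.
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)
    return tokens
```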
You didn’t have to consume terabytes of text to build a model for how to form sentences like a human; you did that with a few megabytes of overheard conversation before you were even conscious enough to be aware of it.
There’s no model of intelligence so oversimplified that it gives LLMs partial credit without also giving equivalent credence to the “intelligence” of search engines.