This seems like circular reasoning: the SAT doesn’t measure intelligence because an LLM, which isn’t intelligent, can pass it.
Why isn’t the LLM intelligent?
Because it can only pass tests that don’t measure intelligence.
You still haven’t answered what intelligence is or what an AI would be. Without a definition, you just fall into the trap of “AI is whatever computers can’t do,” which has been going on for a while:
Computers can do arithmetic, but they can’t do calculus; that requires true intelligence.
OK, computers can do calculus, but they can’t beat someone at chess; that requires true intelligence.
OK, computers can beat us at chess, but they can’t form coherent sentences and ideas; that requires true intelligence.
OK, computers can form coherent sentences, but …
It’s all just moving the goalposts to try to preserve some exclusively human/organic claim to intelligence.
There is one goalpost that has stayed steady: the Turing test, which LLMs seem to have passed, at least for shorter conversations.
orgrinrt@lemmy.world 11 months ago
I’ve always wondered with stances like this: why do you assume that our “intelligence” is much different from that of LLMs? I mean, as much as we like to feel superior, is there anything that would really prove our brains don’t work in a similar manner behind the curtain? What if we just get input stimuli, and our mind is simply the process of figuring out the most likely answers, reactions, or whatever, to them?
I haven’t seen anything to that effect, but then again my field of study is vastly different. I’d certainly like to be enlightened!
knightly@pawb.social 11 months ago
LLMs are statistical models of human writing; they only offer the appearance of intelligence, in the same fashion as the Chinese Room thought experiment.
There’s nothing “intelligent” in there, just a very large set of instructions for transforming inputs into outputs.
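To make “statistical model of human writing” concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): a bigram model that counts which word follows which in a corpus, then generates text by sampling from those counts. Real LLMs are enormous neural networks rather than lookup tables, but the input-to-output character of the claim is the same.

```python
import random
from collections import defaultdict

# Toy "statistical model of writing": record, for each word,
# every word that was observed to follow it in the corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly sampling a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in counts:
            break  # no observed continuation; stop generating
        word = random.choice(counts[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"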
A sufficiently advanced model of the human brain can be “intelligent” in the same way that humans are, but this would not be “artificial” since it would necessarily employ the same “natural” processes as our brains.
Until we have a model of “intelligence” itself, anyone claiming to have “AI” is just trying to sell you something.
orgrinrt@lemmy.world 11 months ago
What I wonder, though, is whether it isn’t possible to describe the human brain, and the nervous system as a whole, as a very large set of instructions for transforming inputs into outputs?
knightly@pawb.social 11 months ago
It could be described that way, but it wouldn’t be a very apt metaphor. We aren’t simple, stateful input-to-output algorithms, but a confluence of innate tendencies, learned experiences, acquired habits, and unconscious motivations, capable of modifying our own thought processes and behavior on the fly to suit whatever best fits the local context. Our brains encode a model of the world we live in, one that includes models of ourselves and the other people we interact with, all built in real time from our observations without conscious effort.