This is so oversimplified.
Comment on “We need to stop pretending AI is intelligent”
doodledup@lemmy.world 9 months ago
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don’t say “Let’s go eat a car at McDonalds” unless we’re specifically instructed to say so.
What does consciousness even mean? If you can’t quantify it, how can you prove humans have it and LLMs don’t? Maybe consciousness is just one thought following the next, one word after the other, each neural connection determined by the previous one. Then we’re not so different from LLMs after all.
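The “one word after the other, determined by the previous” idea in this comment can be sketched as a toy bigram model. This is only an illustration of that intuition, not how any real LLM works (LLMs condition on long contexts with learned neural networks, not raw word-pair counts), and the tiny corpus here is made up:

```python
import random
from collections import defaultdict

# Toy bigram model: pick each next word with probability proportional
# to how often it followed the previous word in a tiny corpus.
corpus = "let us go eat at mcdonalds . let us go eat a burger .".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word given only the previous one."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

def generate(start, n, seed=0):
    """Generate n more words, one at a time, from a start word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(next_word(out[-1], rng))
    return out

print(" ".join(generate("let", 5)))
```

Note how a model like this will happily emit “go eat a burger” but never “go eat a car”: word pairs that never co-occurred in its data get zero probability, which is the statistical sense in which the comment means we don’t say such things unprompted.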
skisnow@lemmy.ca 9 months ago
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
Rekorse@sh.itjust.works 9 months ago
Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!
jj4211@lemmy.world 9 months ago
The probabilities of our sentence structure are a consequence of our speech; we aren’t just trying to statistically match appropriate-sounding words.
With enough use of LLMs, you will see that they are obviously not doing anything like conceptualizing the tokens they’re working with, or “reasoning,” even when they are marketed as “reasoning.”
Sticking to textual content generation by an LLM, you’ll see that what is emitted is first and foremost structurally appropriate; beyond that, it’s mostly a “bonus” if it’s narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was the exact opposite of the explanation. Both portions were structurally sound, reasonable language, but there was no logical connection between the two parts of the emitted output.