Comment on Major shifts at OpenAI spark skepticism about impending AGI timelines
mke@lemmy.world 3 months ago
Except LLMs don’t actually have real reasoning capacity. Hooking in different models that can translate more of the world to text could give the LLM a broader domain, but not an entirely new ability beyond its architecture. That might make it more convincing, but it would still fail in the same ways as it currently does.
doodledup@lemmy.world 3 months ago
You’re doing reasoning based on chemical reactions. Who says it can’t do reasoning based on text?
mke@lemmy.world 3 months ago
If you genuinely think LLMs are in any way capable of even basic reasoning, despite ample evidence to the contrary, I honestly don’t care about convincing you anymore. You’re asking for a miracle out of me (to explain consciousness itself, even) while you can just say “but there’s a chance,” even though LLMs can’t get basic facts right.
MentalEdge@sopuli.xyz 3 months ago
Is language conscious? Is it possible to “encode” human thinking into the media we produce?
Humans certainly “decode” ideas, knowledge, trains of logic and more from media, but does that mean the media contains the components of consciousness?
Is it possible to produce a machine that “decodes” not the content of media, but the process through which it was produced? Does the media contain the latter in the first place?
How can you tell the difference if it does?
The more I learn about how modern machine learning actually works, the more certain I become that even if having a machine “decode” human media is the path to AGI, LLMs ain’t it.
NounsAndWords@lemmy.world 3 months ago
Are atoms?
I don’t know if LLMs of a large enough size can achieve (or sufficiently emulate) consciousness, but I do know that we barely know anything about consciousness, let alone its limits.
mke@lemmy.world 3 months ago
Saying “we don’t know, and it’s complicated, therefore there’s a chance, maybe, depending” is not an argument.