Comment on AGI graph slop, wtf does government collapse have to do with AI?
yeahiknow3@lemmings.world 5 days ago
The only way to create AGI is by accident. I can’t stress enough that we haven’t the first clue how consciousness works. I don’t mean we are far off; I mean we are roughly at the starting point, with a variety of vague, abstract theories that have no connection to empirical reality. We can’t even agree on whether insects have emotions (they don’t — unless you think they do, in which case fight me), let alone explain subjective experience.
communist@lemmy.frozeninferno.xyz 4 days ago
Consciousness is entirely overrated; it doesn’t mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.
yeahiknow3@lemmings.world 4 days ago
Reasoning literally requires consciousness because it’s a fundamentally normative process: to reason is to be answerable to standards of correctness, and only a conscious subject can grasp a standard rather than merely conform to it. But hey, I get it. This is your first time encountering this fascinating topic and you’re a little confused. It’s okay.
postmateDumbass@lemmy.world 4 days ago
Reasoning is approximated well enough with matrix math and filter algorithms (see the sketch below).
It can fly drones, dodge wrenches.
The AGI that escapes won’t be the ideal philosopher king; it will be the sociopathic teenage rebel.
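To make that concrete, here is a minimal sketch of the kind of filter algorithm meant above: a textbook Kalman filter tracking a drone’s position from noisy readings. Everything in it (the `kalman_step` helper, the `F`, `Q`, `H`, `R` matrices, the noise values) is an illustrative assumption, not anything from the thread.

```python
import numpy as np

# Minimal Kalman filter: estimate [position, velocity] from noisy
# position measurements. "Reasoning" here is just repeated matrix
# math: predict the next state, then correct it against evidence.

def kalman_step(x, P, z, F, Q, H, R):
    # Predict: propagate the state estimate and its uncertainty forward.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction with measurement, weighted by uncertainty.
    y = z - H @ x                   # innovation (measurement residual)
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity motion model
Q = 0.01 * np.eye(2)                    # process noise (assumed)
H = np.array([[1.0, 0.0]])              # we observe position only
R = np.array([[0.5]])                   # measurement noise (assumed)

x = np.zeros(2)  # initial state estimate
P = np.eye(2)    # initial uncertainty
for z in [1.1, 2.0, 2.9, 4.2]:          # noisy position readings
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(x)  # estimated [position, velocity]
```

The point of the toy example: the predict-then-correct loop is pure linear algebra, yet it behaves like goal-directed estimation, which is all the comment is claiming.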
yeahiknow3@lemmings.world 4 days ago
This is such an odd response. Yes, we can create the illusion of thought by executing very complicated instructions. Who cares? That’s not what anyone is talking about. There’s a difference between a machine that does what it’s told and one that thinks for itself. The latter cannot be done at the moment because we don’t know how. But sure, we can have cheap parlor tricks, good enough to amuse the sub-100 IQ crowd at least.
communist@lemmy.frozeninferno.xyz 4 days ago
I fundamentally disagree: a philosophical zombie still gets its work done.
yeahiknow3@lemmings.world 4 days ago
That’s fine, but most people aren’t interested in an illusion or a magic trick. When they say AGI, they mean an actual thinking mind capable of rationality (such a mind would be sensitive and responsive to reasons).
Calculators, LLMs, and toasters can’t think, understand, or undertake rational (let alone moral) deliberation, by definition. They can only do what they’re told. We don’t need more machines that do what they’re told; we want machines that can think and understand for themselves, like human minds but more powerful. That would require subjective understanding, which cannot be programmed by definition. For more details, see Gödel’s incompleteness theorems: we can’t even axiomatize mathematics, let alone program human intuitions about the world at large. Even if it’s possible, we simply don’t know how.