Comment on AGI graph slop, wtf does government collapse have to do with AI?
communist@lemmy.frozeninferno.xyz 2 days ago
Consciousness is entirely overrated; it doesn't mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.
yeahiknow3@lemmings.world 2 days ago
Reasoning literally requires consciousness because it’s a fundamentally normative process. But hey, I get it. This is your first time encountering this fascinating topic and you’re a little confused. It’s okay.
postmateDumbass@lemmy.world 2 days ago
Reasoning is approximated well enough with matrix math and filter algorithms, as sketched below.
It can fly drones, dodge wrenches.
The AGI that escapes won't be the ideal philosopher king, it will be the sociopathic teenage rebel.
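A minimal sketch of what "filter algorithms" means here: a one-dimensional Kalman filter, the standard predict/correct loop used in drone state estimation. It is just matrix (here scalar) arithmetic; the noise constants and the drifting-target scenario are illustrative, not taken from any real drone stack.

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a position from noisy readings.
# Each predict/correct cycle is pure arithmetic -- no "understanding" involved.
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """x: state estimate, P: its variance, z: new measurement."""
    # Predict: propagate the state and its uncertainty forward.
    # (F = 1 models a roughly constant position; process noise Q
    # lets the filter track slow drift anyway.)
    x_pred = F * x
    P_pred = F * P * F + Q
    # Correct: blend prediction and measurement, weighted by the gain.
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a target drifting at +0.1 units/step through noisy measurements
# (hypothetical numbers, chosen only to show the estimate converging).
rng = np.random.default_rng(0)
x, P = 0.0, 1.0
for t in range(1, 21):
    z = 0.1 * t + rng.normal(0.0, 0.5)  # noisy sensor reading
    x, P = kalman_step(x, P, z)
print(f"estimate: {x:.2f}  truth: {0.1 * 20:.2f}")
```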
yeahiknow3@lemmings.world 2 days ago
This is such an odd response. Yes, we can create the illusion of thought by executing very complicated instructions. Who cares? That’s not what anyone is talking about. There’s a difference between a machine that does what it’s told and one that thinks for itself. The latter cannot be done at the moment, because we don’t know how. But sure, we can have cheap parlor tricks. Good enough to amuse the sub-100 IQ crowd at least.
communist@lemmy.frozeninferno.xyz 2 days ago
Being able to decide its own goals is a completely unimportant aspect of the problem.
Why do you care?
communist@lemmy.frozeninferno.xyz 2 days ago
I fundamentally disagree; a philosophical zombie still gets its work done.
yeahiknow3@lemmings.world 2 days ago
That’s fine, but most people aren’t interested in an illusion or a magic trick. When they say AGI, they mean an actual thinking mind capable of rationality (such a mind would be sensitive and responsive to reasons).
Calculators, LLMs, and toasters can’t think or understand or undertake rational (let alone moral) deliberation by definition. They can only do what they’re told. We don’t need more machines that do what they’re told. We want machines that can think and understand for themselves. Like human minds, but more powerful. That would require subjective understanding that cannot be programmed by definition. For more details, see Gödel’s incompleteness theorems. We can’t even axiomatize mathematics, let alone program human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
communist@lemmy.frozeninferno.xyz 2 days ago
If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go “ah, but it’s not really reasoning.”
What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say “but who would want yet another machine that just does what we say?”
Your point reads like complete nonsense to me. How is that economically valuable? Why are you asserting that most people care about that, and not about the part where it cures a disease when we ask it to?