Comment on YSK that "AI" in itself is highly unspecific term

Perspectivist@feddit.uk 3 days ago
Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic about AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.

Poxlox@lemmy.world 3 days ago
That’s not quite right - discussions of consciousness, mind, and reasoning are all relevant and have been part of the philosophy of artificial intelligence for hundreds of years. You’re entitled to call it AI within your own definitions, but those definitions aren’t exactly agreed upon - it depends, for example, on whether you subscribe to Alan Turing’s view or John Searle’s.

EvilEdgelord@sh.itjust.works 3 days ago
It’s not really solving problems or learning patterns now, is it? I don’t see it getting past any captchas or answering health questions accurately, so we’re definitely not there yet.
Perspectivist@feddit.uk 3 days ago
If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass captchas - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.
The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You’re blaming the hammer for not turning screws.
EvilEdgelord@sh.itjust.works 3 days ago
Fair point 😅