When we started calling literal linear regression models "AI", the term lost all value.
Comment on "New study sheds light on ChatGPT's alarming interactions with teens"
Perspectivist@feddit.uk 10 hours ago
AI is an extremely broad term which LLMs fall under. You may avoid calling it that, but it's the correct term nevertheless.
panda_abyss@lemmy.ca 9 hours ago
Perspectivist@feddit.uk 6 hours ago
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
panda_abyss@lemmy.ca 6 hours ago
I agree, but people have been heavily misusing it since like 2018
Mondez@lemdro.id 10 hours ago
I guess so, but then that is kind of lumping it in with the FPS bot behaviour from the '90s, which was also "AI". It's the AI hype that is pushing people to think of it as "intelligent but not organic" instead of "algorithms that give the facade of intelligence", which '90s kids would have understood it to be.
Perspectivist@feddit.uk 9 hours ago
The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.
Strider@lemmy.world 10 hours ago
I am aware, but still I don’t agree.
History will later tell who was 'correct', if we make it that far.
Perspectivist@feddit.uk 9 hours ago
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s a system capable of performing a cognitive task that’s normally done by humans: generating language.
Strider@lemmy.world 7 hours ago
Everything. As humanity learns more, we recognize errors, or wisdom that withstands the test of time.
We could go into the definition of intelligence, but it’s just not worth it.
We can just disagree and that’s fine.
Perspectivist@feddit.uk 6 hours ago
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
RobotZap10000@feddit.nl 10 hours ago
If I can call the code that drives the boss's weapon up my character's ass "AI", then I think I can call an LLM AI too.