I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different ones.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false - the issue is mostly one of mismatched expectations. LLMs are not generally intelligent, but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. That they often seem to anyway is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
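To make the “statistical pattern machine” point concrete, here’s a deliberately crude sketch - a toy bigram model that just counts which word follows which in a tiny corpus, then continues a prompt by sampling from those counts. Real LLMs are neural networks over subword tokens and are vastly more capable, but the training objective is the same in spirit: produce a plausible continuation, not a verified fact.

```python
import random
from collections import defaultdict

# Tiny training text; "." is treated as just another word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_prompt(prompt, n_words=8):
    """Continue a prompt by repeatedly sampling a likely next word."""
    words = prompt.split()
    for _ in range(n_words):
        followers = counts.get(words[-1])
        if not followers:
            break  # the model has never seen this word; nothing to predict
        # Sample in proportion to how often each word followed the
        # current one in the corpus - plausibility, not truth.
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(continue_prompt("the cat"))  # e.g. "the cat sat on the rug . the dog sat"
```

Run it a few times and you’ll get different, equally “plausible” continuations - which is exactly why fluency shouldn’t be mistaken for knowledge.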
Sonotsugipaa@lemmy.dbzer0.com 2 days ago
In defense of people who say LLMs are not intelligent: they probably mean that LLMs are not sapient, and I think they’re loosely correct if you take the everyday word “intelligent” to mean something different from the term “Intelligence” as it’s used in “Artificial Intelligence.”
ArgumentativeMonotheist@lemmy.world 2 days ago
‘Intelligence’ requires understanding, and understanding requires awareness. This is not seen in anything called “AI”, not today at least, and maybe not ever. Again, why not use a different word, one that actually applies to these advanced calculators? Expecting the best of humanity, it may be the added pizzazz or simple semantic confusion… but looking at the people behind it all, it’s probably so the dummies get overly excited, buy stuff, and make these bots integral parts of their lives. 🤷
Sonotsugipaa@lemmy.dbzer0.com 2 days ago
The term “Artificial Intelligence” has been around for a long time; 25 years ago, AI was an acceptable name for NPC logic in video games. Arguably that’s still the case, and personally I vastly prefer “Artificial Intelligence” to “Broad Simulation Of Common Sense Powered By Von Neumann Machines”.
Perspectivist@feddit.uk 2 days ago
“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.
You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
ramble81@lemmy.zip 2 days ago
I remember when “heuristics” were all the rage. Frankly, that’s what LLMs are: advanced heuristics. “Intelligence” is nothing more than marketing bingo.