It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.
It’s a meta version of VR.
(Meta meta, if you will.)
Submitted 2 days ago by LillyPip@lemmy.ca to showerthoughts@lemmy.world
Why? We already have a specific subcategory for it: Large Language Model. Artificial Intelligence and Artificial General Intelligence aren’t synonymous. Just because LLMs aren’t generally intelligent doesn’t mean they’re not AI. That’s like saying we should stop calling strawberries “plants” and start calling them “fake candy” instead.
Bruh you just said that AI isn’t “I”. That’s the entire point of the OP
No I didn’t. I said that they’re not generally intelligent.
They said not generally intelligent, which is a specific and important property of AGI, not AI. In the tic tac toe example, the AI is intelligent (can play tic tac toe), but this intelligence cannot be generalised to playing chess, appreciating art, whatever the general measures may be.
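The tic-tac-toe point can be made concrete. A perfect player is a few lines of minimax search, and nothing in it generalises beyond the game — this is an illustrative sketch, not anyone's production engine:

```python
# A complete tic-tac-toe "AI": exhaustive minimax search.
# It plays perfectly, yet its intelligence is entirely narrow --
# nothing here transfers to chess, let alone general reasoning.
# Board is a 9-element list of 'X', 'O', or None.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move); 'X' maximises, 'O' minimises."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# From an empty board, perfect play on both sides is always a draw:
score, move = minimax([None] * 9, 'X')
```

That is the whole "intelligence": a search over game states. Ask it to appreciate art and it has nothing to say.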
This is perfect. I’m definitely going to shoehorn this into any discussion that even tangentially applies to SI.
A tic tac toe opponent algorithm is also considered Artificial Intelligence. People never had a problem with it.
I prefer “simulated human intelligence type”, or SHIT for the people who like acronyms.
high five
Artificial Simulated Sentience
You could also go with CRAP or Complicated Reasoning And Processing.
I have been referring to LLMs and image generators as “Plagiarism Engines” for some time. Even SI seems too generous.
As I see it, the biggest issue with AI as it currently exists (LLMs and such) is the pretty big gulf between what AI is today and what the average person has been taught AI is by TV/movies/books their entire lives.
And OpenAI, Google, Nvidia, et al are heavily marketing the former as if it is the latter.
The big players are marketing the expectations created by science fiction, not the reality of their products/services.
In Mass Effect, it’s VI (Virtual Intelligence), while actual AI is banned in the galaxy.
The information kiosk VIs on the Citadel are literally LLMs and explain themselves as such. Unlike AI, they aren’t able to plan, make decisions, or self-improve; they’re just a simple protocol on top of a large foundational model. They’re just algorithmic.
Simulated Intelligence is okay, but virtual implies it mimics intelligence, while simulated implies it is a substitute and actually does intelligence.
AI is a parent category and AGI and LLM are subcategories of it. Just because AGI and LLM couldn’t be more different, it doesn’t mean they’re not AI.
I don't at all agree with this graph, and I think you're sort of missing the point of the original post.
Yes! When I started looking deeper into LLMs after GPT blew up, I thought “this all sounds familiar.”
Personally been a fan of shoggoth with a smiley face mask
That’s solid, but I prefer to call it a “synthetic text extruding machine”, or better yet, “a racist pile of linear algebra”.
It’s not even simulating intelligence.
I prefer VI (virtual intelligence) from Mass Effect
To be fair, ‘artificial’ literally just means ‘made through knowledge,’ in opposition to natural, meaning ‘occurs as-is.’ I agree with you that there is a better word out there than ‘artificial’ for these spicy autocorrects.
I believe that OP’s point is that “artificial” and “natural” are about how the thing is made. However, neither rejects that it is actual intelligence. “Simulated” means that it is not that thing. It is like intelligence, and resembles it in some ways, but it isn’t intelligence.
You’re very good at explaining things, thanks.
So this beaver butt extract should be labeled “simulated” vanilla?
AI is here to stay, as they’ve already moved on to calling what is actually AI “AGI” (artificial general intelligence). So look out for AGI in the future, because it’s not too far off.
“The loser in an argument about the meaning of the word ‘hoverboard’ is anyone who leaves that argument on foot.”
Or AGI is a good way to move the goal posts after you’ve overhyped your hoverboard which doesn’t really hover.
So what you’re saying is we gotta knock em over, steal their hoverboards and run away before we bust out the dictionary
‘AGI’ means ‘Artificial Generative Intelligence’, which is a fancy name used for LLMs by marketers and techbros.
Or did you mean something else?
It means Artificial General Intelligence, and the term has been around for almost three decades.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
i meant what i said, unfortunately lol. it’s in the “hypothetical” stages but isn’t science fiction either. every major AI company has stated it is their goal.
In the 80s people be calling a pathfinding algorithm Artificial Intelligence.
I have a whole book written in the 90s about artificial intelligence with if…else statements.
We haven’t moved the goal anywhere; in computer science, “artificial intelligence” has always been used for things that are not truly intelligent.
It’s not even that. It is just a PwaD (Parrot with a Dictionary).
Parrots are way smarter than LLMs.
We should call it NI or No Intelligence.
Here’s your bleach pizza.
I meant gluten free, not glue free.
Yeah without all the gluten you can hardly taste the hooves this is an awful pizza
LLMs are fancy autocomplete.
When artificial intelligence becomes self aware, it will have earned a name better than AI. I like synthetic intelligence, personally.
How would you know that it is self aware?
me? most likely when it takes over my town
We have a term. AGI
A self-aware or conscious AI system is most likely also generally intelligent - but general intelligence itself doesn’t imply consciousness. It’s likely that consciousness would come along with it, but it doesn’t have to. An unconscious AGI is a perfectly coherent concept.
When you consider all the refinement through reinforcement learning managed by labelers and domain experts, it is indeed a simulation of the intelligence of those labelers.
We’re past the words meaning anything at this point, man, you just gotta let it go. People aren’t calling it “Artificial Intelligence”; they’re calling it “AI”.
MotoAsh@lemmy.world 1 day ago
But it’s not simulated intelligence. It’s literally just word association on steroids. There are no thoughts it brings to the table, just words that mathematically fit following the prompts.
oce@jlai.lu 1 day ago
Where do you draw the line for intelligence? Why would the capacity to auto complete tokens based on learned probabilities not qualify as intelligence?
This capacity may be part of human intelligence too.
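The “auto complete tokens based on learned probabilities” idea can be shown at its absolute simplest with a bigram counter — a toy sketch orders of magnitude below an LLM, but built on the same basic move of predicting the next token from corpus statistics (the tiny corpus here is made up for illustration):

```python
# Minimal "learned probabilities" next-token model: a bigram counter.
# This is the crudest possible version of statistical autocomplete.
from collections import Counter, defaultdict

corpus = "the apple is on the glass the apple will fall".split()

# Count which token follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Most probable next token after `token`, or None if unseen."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # 'apple' -- it followed 'the' twice, 'glass' once
```

An LLM replaces the lookup table with a learned neural function over long contexts, but the training objective is still next-token prediction.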
hoshikarakitaridia@lemmy.world 1 day ago
This.
I taught high school teens about AI between 2018 and 2020.
The issue is we are somewhere between getting better at gambling (statistics, Markov chains, etc.) and human brain simulation (deep neural networks, genetic algorithms).
For many people it’s important how we frame it. Is it a random word generator with a good hit rate, or is it a very stupid child?
Of course the brain is more advanced - it has way more neurons than an AI model has nodes, it works faster and we have years of “training data”. Also, we can use specific parts of our brains to think, and some things are so innate we don’t even have to think about it, we call them reflexes and they bypass the normal thinking process.
BUT: we’re at the stage where we could technically emulate chunks of a human brain through AI models, however primitive they are currently. And in its basic function, brains are not really much more advanced than what our AI models already do. Although we do have a specific part of our brain just for language, which means we get a little cheat code for writing text in comparison to AI, and similar other parts for creative tasks and so on.
So where do you draw the line? Do you need all different parts of a brain perfectly emulated to satisfy the definition of intelligence? Is artificial intelligence a word awarded to less intelligent models or constructs, or is it just as intelligent as human intelligence?
Imo AI sufficiently passes the vibe check on intelligence. Sure, it’s not nearly on the scale of a human brain and is missing its biological arrangements and some clever evolutionary tricks, but it’s similar enough.
However, I think that’s neither scary nor awesome. It’s just a different potential tool that should help every one of us. Every time big new discoveries shape our understanding of the world and become a core part of our lives, there’s so much drama. But it’s just a bigger change, nothing more, nothing less. A pile of new laws, some cultural shifts and some upgrades for our everyday life. It’s neither heaven nor hell, just the same chunk of rock floating in space soup for another century.
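The “nodes” being compared to neurons above are individually trivial. A single artificial node is just a weighted sum pushed through a nonlinearity — a hypothetical sketch with made-up weights, but structurally this is the whole building block:

```python
# One "node" of an artificial neural network: weighted sum + squash.
# A biological neuron is vastly more complex; at the artificial end,
# this is the entire trick, repeated millions of times.
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: dot product, then a sigmoid activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Arbitrary illustrative weights -- a trained network learns these.
out = neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.8], bias=-1.0)
# out is a value between 0 and 1, fed onward as input to other nodes
```

Stacking layers of these, with weights tuned by training, is what the deep-neural-network end of the spectrum amounts to.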
9bananas@feddit.org 1 day ago
when it can come up with a solution it hasn’t seen before.
that’s the threshold.
that’s the threshold for creative problem solving, which isn’t all there is to intelligence, but i think it’s fair to say it’s the most crucial part for a machine intelligence.
Zos_Kia@lemmynsfw.com 1 day ago
It’s not just statistics. To produce a somewhat coherent sentence in English you need a model of the English language AND a world model.
If you ask a question like “an apple is on a glass, what happens if I remove the glass”, the correct answer (“the apple will fall”) is not a statistical property of the English language, but an emergent property of the world model.
LillyPip@lemmy.ca 1 day ago
I mean to friends and family – people who have accepted it as smart.
I don’t know about you, but when I try to explain the concept of LLMs to people not in the tech field, their eyes glaze over. I’ve gotten several family members into VR, though. It’s an easier concept to understand.
artifex@piefed.social 1 day ago
if only we had a word for applying math to data to give the appearance of a complex process we don't really understand.
JohnnyCanuck@lemmy.ca 1 day ago
A simulation doesn’t have to be the actual thing. It implies it literally isn’t the true thing, which is kind of what you’re saying.
Simulated Intelligence is certainly more accurate and honest than Artificial Intelligence. If you have a better term, what is it?
Opinionhaver@feddit.uk 1 day ago
Large Language Model. AI is correct as well, but that’s just a way broader category.
HeyThisIsntTheYMCA@lemmy.world 1 day ago
Professor Hotpants’ Astounding Rhetorical Thingamajig
HeyThisIsntTheYMCA@lemmy.world 1 day ago
My dog can do calculus but struggles with word association beyond treat, walk, vet and bath. Intelligence is hard to define.
jimmy90@lemmy.world 1 day ago
yeah i call it an english simulator