Exactly. It’s a language-learning and text-output machine. It doesn’t know anything; its only ability is to output realistic-sounding sentences based on its input, and it will happily and confidently spout misinformation as if it were fact because it can’t know what is or isn’t correct.
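To make that point concrete, here is a toy bigram text generator: trained on a tiny made-up corpus, it produces fluent-looking continuations with no notion of which one is true. This is a hypothetical sketch of next-word sampling, not how any real LLM is implemented.

```python
import random

# Tiny "training corpus": the model only ever learns word-to-word transitions.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Build a bigram table: each word maps to the words observed right after it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, n=6):
    """Emit n more words by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(n):
        out.append(random.choice(bigrams.get(out[-1], ["."])))
    return " ".join(out)

# Either continuation reads fine; the model can't tell rock from cheese.
print(generate("the"))
```

Whether it lands on "rock" or "cheese" is a coin flip: both are equally "realistic" to the model, which is the whole problem.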
Comment on this AI thing
ieightpi@lemmy.world 11 months ago
Can we stop calling this shit AI? It has no intelligence
Snowpix@lemmy.ca 11 months ago
QuaternionsRock@lemmy.world 11 months ago
it’s a learning machine
Should probably use a more careful choice of words if you want to get hung up on semantic arguments
caseyweederman@lemmy.ca 11 months ago
Sounds pretty much identical to human beings to me
marzhall@lemmy.world 11 months ago
Lol, the AI effect in practice - the minute a computer can do it, it’s no longer intelligence.
A year ago if you had told me you had a computer program that could write greentexts compellingly, I would have told you that required “true” AI. But now, eh.
In any case, LLMs clearly fall short of the "SuPeR BeInG" that the term "AI" seems to make some people think of, the kind all those Boomer stories are about; what we've got now definitely isn't that.
EatYouWell@lemmy.world 11 months ago
The AI effect can’t be a real thing since true AI hasn’t been done yet. We’re getting closer, but we’re definitely not in the positronic brain stage yet.
ignotum@lemmy.world 11 months ago
“true AI”
AI is just "artificial intelligence"; there are no strict criteria defining what is or isn't "true" AI.
Do LLMs show an ability to reason and problem-solve? Yes
Are they perfect? No
So what?
Ironically your comment sounds like yet another example of the AI effect
Siegfried@lemmy.world 11 months ago
Mass Effect's lore differentiates between virtual intelligence and artificial intelligence: the first is programmed to do shit and say things nicely, the second understands enough to be a menace to civilization… always wondered if this distinction was actually accepted outside the game.
*Terms could be mixed up cause I played in German (VI and KI)
EnderMB@lemmy.world 11 months ago
That’s why we preface it with Artificial.
BluesF@feddit.uk 11 months ago
But it isn’t artificial intelligence. It isn’t even an attempt to make artificial “intelligence”. It is artificial talking. Or artificial writing.
EnderMB@lemmy.world 11 months ago
In that case I’m not really sure what you’re expecting from AI, without getting into the philosophical debate of what intelligence is. Most modern AI systems are in essence taking large datasets and regurgitating the most relevant data back in a relevant form.
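That "regurgitating the most relevant data" idea can be sketched as a bare-bones retrieval step: score stored snippets against a query by word overlap and hand back the best match. A hypothetical toy with made-up snippets, nothing like a production system.

```python
# Toy retrieval: return the stored snippet sharing the most words with the query.
snippets = [
    "the capital of france is paris",
    "water boils at 100 degrees celsius",
    "python is a programming language",
]

def most_relevant(query):
    q = set(query.lower().split())
    # Score each snippet by how many query words it contains.
    return max(snippets, key=lambda s: len(q & set(s.split())))

print(most_relevant("what is the capital of France?"))
```

The system "answers" by surfacing whatever overlaps most; there is no understanding involved, only relevance scoring.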
might_steal_your_cat@lemm.ee 11 months ago
There are many definitions of AI (e.g. that some mathematical model is used), but machine learning (which is used in large language models) is considered part of the scientific field called AI. If someone says that something is AI, it usually means that some technique from the field of AI has been applied. Even though the term AI doesn't have much to do with the term intelligence as most people perceive it, I think the usage here is correct. (And yes, the whole scientific field should have been named differently.)
Klear@lemmy.world 11 months ago
I will continue calling it “shit AI”.
WindowsEnjoyer@sh.itjust.works 11 months ago
It’s artificial.
regbin_@lemmy.world 11 months ago
This is what AI actually is. Not the super-intelligent "AI" you see in movies; those are fiction.
The NPC you see in video games with a few branches of if-else statements? Yeah, that's AI too.
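The "few branches of if-else statements" point can be made literal. A hypothetical enemy NPC with invented thresholds, exactly the kind of rule ladder that has been called game "AI" for decades:

```python
def npc_action(health, distance_to_player):
    """Classic game 'AI': a handful of hard-coded rules, nothing learned."""
    if health < 20:
        return "flee"        # self-preservation rule
    elif distance_to_player < 5:
        return "attack"      # close enough to strike
    elif distance_to_player < 20:
        return "chase"       # player spotted, close the gap
    else:
        return "patrol"      # default idle behavior

print(npc_action(health=80, distance_to_player=3))  # prints "attack"
```

Four branches, zero intelligence, and yet it ships under the label "AI" in countless games.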
Willer@lemmy.world 11 months ago
No, companies are only just now realizing how powerful it is, and are throttling the shit out of its capabilities to sell it to you later :)
Marzepansion@programming.dev 11 months ago
"We purposefully make it terrible because we know it's actually better" is close to conspiracy-theory-level thinking.
The internal models they are working on might be better, but they are definitely not making their actual product that's publicly available right now shittier. It's exactly the thing they released, and these are its current limitations.
This has always been the type of output it would give you; we even coined a term for it really early on: hallucinations. The only thing that has changed is that the novelty has worn off, so you are now paying a bit more attention to it. It's not a shittier product; you're just not enthralled by it anymore.
UndercoverUlrikHD@programming.dev 11 months ago
Researchers have shown that the performance of the public GPT models has decreased, likely due to OpenAI trying to optimise energy efficiency and adding filters to what they can say.
I don't really care about why, so I won't speculate, but let's not pretend the publicly available models aren't being purposefully restricted either.