Comment on New study sheds light on ChatGPT’s alarming interactions with teens
TheReanuKeeves@lemmy.world 3 weeks ago
Is it that different than kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on youtube teaching you how to make bubble hash and BHO like 15 years ago
morto@piefed.social 3 weeks ago
Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on google or youtube is one thing, but being told to do something by an entity you have emotional links with is much worse.
TheReanuKeeves@lemmy.world 2 weeks ago
I think we need a built-in safeguard for people who actually develop an emotional relationship with AI, because that’s not a healthy sign
postmateDumbass@lemmy.world 2 weeks ago
Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.
TheMonk@lemmings.world 2 weeks ago
I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online: pretend the fanfic you’re reading is canon, stuff like that. When those bots are mimicking emotional responses, it’s very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually “suspend reality.”
DrFistington@lemmy.world 3 weeks ago
Yeah… But in order to make bubble hash you need a shitload of weed trimmings. It’s not like you’re just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created… unless you already had the drugs in the first place.
Also, Google search results and YouTube videos aren’t personalized for every user, and they don’t try to pretend that they are a person having a conversation with you
TheReanuKeeves@lemmy.world 2 weeks ago
Those are examples; obviously you would need to obtain the alcohol or drugs if you asked ChatGPT, too. That isn’t the point. The point is, if someone wants to find that information, it’s been available for decades. And YouTube and Google results are personalized, look it up.
Strider@lemmy.world 3 weeks ago
I get your point, but yes, being actively told something by a seemingly sentient consciousness (which, fatally, it appears to be) is a different thing.
Tracaine@lemmy.world 3 weeks ago
No, you don’t know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god gpt.
Strider@lemmy.world 3 weeks ago
I was just starting to read, getting angry, but then… I… I have seen it. I will follow. Bless you and gpt!!
Perspectivist@feddit.uk 3 weeks ago
AI is an extremely broad term which LLMs fall under. You may avoid calling it that, but it’s the correct term nevertheless.
panda_abyss@lemmy.ca 3 weeks ago
When we started calling literal linear regression models “AI”, it lost all value.
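For a sense of how simple such a model is, here’s a minimal, hypothetical sketch in Python (numpy assumed; the data is made up):

```python
# A "literal linear regression model": fit y = w*x + b in closed form.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # toy inputs (made up)
y = np.array([2.1, 3.9, 6.2, 8.1])  # toy targets, roughly y = 2x

A = np.column_stack([x, np.ones_like(x)])      # add a bias column
w, b = np.linalg.lstsq(A, y, rcond=None)[0]    # ordinary least squares

print(f"prediction for x=5: {w * 5 + b:.2f}")  # a straight line, nothing more
```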
Perspectivist@feddit.uk 3 weeks ago
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
RobotZap10000@feddit.nl 3 weeks ago
If I can call the code that drives the boss’s weapon up my character’s ass “AI”, then I think I can call an LLM AI too.
Mondez@lemdro.id 3 weeks ago
I guess so, but then that is kind of lumping it in with the FPS bot behaviour from the 90s, which was also “AI”. It’s the AI hype that is pushing people to think of it as “intelligent but not organic” instead of “algorithms that give the facade of intelligence”, which is how 90s kids would have understood it.
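For the record, that 90s-style game “AI” was often just a hand-rolled state machine. A hypothetical sketch in Python (state names and thresholds made up):

```python
# Hypothetical 90s-style FPS bot "AI": a hard-coded state machine.
# It only *looks* intelligent; every decision is a fixed rule.
def bot_think(state: str, sees_player: bool, health: int) -> str:
    if health < 25:
        return "flee"        # self-preservation reads as "smart"
    if sees_player:
        return "attack"
    if state == "attack":
        return "search"      # lost sight of the player, go look around
    return "patrol"

print(bot_think("patrol", sees_player=True, health=80))  # -> attack
```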
Perspectivist@feddit.uk 3 weeks ago
The chess opponent on Atari is AI too. I think the issue is that when most people hear “intelligence,” they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.
Strider@lemmy.world 3 weeks ago
I am aware, but still I don’t agree.
History will tell who was ‘correct’, if we make it that far.
Perspectivist@feddit.uk 3 weeks ago
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s a system capable of performing a cognitive task that’s normally done by humans: generating language.
baines@lemmy.cafe 2 weeks ago
only because marketing has shit all over the term
yetAnotherUser@discuss.tchncs.de 2 weeks ago
AI was never more than algorithms which could be argued to have some semblance of intelligence somewhere. Its sole purpose was marketing by scientists to get funding.
Since the 60s, everything related to neural networks has been classified as AI. LLMs are neural networks, therefore they fall under the same label.
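To illustrate how far back that lineage goes, here’s a minimal, hypothetical sketch of a 1958-style perceptron in Python (numpy assumed; toy data made up):

```python
# A perceptron: the single artificial neuron that kicked off the
# "neural networks = AI" labelling back in the late 50s / 60s.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # toy inputs
y = np.array([0, 0, 0, 1])                      # targets: logical AND

w = np.zeros(2)  # weights
b = 0.0          # bias

for _ in range(10):                  # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (target - pred) * xi    # classic perceptron update rule
        b += (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```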