Just to be clear, companies know that LLMs are categorically bad at giving life advice and emotional guidance. They also know that personal decision making is the most common use of the software. They could easily have guardrails in place to prevent it from doing that.
They will never do that.
This is by design. They want people to develop pseudo-emotional bonds with the software, and to trust its judgment in matters of life guidance. In the next year or so, some LLM projects will become profitable for the first time as advertisers flock to the platforms. Injecting ads into conversations with a trusted confidant is the goal. Influencing human behaviour is the goal.
By 2028, we will be reading about “ChatGPT told teen to drink Pepsi until she went into a sugar coma.”
Lost_My_Mind@lemmy.world 3 weeks ago
Look man…I hate AI too…but you can’t just use it as a scapegoat to cover for humans being humans.
Should the AI be telling him to do more and more drugs until he died? Well, no, but also…maybe don’t do dangerous drugs at all.
Like if chatgpt says to shoot yourself in the face, and you do, is it chatgpt’s fault you killed yourself? Or are you at fault for killing yourself?
This world is getting dumber and dumber.
ch00f@lemmy.world 3 weeks ago
Basically the entire US economy, every employer, many schools, and half of the commercials on TV are telling us to use and trust AI.
Kid was already using the bot for advice on homework and relationships (two things that people are fucking encouraged to do depending on who you ask). The bot shouldn’t give lethal advice. And if it’s even capable of doing that, we all need to take a huuuuuuge step back.
kalkulat@lemmy.world 2 weeks ago
fyrilsol@kbin.melroy.org 2 weeks ago
19 is not a 'kid'. Sorry for having to be that guy, but he was already an adult, a young adult at that.
lmmarsano@lemmynsfw.com 2 weeks ago
No, fuck not holding dumbfucks responsible for being dumb as fuck.
tal@lemmy.today 3 weeks ago
Ehhh…I dunno.
Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.
searches
www.belfasttelegraph.co.uk/news/…/28397087.html
archive.ph/pJ8Dw
archive.ph/i9syP
And before that, I remember video games.
It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.
en.wikipedia.org/wiki/Moral_panic
I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.
TheBat@lemmy.world 3 weeks ago
It wasn’t the internet/web that harmed those people. It was people on the internet. And people were telling each other to be cautious when using the internet.
Unlike modern LLMs which are advertised as intelligent enough to be used in professional settings. And unlike perpetrators in other cases, no one is punishing OpenAI, or Google or whatever the fuck AI company is responsible.
So yeah, this is worse than before.
eli@lemmy.world 3 weeks ago
Great post and I agree 100%!
Doesn’t even have to be a new thing either. Video games are still used as a scapegoat. Same as with music, and TV shows, and movies.
The “internet” is still killing teenagers because of social media bullying.
I wish our lawmakers were less senile so we could write and pass more appropriate laws for this stuff…but there’s not much we can do.
Passerby6497@lemmy.world 2 weeks ago
Well shit, maybe we shouldn’t hold humans responsible for the actions that they convince another human to take. After all, the victim is just a human being a human, right?
markovs_gun@lemmy.world 2 weeks ago
I mean it’s not illegal for someone to tell someone else to take more drugs. If two guys are hanging out and one says “hey, I think I should take more drugs” and the other says “hell yeah brother, do it,” they aren’t responsible if the first guy ODs.
zqps@sh.itjust.works 2 weeks ago
The point isn’t to absolve people of their bad decisions. But companies whose tools dispense dangerous advice in a friendly, authoritative tone shouldn’t escape accountability either.
Consider that people in all possible situations and mental health conditions have access to these tools.
zarkanian@sh.itjust.works 2 weeks ago
A 19-year-old doesn’t have a fully-developed brain yet.
Assassassin@lemmy.dbzer0.com 2 weeks ago
I don’t think that this is necessarily an issue of people being stupid though. People are being encouraged to use AI as a replacement for search engines, and to plug any question they have into it and trust the answers that they are given. Blindly following that may be stupid in many cases, but there are also plenty of cases where a person is developmentally disabled, or young and ignorant, or in a mental state that makes them bad at processing information correctly. We should be putting safeguards in place to protect vulnerable people from obvious dangers, even if it saves some idiots by accident.