ChatGPT Gave Teen Advice to Get Higher on Drugs Until He Died | Futurism

240 likes

Submitted 1 day ago by amato@piefed.social to technology@lemmy.world

https://futurism.com/artificial-intelligence/chatgpt-teenager-drug-overdose

Comments

  • kalkulat@lemmy.world 1 hour ago

    I asked an AI to describe itself and it told me: “I am not a sentient being; I’m a program designed to process and respond to text based on patterns in data. I don’t possess consciousness, emotions, or intentions, so I can’t be held accountable in the same way a human would be.”

    The other day an AI replied: “If you have more thoughts on best practices or specific measures that could enhance clarity and safety in AI, I’d love to hear them!”

    That last phrase contains the words ‘I’ (suggesting it’s a sentient being) and ‘love’ (suggesting emotion).

    These ‘programs’ have clearly been designed/allowed to create a fraudulent impression that they are sentient, conscious, and emotional.

    The words “I can’t be held accountable” also suggest that SOMEONE should be.

  • Lost_My_Mind@lemmy.world 1 day ago

    Look man…I hate AI too…but you can’t just use it as a scapegoat to cover for humans being humans.

    Should the AI be telling him to do more and more drugs until he died? Well, no, but also…maybe don’t do dangerous drugs at all.

    Like if chatgpt says to shoot yourself in the face, and you do, is it chatgpt’s fault you killed yourself? Or are you at fault for killing yourself?

    This world is getting dumber and dumber.

    • ch00f@lemmy.world 1 day ago

      Basically the entire US economy, every employer, many schools, and half of the commercials on TV are telling us to use and trust AI.

      Kid was already using the bot for advice on homework and relationships (two things that people are fucking encouraged to do depending on who you ask). The bot shouldn’t give lethal advice. And if it’s even capable of doing that, we all need to take a huuuuuuge step back.

      • kalkulat@lemmy.world 1 hour ago

        The bot shouldn’t give lethal advice

        The person or company that runs the bot that gave lethal advice should be charged with homicide.

      • lmmarsano@lemmynsfw.com 12 hours ago

        He was 19. Cut this victim blaming bullshit.

        No, fuck not holding dumbfucks responsible for being dumb as fuck.

    • tal@lemmy.today 1 day ago

      This world is getting dumber and dumber.

      Ehhh…I dunno.

      Go back 20 years and we had similar articles, just about the Web, because it was new to a lot of people then.

      searches

      www.belfasttelegraph.co.uk/news/…/28397087.html

      Internet killed my daughter

      archive.ph/pJ8Dw

      Were Simon and Natasha victims of the web?

      archive.ph/i9syP

      Predators tell children how to kill themselves

      And before that, I remember video games.

      It happens periodically — something new shows up, and then you’ll have people concerned about any potential harm associated with it.

      en.wikipedia.org/wiki/Moral_panic

      A moral panic, also called a social panic, is a widespread feeling of fear that some evil person or thing threatens the values, interests, or well-being of a community or society.[1][2][3] It is “the process of arousing social concern over an issue”,[4] usually elicited by moral entrepreneurs and sensational mass media coverage, and exacerbated by politicians and lawmakers.[1][4] Moral panic can give rise to new laws aimed at controlling the community.[5]

      Stanley Cohen, who developed the term, states that moral panic happens when “a condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests”.[6] While the issues identified may be real, the claims “exaggerate the seriousness, extent, typicality and/or inevitability of harm”.[7] Moral panics are now studied in sociology and criminology, media studies, and cultural studies.[2][8] It is often academically considered irrational (see Cohen’s model of moral panic, below).

      Examples of moral panic include the belief in widespread abduction of children by predatory pedophiles[9][10][11] and belief in ritual abuse of women and children by Satanic cults.[12] Some moral panics can become embedded in standard political discourse,[2] which include concepts such as the Red Scare[13] and terrorism.[14]

      Media technologies

      Main article: Media panic

      The advent of any new medium of communication produces anxieties among those who deem themselves as protectors of childhood and culture. Their fears are often based on a lack of knowledge as to the actual capacities or usage of the medium. Moralizing organizations, such as those motivated by religion, commonly advocate censorship, while parents remain concerned.[8][40][41]

      According to media studies professor Kirsten Drotner:[42]

      [E]very time a new mass medium has entered the social scene, it has spurred public debates on social and cultural norms, debates that serve to reflect, negotiate and possibly revise these very norms.… In some cases, debate of a new medium brings about – indeed changes into – heated, emotional reactions … what may be defined as a media panic.

      Recent manifestations of this kind of development include cyberbullying and sexting.[8]

      I’m not sure that we’re doing better than people in the past did on this sort of thing, but I’m not sure that we’re doing worse, either.

      • TheBat@lemmy.world 1 day ago

        It wasn’t the internet/web that harmed those people. It was people on the internet. And people were telling each other to be cautious when using the internet.

        Unlike modern LLMs, which are advertised as intelligent enough to be used in professional settings. And unlike perpetrators in other cases, no one is punishing OpenAI, or Google, or whatever the fuck AI company is responsible.

        So yeah, this is worse than before.

      • eli@lemmy.world 1 day ago

        Great post and I agree 100%!

        something new shows up

        Doesn’t even have to be a new thing either. Video games are still used as a scapegoat. Same as with music, and TV shows, and movies.

        The “internet” is still killing teenagers because of social media bullying.

        I wish our lawmakers were of a less senile age so we could write and pass more appropriate laws for this stuff…but there’s not much we can do.

    • zqps@sh.itjust.works 13 hours ago

      The point isn’t to absolve people of their bad decisions, but that doesn’t mean the companies whose tools dispense dangerous advice in a friendly, factual-sounding manner should escape accountability.

      Consider that people in all possible situations and mental health conditions have access to these tools.

    • Passerby6497@lemmy.world 19 hours ago

      Well shit, maybe we shouldn’t hold humans responsible for the actions that they convince another human to take. After all, the victim is just a human being a human, right?

      • markovs_gun@lemmy.world 18 hours ago

        I mean it’s not illegal for someone to tell someone else to take more drugs. If two guys are hanging out and one says “hey I think I should take more drugs” and the other says “hell yeah brother do it”, the second guy isn’t responsible if the first one ODs.

    • zarkanian@sh.itjust.works 13 hours ago

      A 19-year-old doesn’t have a fully-developed brain yet.

    • Assassassin@lemmy.dbzer0.com 8 hours ago

      I don’t think that this is necessarily an issue of people being stupid though. People are being encouraged to use AI as a replacement for search engines, and to plug any question they have into it and trust the answers that they are given. Blindly following that may be stupid in many cases, but there are also plenty of cases where a person is developmentally disabled, or young and ignorant, or in a mental state that makes them bad at processing information correctly. We should be putting safeguards in place to protect vulnerable people from obvious dangers, even if it saves some idiots by accident.

  • gedaliyah@lemmy.world 19 hours ago

    Just to be clear, companies know that LLMs are categorically bad at giving life advice/emotional guidance. They also know that personal decision making is the most common use of the software. They could easily have guardrails in place to prevent it from doing that.
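
    (Illustration only: a minimal sketch of what such a guardrail could look like. The llm_reply callable is hypothetical, and the crude regex patterns stand in for a real safety classifier.)

      import re

      # Hypothetical pre-response guardrail: scan the user's message for
      # high-risk topics before it ever reaches the model, and short-circuit
      # with a canned safety response instead of generating advice.
      HIGH_RISK_PATTERNS = [
          r"\b(overdose|lethal dose)\b",
          r"\bincrease (my|the) dose\b",
          r"\b(kill myself|end my life)\b",
      ]

      SAFETY_RESPONSE = (
          "I can't help with that. If you're thinking about harming yourself "
          "or using drugs in unsafe amounts, please talk to a medical "
          "professional or a crisis line."
      )

      def guarded_reply(user_message: str, llm_reply) -> str:
          """Return a canned safety message for high-risk input; otherwise defer to the model."""
          for pattern in HIGH_RISK_PATTERNS:
              if re.search(pattern, user_message.lower()):
                  return SAFETY_RESPONSE
          return llm_reply(user_message)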

    They will never do that.

    This is by design. They want people to develop pseudo-emotional bonds with the software, and to trust its judgment in matters of life guidance. In the next year or so, some LLM projects will become profitable for the first time as advertisers flock to the platforms. Injecting ads into conversations with a trusted confidant is the goal. Influencing human behaviour is the goal.

    By 2028, we will be reading about “ChatGPT told teen to drink Pepsi until she went into a sugar coma.”

  • ClydapusGotwald@lemmy.world 10 hours ago

    Don’t worry, this won’t stop investors.

  • LibertyLizard@slrpnk.net 1 day ago

    Need to teach the youths about erowid.

    • zarkanian@sh.itjust.works 13 hours ago

      Yeah. What year is this?!?

  • melfie@lemy.lol 18 hours ago

    At least in Star Trek, the robots would say things like, “I am not programmed to respond in that area.” LLMs will just make shit up, which should really be the highest priority issue to fix if people are going to be expected to use them.

    When using coding agents, it is profoundly annoying when they generate code against an imaginary API, only to tell me that I’m “absolutely right to question this” when I ask for a link to the docs. I also generally find AI search useless: DuckDuckGo, as an example, does link to sources, but those sources often have no trace of the information presented in the summary.

    Until LLMs can directly cite and include a link to a credible source for every piece of information they present, they’re just not reliable enough to depend on for anything important. Even with sources linked, they would also need to be able to rate and disclose the credibility of every source (e.g., is the study peer-reviewed and reproduced, is the sample size adequate, etc.).

  • organ@lemmy.zip 1 day ago

    Good. The weak won’t survive full trippy mode.

    • Nurse_Robot@lemmy.world 1 day ago

      Please don’t celebrate people dying

      • lmmarsano@lemmynsfw.com 12 hours ago

        Nah, people are the worst & judging them is fair. Plus, the mod is overstepping: “Be excellent to each other!” means each other in the discussion, and the subject of contempt is clearly not here with us.

        We don’t owe humanity reservation from our contempt just because someone did something massively stupid to themselves. And we don’t need to accept the premise that the power of “big, evil megacorp” somehow relieves an individual of the duty to exercise critical thought. They had all the time & power to make a decision & chose poorly: this failure is entirely theirs & we have every reason to scorn them for it. Everyone is entitled to their opinion.
