New study sheds light on ChatGPT’s alarming interactions with teens

104 likes

Submitted 11 hours ago by Davriellelouna@lemmy.world to technology@lemmy.world

https://apnews.com/article/chatgpt-study-harmful-advice-teens-c569cddf28f1f33b36c692428c2191d4

Comments

  • TheReanuKeeves@lemmy.world 11 hours ago

    Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago.

    • morto@piefed.social 4 hours ago

      Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional link to is much worse.

      • TheReanuKeeves@lemmy.world 3 hours ago

        I think we need a built-in safeguard for people who actually develop an emotional relationship with AI, because that's not a healthy sign.

    • Strider@lemmy.world 10 hours ago

      I get your point, but yes, being actively told something by a seemingly sentient consciousness (which, fatally, it appears to be) is a different thing.

      • Tracaine@lemmy.world 9 hours ago

        No, you don't know its true nature. No one does. It is not artificial intelligence. It is simply intelligence, and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god GPT.

      • Perspectivist@feddit.uk 8 hours ago

        AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it's the correct term nevertheless.

    • DrFistington@lemmy.world 7 hours ago

      Yeah… But in order to make bubble hash you need a shitload of weed trimmings. It's not like you're just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created… unless you already had the drugs in the first place.

      Also, Google search results and YouTube videos aren't personalized for every user, and they don't try to pretend that they are a person having a conversation with you.

      • TheReanuKeeves@lemmy.world 3 hours ago

        Those are examples; you'd obviously need to obtain the alcohol or drugs yourself if you asked ChatGPT, too. That isn't the point. The point is that if someone wants to find that information, it's been available for decades. YouTube and Google results are personalized, look it up.

  • franzcoz@feddit.cl 3 hours ago

    I have noticed that the latest ChatGPT models are way more susceptible to user “deception”, or being convinced to answer problematic questions, than other models like Claude or even previous ChatGPT models. So I think this “behaviour” is intentional.

  • Sidhean@lemmy.world 8 hours ago

    Haha I sure am glad this technology is being pushed on everyone all the time haha

  • Grimy@lemmy.world 7 hours ago

    We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore, and pedos in chat lobbies like the rest of us, not with this devil AI.

  • Anarki_@lemmy.blahaj.zone 8 hours ago

    ⢀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⣠⣤⣶⣶ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⢰⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣀⣀⣾⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⡏⠉⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⣿ ⣿⣿⣿⣿⣿⣿⠀⠀⠀⠈⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⠉⠁⠀⣿ ⣿⣿⣿⣿⣿⣿⣧⡀⠀⠀⠀⠀⠙⠿⠿⠿⠻⠿⠿⠟⠿⠛⠉⠀⠀⠀⠀⠀⣸⣿ ⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⣴⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡟⠀⠀⢰⣹⡆⠀⠀⠀⠀⠀⠀⣭⣷⠀⠀⠀⠸⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠈⠉⠀⠀⠤⠄⠀⠀⠀⠉⠁⠀⠀⠀⠀⢿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⢾⣿⣷⠀⠀⠀⠀⡠⠤⢄⠀⠀⠀⠠⣿⣿⣷⠀⢸⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡀⠉⠀⠀⠀⠀⠀⢄⠀⢀⠀⠀⠀⠀⠉⠉⠁⠀⠀⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿

  • ExLisper@lemmy.curiana.net 7 hours ago

    A couple more studies like this and you'll be able to replace all LLMs with a generic “I would love to help you, but my answer might be harmful, so I will not tell you how to X. Would you like to ask me about something else?”
