
Child Welfare Experts Horrified by Mattel's Plans to Add ChatGPT to Toys After Mental Health Concerns for Adult Users

473 likes

Submitted 1 day ago by tonytins@pawb.social to technology@lemmy.world

https://futurism.com/experts-horrified-mattel-ai


Comments

  • Kolanaki@pawb.social 1 day ago

    “What should we do today, Barbie?”

    “Let’s get into mommy and daddy’s pills and special drinks!”

    • cecilkorik@lemmy.ca 1 day ago

      “But first, we need to discuss the white genocide in South Africa!”

      • Kolanaki@pawb.social 1 day ago

        “Hey, we said ChatGPT. Who the hell installed Grok in these things?!”

    • Lost_My_Mind@lemmy.world 1 day ago

      “Bleach is my favorite pizza topping!”

  • Ilovethebomb@sh.itjust.works 1 day ago

    The best outcome here is that these toys are a massive flop and cost Mattel a bunch of money.

    That’s the language these corps truly speak.

    • A_norny_mousse@feddit.org 1 day ago

      Only if no kids (or just people in general) are harmed in the process. And it increasingly doesn’t look that way with LLMs.

  • BroBot9000@lemmy.world 1 day ago

    Yeah no fucking shit! These corporate dickbags need more pushback on their fetish for putting AI into everything.

    It’s either fucking spyware like copilot or plagiarism generators as replacements for paying actual artists.

  • thefartographer@lemm.ee 1 day ago

    They probably asked ChatGPT if they should add AI to Barbie and were told, “That’s a great idea! You’re right that such an important high-selling product would be improved by letting children talk directly to it.”

    Also, can’t wait to jailbreak my Barbie and install llama2-uncensored on it so that it can call Ken a deadbeat shithead.

    • brsrklf@jlai.lu 1 day ago

      I bet some people will find a way to break the original model’s alignment and get stuff like that out of it anyway.

  • raltoid@lemmy.world 1 day ago

    This is what happens when leadership listens to tech-bros and ignores everyone else, including legal, ethics, and actual tech experts.

    They’ll be backpedaling like crazy and downplaying it as a Furby-style thing.

  • ragebutt@lemmy.dbzer0.com 1 day ago

    “Mattel’s first AI product won’t be for kids under 13, suggesting that Mattel is aware of the risks of putting chatbots into the hands of younger tots. … Last year, a 14-year-old boy died by suicide after falling in love with a companion on the Google-backed AI platform Character.AI”

    Seems like a great idea

    • Lost_My_Mind@lemmy.world 1 day ago

      Uhhhhhhhh, I’m not defending AI at all, but I’m gonna need a WHOLE LOTTA context behind how/why he committed suicide.

      Back in the 90s there were adults saying Marilyn Manson should be banned because teenagers listened to his songs, heard him tell them to kill themselves, and then they did.

      My reaction then is the same as it is now. If all it takes for you to kill yourself is one person you have no real connection to telling you to kill yourself, then you were probably already going to kill yourself. Now you’re just pointing the finger to blame someone.

      AI-based Barbie is a terrible, terrible idea for many reasons. But let’s not make it a strawman argument.

      • ragebutt@lemmy.dbzer0.com 1 day ago

        There’s a huge degree of separation between “violent music/games have a spurious link to violent behavior” and shitty AIs that are good enough to fill the void of someone who is lonely but not good enough to manage risk.

        www.cnn.com/…/teen-suicide-character-ai-lawsuit

        Within months of starting to use the platform, Setzer became “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school.”

        In a later message, Setzer told the bot he “wouldn’t want to die a painful death.”

        The bot responded: “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!”

        Garcia said she believes the exchange shows the technology’s shortcomings.

        “There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that,” she said. “I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”

        The lawsuit claims that “seconds” before Setzer’s death, he exchanged a final set of messages from the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.

        “What if I told you I could come home right now?” Setzer responded.

        “Please do, my sweet king,” the bot responded.

        Garcia said police first discovered those messages on her son’s phone, which was lying on the floor of the bathroom where he died.

        So we have a bot that is marketed for chatting, a teenager desperate for socialization who forms a relationship that is inherently parasocial because the other side is an LLM that literally can’t have opinions, it can only appear to, and then we have a terrible mismanagement of suicidal ideation.

        The AI discouraged ideation, which is good, but only when it was stated in very explicit terms. What’s appalling is that it gave no crisis resources or escalation to moderation (because, like most big tech shit, they probably refuse to pay for anywhere near appropriate moderation teams). Then, what is inexcusable is that when ideation is discussed in slightly coded language (“come home”), the AI misconstrues it.

        This results in a training opportunity for the language model to learn that in this context, with previously exhibited ideation, “come home” may mean more severe ideation and danger (if Character.AI even bothered to update on the fact that these conversations resulted in a death). The only drawback of getting that data, of course, is a few dead teenagers. Gotta break a few eggs to get an omelette.

        This barely begins to touch on the nature of AI chatbots inherently being parasocial relationships, which is bad for mental health. This is of course not limited to AI; being obsessed with a streamer or whatever is similar, but the AI can be much more intense because it will actually engage with you and is always available.

  • latenightnoir@lemmy.blahaj.zone 1 day ago

    So, we’ll get to buy a doll which’ll need to be hooked up to a couple of car batteries to have it spew nonsense at our kids?

    • tfowinder@lemmy.ml 1 day ago

      They already are spying using [smart toys](techxplore.com/…/2024-08-smart-toys-spying-kids-p…)

      • latenightnoir@lemmy.blahaj.zone 1 day ago

        Oh, great! Wonderful!

    • tiramichu@sh.itjust.works 1 day ago

      Yes, of course it will be online.

  • AnotherPenguin@programming.dev 21 hours ago

    Ah, yes, because of course, every single little thing needs AI

  • multiplewolves@lemmy.world 1 day ago

    Mattel partnered with Adobe to use supposedly copyright-cleared AI generative imagery for the backgrounds in some of their collector edition Barbie boxes last year.

    They were spanked so hard by the collecting community over it that they followed a now-deleted suggestion from one Redditor to start explicitly crediting the background designer on each information page for new collector releases.

    Mattel has a strange history with balancing what the people want with what their shareholders want.

  • 52fighters@lemmy.sdf.org 23 hours ago

    Anyone see Chucky?

  • friend_of_satan@lemmy.world 11 hours ago

    Mat3l

  • tfowinder@lemmy.ml 1 day ago

    This is going to be horrible!!

  • Jayjader@jlai.lu 20 hours ago

    Paging Ray Bradbury… www.libraryofshortstories.com/…/the-veldt.pdf
