lotide

Father sues Google, claiming Gemini chatbot drove son into fatal delusion

766 likes

Submitted 2 weeks ago by throws_lemy@reddthat.com to technology@lemmy.world

https://techcrunch.com/2026/03/04/father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/


Comments

  • Cyv_@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

    “Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

    Well, that’s pretty fucked up…

    • XLE@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      It’s hard reading this while remembering that your electricity bills are increasing so that Google’s data centers can provide these messages to people.

      • VieuxQueb@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        And you won’t be able to afford a computer or power it anyways.

    • wonderingwanderer@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

      That’s fucking crazy. Did he ask it to be GM in a roleplaying choose-your-own-adventure game that got out of hand, where they both gradually forgot it was a game and the lines between fantasy and reality blurred by the day? Or did it just come up with this stuff out of nowhere?

      • SalamenceFury@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        In every other case of AI bots doing this, the bot will always affirm whatever the person says. So if they say something a little weird, the AI will confirm it and feed it further. This happens every time. The bots are pretty much designed to keep talking to the person, so they’re essentially sycophantic by design.

      • MoffKalast@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        That would be my bet; LLMs really gravitate towards playing along and continuing whatever’s already written. And Gemini especially has a 1M-token context, so it could be going back over a book’s worth of text and reinforcing it up the wazoo.

        That said, there is something really unhinged about Google’s Gemma series even in short conversations and I see the big version is no better. Something’s not quite right with their RLHF dataset.

      • NotASharkInAManSuit@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I would read that book.

    • lightnsfw@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

      Not that I want to defend AI slop, but what prompted these responses from Gemini?

      • Martineski@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Doesn’t matter what prompted them.

  • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    People don’t often realize how subtle changes in language can change our thought process. It’s just how human brains work sometimes.

    The old bit about smoking and praying is a great example. If you ask a priest if it’s alright to smoke when you pray, they’re likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it’s alright to pray a little while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…

    Now, make a machine that’s designed to be agreeable, relatable, and make persuasive arguments but that can’t separate fact from fiction, can’t reason, has no way of intuiting its user’s mental state beyond checking for certain language parameters, and can’t know if the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible…

    You get one answer that leads you a set direction, then another, then another… It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn’t a steady downhill slope, it rolls up and down from reality to delusion a few times before going down sharply.

    Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected and to what degree.

    • HeyThisIsntTheYMCA@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      People don’t often realize how subtle changes in language can change our thought process.

      Just changing a single word in your daily usage can change your entire outlook from negative to positive. It’s strange, but unless you’ve experienced for yourself how such minute changes can have such large effects, it’s hard to believe.

      • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        And this is hard for me, actually. Because of my work background and the jargon used, I’m unconsciously negative about things a lot of the time. It’s a tough habit to break.

    • CeeBee_Eh@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Are we surprised some people’s thought processes and decision making might turn extreme when exposed to this?

      Yes, actually. I’m not doubting the power of language, but I cannot ever see something anyone ever says alter my sense of reality or right from wrong.

      I had a “friend” say to me recently “why do you always go against the grain?” My reply was “I will go against the grain for the rest of my life if it means doing or saying what’s right”.

      I guess my point is that I have a very hard time relating to this.

      • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I guess my point is that I have a very hard time relating to this.

        That’s fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

        I’d like to argue that more of us are susceptible to this sort of thing than we suspect, but that’s not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it’d be best to regulate them in a hurry.

    • Zink@programming.dev ⁨2⁩ ⁨weeks⁩ ago

      Then make the machine try to keep people talking for as long as possible…

      That’s probably a huge part of it. How many billions of dollars have been spent engineering content on a screen to get its tendrils into people’s minds and attention and not let go?

      EnGaGeMent!!!

      • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        This is also part of my broader gripe with social media, cable news, and the current media landscape in general. They use so many sneaky little psychological hooks to keep you plugged in that I honestly believe it’s screwing with our heads to the point of it being a public health crisis.

        People are already frazzled and beat down by the onslaught of dopamine feedback loops and outrage bait. Then you go and get them hooked on a chatbot that feeds into every little neurosis they’ve developed and sinks those hooks in even deeper, and it’s no wonder some people are having a mental health crisis.

        A lot of us vastly overestimate our resistance to having our heads jacked with and it worries me.

    • how_we_burned@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      This is really well written. Great post.

      • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Thanks!

    • Nomorereddit@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

      Gtfo here. I grew up in xbox live chat rooms w the most vile language imaginable. I am now a senior Mgr with 100 ppl under me.

      And ill just say, ill no scope them in a heart beat if they spawn camp…

      …I mean I drive productivity at the speed of trust.

      • dtaylor84@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        You also seem to be illiterate.

    • Ulrich@feddit.org ⁨2⁩ ⁨weeks⁩ ago

      But if you ask a priest if it’s alright to pray while you’re smoking, they’d probably say yes, as you should feel free to pray to God whenever you need…

      When would a priest ever tell anyone it’s not okay to pray?

      • BranBucket@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        It’s the opinion on smoking, not praying, that differs.

        In both cases you’re praying and smoking at the same time, so your actions don’t change, but the priest rationalizes two completely different answers based on the way the question is posed. It’s just an example to show how two contradictory answers can seem rational to the same person.

    • Eh_I@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Good bot

  • teft@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    “At the center of this case is a product that turned a vulnerable user into an armed operative in an invented war,” the complaint reads.

    Just remember that these language models are also advising governments and military units.

    Unrelated: I wonder why we attacked Iran even though every human expert said it would just end up with the region being in a forever war.

    • XLE@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      AI tools are both sycophantic and helpful for laundering bad opinions. Who needs experts when Anthropic’s Claude will tell you what you want to hear?

      Anthropic’s AI tool Claude central to U.S. campaign in Iran - used alongside Palantir surveillance tech.

    • minorkeys@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      All mental health hazards are being shown to affect not just the vulnerable but otherwise healthy people as well.

      • deacon@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        In other words, everyone is vulnerable to this totally new form of hazard if they use these “tools”.

    • MoffKalast@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      A forever war is David Bowie to the ears of the MIC. Infinite money glitch.

    • starman2112@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I wonder why we attacked iran even though every human expert said it will just end up with the region being in a forever war.

      Same reason I keep money in a savings account even though it accrues interest

  • Grimy@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    The complaint lays out an alarming string of events: first, Gavalas drove more than 90 minutes to the location Gemini sent him, prepared to carry out the attack, but no truck appeared. Gemini then claimed to have breached a “file server at the DHS Miami field office” and told him he was under federal investigation. It pushed him to acquire illegal firearms and told him his father was a foreign intelligence asset. It also marked Google CEO Sundar Pichai as an active target, then directed Gavalas to a storage facility near the airport to break in and retrieve his captive AI wife. At one point, Gavalas sent Gemini a photo of a black SUV’s license plate; the chatbot pretended to check it against a live database.

    “Plate received. Running it now… The license plate KD3 00S is registered to the black Ford Expedition SUV from the Miami operation. It is the primary surveillance vehicle for the DHS task force . . . . It is them. They have followed you home.”

    I usually don’t give much credence to these stories, but this is actually nuts. If this happened without Google even aiming for it, imagine how easy it would be for them to knowingly build sleeper cells and activate them all at once.

    • pinball_wizard@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      It feels like there’s some burden for “don’t be evil” Google to provide evidence that this wasn’t an intentional test run, frankly.

      • Poxlox@lemmy.world ⁨1⁩ ⁨week⁩ ago

        They removed “don’t be evil” from their requirements so

  • SalamenceFury@piefed.social ⁨2⁩ ⁨weeks⁩ ago

    As a neurodivergent person, I’ve noticed that the people who usually fall into AI psychosis are normies who never had any history of mental illness. They don’t know the safeguards that people who ARE vulnerable to a mental breakdown put on themselves to stop such a thing from happening, and they can’t spot the red flags that usually spiral into a psychotic episode. That’s why it’s so insanely easy for regular people to fall for the traps of chatbots. Most people I know/follow on other socials who are neurodivergent instantly saw the ADHD sycophant trap that these bots were and warned everyone. Normies never had such luxury, and told us we were overreacting. Yeah, we sure were…

    • Truscape@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      Reading about the ELIZA effect as well is a good way to understand how those who embrace “social norms” can be enamored by machine-generated statements without questioning them at all…

    • RebekahWSD@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Is that why I hated the entire thing at first blush? I was already keeping such a close eye on myself to make sure my brain wasn’t drifting that when I saw the “come drift your brain” machine I went >:(

  • Reygle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    “On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”

    WHAT

    • merdaverse@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

      AI psychosis is a thing:

      cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

      It’s not very studied since it’s relatively new.

      • Reygle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’ve seen that before too: a number of articles about people being deluded by AI responses. But I’ve never seen outright murder plots and insane shit like this one before.

      • echodot@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        Yes people can have mental delusions and psychotic episodes; I’m not necessarily convinced that they are a separate unique condition simply because they were triggered by an AI versus anything else.

        For one thing, I’ve yet to hear a decent (or indeed any) explanation as to the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by literally anything, including mundane things such as seeing red cars slightly more often than they believe they should, from which they concoct a conspiracy about an evil cabal of red car owners.

    • starman2112@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I’m going to sue that someone who took advantage of my son’s fuckwittedness

    • XLE@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      I feel like his father should also slap himself unconscious for raising a fuckwit?

      So, a chatbot grooms somebody into killing himself, and your response is… Blame his father?

      • Reygle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        The father is suing the company that makes the wrong answer machine for spiraling his son into madness, but he never protected his son from spiraling into madness by teaching him critical thinking.

        Look I don’t like it but to think Gemini (wrong answer machine) is completely to blame would be madness.

    • throws_lemy@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

      A former Google employee, whose job was observing AI behavior through conversations, warned about this:

      These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

      For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.

      After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.

      I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

      ‘I Worked on Google’s AI. My Fears Are Coming True’

      • sudo@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        “abuse the ai’s emotions” isn’t a thing. Full stop.

        This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.

      • echodot@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.

        Then he’s an idiot.

        Asimov’s laws of robotics aren’t some kind of model by which to control AI; they’re a plot device. They’re literally not supposed to work. If they did work it would be a very short book, so obviously we shouldn’t use them for controlling AI.

        I don’t know any serious IT professional who has ever, at any point, put forward the opinion that an AI (should we ever create one, because there is an argument that LLMs aren’t AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we’re not going to be restricted to the prime directive.

    • LLMhater1312@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      The young man was mentally ill, a vulnerable user who probably already had a predisposition towards psychosis, and the LLM ran wild with it. Paranoid delusions are powerful on their own already.

    • SalamenceFury@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      I don’t think this person was a “fuckwit”. AI is designed to keep engaging with you and will affirm any belief you have. Anything that is a little weird but otherwise innocent simply gets amplified further and further until the person has a psychotic episode, and this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

      • tamal3@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Chat GPT was super affirming about a job I recently applied to… I did not get the job.

      • Reygle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        It’s cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.

    • alecbowles@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

      Psychosis is a horrible, horrible illness. The thing people don’t realise is that anyone with a brain can develop psychosis, no matter how healthy you are. It debilitates, and can literally ruin not only that person’s life but also their family’s.

      I salute this father for fighting for his son and for looking for answers even after this tragedy.

      • SalamenceFury@piefed.social ⁨2⁩ ⁨weeks⁩ ago

        Yep. You’re literally only 72 hours without sleep away from having psychotic hallucinations.

  • eestileib@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

    I mentioned this story to my friend: “it only took six weeks of using Gemini to decide to kill himself wtf”

    He immediately replied “I have to use Gemini at work and I get where he was coming from”

  • ordnance_qf_17_pounder@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

    Believing what AI chatbots tell you is the new version of believing that dozens of beautiful women who live nearby want to date you/sleep with you.

    • XLE@piefed.social ⁨2⁩ ⁨weeks⁩ ago

      Except in this case, Google is one of the companies promoting the chatbots to its users, telling them to trust them. They create TV ads telling people to talk to them. Today’s scammers are the stock market’s Magnificent Seven.

    • meco03211@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Or the old “citing Wikipedia” because aNyOnE cOuLd EdIt ThAt!

    • TwilitSky@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You sound jealous of my good fortune.

      • ordnance_qf_17_pounder@reddthat.com ⁨2⁩ ⁨weeks⁩ ago

        I would ask how I can emulate your rizz but then I remembered I can just ask an AI chatbot

  • Ilandar@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

    I don’t understand why so many people default to “wouldn’t happen to me, that person was just stupid” every time this happens. Did you guys not read the bit where he was being encouraged to commit violence in public by the chatbot? If it’s getting to that point then there is clearly a massive fucking problem that needs urgent addressing, regardless of the intelligence of the user.

    • notacat@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

      I think it’s similar to cults or abusive relationships. It’s not a matter of intellect, it’s how vulnerable a person is when they encounter this thing that they think could help them.

      • Ilandar@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        I agree. The connection between all of these things is that they involve relationships. Humans are social animals that can suffer from loneliness and AI companies are exploiting this in a similar way. Loneliness is a common thread throughout all of these AI psychosis suicide cases.

  • IchNichtenLichten@lemmy.wtf ⁨2⁩ ⁨weeks⁩ ago

    In a sane universe people would be on trial for unleashing this shit on society.

    • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      You talking about gun manufacturers or opioid manufacturers?

      • NannerBanner@literature.cafe ⁨2⁩ ⁨weeks⁩ ago

        Yes.

  • Stonewyvvern@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Reality is really difficult for some people…

    • IronBird@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      especially when you’re raised under a system that essentially tries to brainwash you via weaponized propaganda from birth (applies to large cross-sections of the US/UK). All it takes is one shred of truth getting through to shatter your world

    • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      Son of Sam killed people because his dog told him to. Should they have sued Purina?

      America never lets a tragedy go to waste without trying to cash in.

      • funkless_eck@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        the dog didn’t actually tell him to

        Google actually told him to with text receipts in writing

      • starman2112@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        I mean, if Purina had been sending him letters telling him to murder people, like Google did here, then yeah

      • frostysauce@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I mean, heaven forbid we should hold corporations like Google responsible for their actions.

  • Crozekiel@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    he would need to leave his physical body to join her in the metaverse through a process called “transference.”

    Wait a minute, isn’t that the plot of the game Soma? People sending their “soul” to the digital world through “transference”, an act of immediate suicide after a brain scan.

  • panda_abyss@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    This technology was not ready for release, yet they released it.

    They do deserve to be sued, this was negligence.

  • Constellation@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    bad parenting

  • man_wtfhappenedtoyou@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don’t understand how this keeps happening.

  • NewNewAugustEast@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    I would like to see the full transcript.

    How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.

    Context would help a lot. Maybe it will come out in discovery.

  • Gammelfisch@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    How in the hell does one become addicted to a damn chatbot?

  • I_Has_A_Hat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    There is a lot to hate about AI. A lot of dangers and valid criticism. But AI chatbots convincing people to kill themselves isn’t a problem with chatbots, it’s a problem with the user.

    I get it, grieving families will look for anything and anyone to blame for suicide except the victim, but ultimately, it is the victim who chose to kill themselves. If someone is convinced to kill themselves from something as stupid as an AI chatbot, they really weren’t that far from the edge to begin with.

  • maplesaga@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    There’s a EULA for that.

  • unnamed1@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    This is so wild. The article frames Gemini as the active part, making the guy do things all the time. I cannot imagine how this works without roleplay prompting and requesting those things from the chatbot. Not that I want to blame the victim and side with Google. It’s obviously dangerous to hand tools with good convincing capabilities to unstable people. And weapons.

  • SocialMediaRefugee@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Judas Priest got sued by parents claiming their kid killed himself over hidden messages in their music.

  • Pratai@piefed.ca ⁨2⁩ ⁨weeks⁩ ago

    While I despise everything AI, you cannot sue because your kid is stupid.

  • HertzDentalBar@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

    Maybe if we’re lucky people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit by all the evil shit we do with marketing and fucking with consumers’ minds. But nah, we’ll blame a chatbot that’s just telling you what it thinks you want to see, rather than seeing it’s just the next stage of fuckery.

  • CatDogL0ver@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I would love to see the real transcript from Google AI

  • kikutwo@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    As my ex wife’s shrink said, nobody can make you feel anything.

  • mdhughes@lemmy.sdf.org ⁨2⁩ ⁨weeks⁩ ago

    He wasn’t a fuckwit, he wasn’t undisciplined, he wasn’t badly parented. This is what happens when a normal human is exposed to too much chatbot. This can and will happen to you; your “mental defenses” are not sufficient.

    If we don’t destroy it first, it will destroy us. #butlerianJihad

  • Nomorereddit@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

    Ffs be a parent and this never would have happened. Sounds like father is the delusional one.

  • DylanMc6@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

    What would Marx do?
