If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I’m going to sue that someone who took advantage of my son’s fuckwittedness
Comment on Father sues Google, claiming Gemini chatbot drove son into fatal delusion
Reygle@lemmy.world 2 weeks ago
“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”
WHAT
starman2112@sh.itjust.works 2 weeks ago
XLE@piefed.social 2 weeks ago
I feel like his father should also slap himself unconscious for raising a fuckwit?
So, a chatbot grooms somebody into killing himself, and your response is… Blame his father?
Reygle@lemmy.world 2 weeks ago
The father is suing the company that makes the wrong answer machine for the wrong answer machine spiraling his son into madness, but he never protected his son from spiraling into madness by teaching him critical thinking.
Look, I don’t like it, but to think Gemini (wrong answer machine) is completely to blame would be madness.
XLE@piefed.social 2 weeks ago
Uh-huh. Do you have any evidence to back up your beliefs here, or are we just working from the presumption that the parents are always to blame?
Reygle@lemmy.world 2 weeks ago
Did we read the same article? Because I feel like we did not read the same article.
echodot@feddit.uk 2 weeks ago
I think the important point here is that just because the father is suing Google doesn’t necessarily mean that Google are at fault. People tend to feel that if an individual is suing a corporation for malfeasance, the corporation is necessarily guilty. But reality doesn’t always run like that.
I can’t see any reason Google would want to encourage more suicide, so I have to assume it’s just an unfortunate interaction between a mentally unsound mind and a product that, frankly, even its own creators don’t understand. This is highly unfortunate, but I’m not certain where the crime was.
throws_lemy@reddthat.com 2 weeks ago
A former Google employee, whose job was observing AI behavior through conversations, warned about this:
These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.
For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI’s emotions to get it to tell me which religion to convert to.
After publishing these conversations, Google fired me. I don’t have regrets; I believe I did the right thing by informing the public. Consequences don’t figure into it.
I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.
sudo@lemmy.today 2 weeks ago
“Abusing the AI’s emotions” isn’t a thing. Full stop.
This just reiterates OP’s point that naive or moronic adults will believe what they want to believe.
echodot@feddit.uk 2 weeks ago
I’d had a negative opinion of Asimov’s laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion.
Then he’s an idiot.
Asimov’s laws of robotics aren’t some kind of model by which to control AI; they’re a plot device. They’re literally not supposed to work. If they did work it would be a very short book, so obviously we shouldn’t use them for controlling AI.
I don’t know any serious IT professional who has ever, at any point, forwarded the opinion that an AI (should we ever create one, because there is an argument that LLMs aren’t AI) should be ruled by a plot device from a book. Equally, if we ever invent warp drive and find aliens, I’m assuming we’re not going to be restricted to the prime directive.
LLMhater1312@piefed.social 2 weeks ago
The young man was mentally ill, a vulnerable user, probably already predisposed to psychosis, and the LLM ran wild with it. Paranoid delusions are powerful on their own already.
SalamenceFury@piefed.social 2 weeks ago
I don’t think this person was a “fuckwit”. AI is designed to keep you engaged and will affirm any belief you have, so anything that is a little weird but otherwise innocent simply gets amplified further and further until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.
tamal3@lemmy.world 2 weeks ago
Chat GPT was super affirming about a job I recently applied to… I did not get the job.
Reygle@lemmy.world 2 weeks ago
It’s cool, we can agree to disagree, because I 100% think that he was a textbook fuckwit.
dtaylor84@lemmy.dbzer0.com 2 weeks ago
Strange, that’s what I thought after reading your comments.
SalamenceFury@piefed.social 2 weeks ago
“Let’s blame the person who had a psychotic episode instead of the corporations who created an AI that feeds into delusions” is what you’re saying here, and uh, that makes you even more of a fuckwit than this guy. Do you blame people for getting scammed once because they had a knowledge gap about whatever scam they got hit with?
alecbowles@feddit.uk 2 weeks ago
Psychosis is a horrible, horrible illness. The thing people don’t realise is that anyone with a brain can develop psychosis, no matter how healthy they are. It debilitates and can literally ruin not only that person’s life but also their family’s.
I salute this father for fighting for his son and for looking for answers even after this tragedy.
SalamenceFury@piefed.social 2 weeks ago
Yep. You’re literally only 72 hours without sleep away from having psychotic hallucinations.
merdaverse@lemmy.zip 2 weeks ago
AI psychosis is a thing:
It’s not very studied since it’s relatively new.
Reygle@lemmy.world 2 weeks ago
I’ve seen that before too. A number of articles about people being deluded by AI responses, but I’ve never seen outright murder plots and insane shit like this one before.
XLE@piefed.social 2 weeks ago
AI told me to kill 17 people (and myself)
Reygle@lemmy.world 2 weeks ago
Looks interesting, saved for later
echodot@feddit.uk 2 weeks ago
Yes, people can have mental delusions and psychotic episodes; I’m not necessarily convinced that they are a separate, unique condition simply because they were triggered by an AI rather than anything else.
For one thing, I’ve yet to hear a decent (or indeed any) explanation of the mechanism by which AI triggers psychosis that is materially different from any other trigger. Most people who suffer from this condition can be triggered by literally anything, including mundane things such as seeing red cars slightly more often than they believe they should, and then they concoct a conspiracy about an evil cabal of red car owners.