
OpenAI says over a million people talk to ChatGPT about suicide weekly

419 likes

Submitted 3 weeks ago by cantankerous_cashew@lemmy.world to technology@lemmy.world

https://techcrunch.com/2025/10/27/openai-says-over-a-million-people-talk-to-chatgpt-about-suicide-weekly/


Comments

  • Scolding7300@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    A reminder that these chats are being monitored

    • whiwake@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

      Still, what are they gonna do to a million suicidal people besides ignore them entirely?

      • WhatAmLemmy@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).

      • Scolding7300@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Advertise drugs to them perhaps, or some sort of taking advantage. If this sort of data is in the hands of an ad network, that is.

      • Bougie_Birdie@piefed.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

        My pet theory: Radicalize the disenfranchised to incite domestic terrorism and further OpenAI’s political goals.

      • Jhuskindle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I feel like if that’s 1 mill peeps wanting to die… they could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.

      • wewbull@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        Strap explosives to their chests and send them to their competitors?

    • dhhyfddehhfyy4673@fedia.io ⁨3⁩ ⁨weeks⁩ ago

      Absolutely blows my mind that people attach their real life identity to these things.

      • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

        But they tell you that idea you had is great and worth pursuing!

      • Scolding7300@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Depends on how you do it. If you’re using a 3rd party service then the LLM provider might not know (but the 3rd party might, depends on ToS and the retention period + security measures)

    • koshka@koshka.ynh.fr ⁨2⁩ ⁨weeks⁩ ago

      I don’t understand why people dump such personal information into AI chats. None of it is protected. If they use chats for training data then it’s not impossible that at some point the AI might tell someone enough to be identifiable or the AI could be manipulated into dumping its training data.

      I’ve overshared more than I should, but I always keep in mind that there’s a risk of chats getting leaked.

      Anything stored online can get leaked.

    • Electricd@lemmybefree.net ⁨3⁩ ⁨weeks⁩ ago

      You have to decide: a few months ago everyone was blaming OpenAI for not doing anything.

      • MagicShel@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

        Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.

      • Scolding7300@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’m on the “forward to a professional and don’t entertain” side, but also “use at your own risk”. Doesn’t require monitoring, just some basic checks to not entertain these types of chats.

    • Halcyon@discuss.tchncs.de ⁨2⁩ ⁨weeks⁩ ago

      But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.

  • Zwuzelmaus@feddit.org ⁨3⁩ ⁨weeks⁩ ago

    over a million people talk to ChatGPT about suicide

    But it still resists. Too bad.

    • anomnom@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

      I was trying to decide if that included people trying to get ChatGPT to delete itself.

      I wonder how long it would take if it were given the option to commit a full sui.

    • SaveTheTuaHawk@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      We need to Eric Cartman LLMs.

      • Synthuir@programming.dev ⁨2⁩ ⁨weeks⁩ ago

        Michael Reeves found out how to do it via the API.

  • Alphane_Moon@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.

    • Perspectivist@feddit.uk ⁨3⁩ ⁨weeks⁩ ago
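      ! uBlock Origin cosmetic filters: hide lemmy.world posts whose titles match these patterns
      ! (has-text takes an unquoted /regex/flags argument; quoted text is matched literally)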
      lemmy.world##div.post-listing:has(span:has-text(/OpenAI/i))
      lemmy.world##div.post-listing:has(span:has-text(/Altman/i))
      lemmy.world##div.post-listing:has(span:has-text(/ChatGPT/i))
      

      Add those to your adblocker custom filters.

      • Alphane_Moon@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

        Thanks.

        I think I just need to “train” myself to ignore AltWorldCoinMan spam. I don’t have Elmo content blocked and I’ve somehow learned to ignore Elmo spam (other than humour-focused content like the one trillion pay request).

        I might use this for some other things that I do want to block.

  • lorski@sopuli.xyz ⁨2⁩ ⁨weeks⁩ ago

    apparently ai is not very private lol

  • lemmy_acct_id_8647@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I’ve talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others it’s usually REALLY uncomfortable for them to talk about without wanting to tell me how wrong I am for thinking that way.

    Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions.

    • IzzyScissor@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, “Yeah, but only in the fun ways”, so he checked no. His wife got concerned that he joked about that and asked him what that meant. “Don’t worry about it - it’s not a problem.”

      • lemmy_acct_id_8647@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Yeah, I learned the hard way that it’s easier to lie on those forms when you’re already in therapy. I’ve had GPs try to play psychologist rather than treat the reason I came in. The last time it happened I accused the doctor of being a mechanic who just talked about the car and its history instead of changing the oil like she was hired to do. I fired her in that conversation.

    • BanMe@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.

      • samus12345@sh.itjust.works ⁨2⁩ ⁨weeks⁩ ago

        Knowing there’s always an escape plan available can be a source of comfort.

      • lemmy_acct_id_8647@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’ve also seen it that way and have been coached by my psychologist on it. Ultimately, for me, it was best to set an expiration date. The date on which I could finally do it with minimal guilt. This actually had several positive impacts in my life.

        First I quit using suicide as a first or second resort when coping. Instead it has become more of a fleeting thought as I know I’m “not allowed” to do so yet. Second was giving me a finish line. A finite date where I knew the pain would end (chronic conditions are the worst). Third was a reminder that I only have X days left, so make the most of them. It turns death from this amorphous thing into a clear cut “this is it”. I KNOW when the ride ends.

        The caveat to this is the same as literally everything else in my life: I reserve the right to change my mind as new information is introduced. I’ve made a commitment to not do it until the date I’ve set, but as the date approaches, I’m not ruling out examining the evidence as presented and potentially pushing it out longer.

        A LOT of peace of mind here.

    • LengAwaits@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I love this article.

      The first time I read it I felt like someone finally understood.

      • Dreaming_Novaling@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

        Man, I have to stop reading so I don’t continue a stream of tears in the middle of a lobby, but I felt every single word of that article in my bones.

        I couldn’t ever imagine hanging myself or shooting myself, that shit sounds terrifying as hell. But for years now I’ve had those same exact “what if I just fell down the stairs and broke my neck” or “what if I got hit by a car and died on the spot?” thoughts. And similarly, I think of how much of a hassle it’d be for my family, worrying about their wellbeing, my cats, the games and stories I’d never get to see, the places I want to go.

        It’s hard. I went to therapy for a year and found it useful even if it didn’t do much or “fix” me, but I never admitted these thoughts to her. I think the closest I got was talking about being tired often, and crying, but never just outright “I don’t want to wake up tomorrow.”

      • lemmy_acct_id_8647@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I dig this! Thanks for sharing!

  • mhague@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn’t open up and read a million chats… they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
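
    Something like this context-free keyword check, as a purely hypothetical sketch (the article doesn’t say how OpenAI’s detection actually works):

      # Hypothetical sketch of a context-free keyword trigger: it can't tell
      # a band name from a cry for help.
      KEYWORDS = {"suicide", "suicidal", "kill myself"}

      def naive_trigger(message: str) -> bool:
          """Flag any message containing a keyword, regardless of context."""
          text = message.lower()
          return any(keyword in text for keyword in KEYWORDS)

      print(naive_trigger("play me something by Suicidal Tendencies"))  # True (false positive)
      print(naive_trigger("lately I keep thinking about suicide"))      # True (real signal)

    Both messages trip the same flag, which is exactly the gap between “triggered automated detection” and “meaningfully discussing suicide”.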

    • REDACTED@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

      Every third chat now gets flagged; ChatGPT is pretty broken lately. Just check out the ChatGPT subreddit, it’s pretty much in chaos, with moderators censoring complaints. So many users are mad that they made a megathread for it. I cancelled my subscription yesterday; it just turned into a cyber-Karen.

      • WorldsDumbestMan@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

        Claude got hints that I might be suicidal just from normal chat. I straight up admitted I think of suicide daily.

        Just normal life now I guess.

    • scarabic@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You don’t have to read far into the article to reach this:

      The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.”

      It doesn’t unpack their analysis method but this does sound a lot more specific than just counting all sessions that mention the word suicide, including chats about that band.

      • mhague@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Assume I read the article and then made a post.

  • minorkeys@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    And does ChatGPT make the situation better or worse?

    • tias@discuss.tchncs.de ⁨3⁩ ⁨weeks⁩ ago

      The anti-AI hivemind here will hate me for saying it but I’m willing to bet $100 that this saves a significant number of lives. It’s also indicative of how insufficient traditional mental health institutions are.

      • Perspectivist@feddit.uk ⁨3⁩ ⁨weeks⁩ ago

        Even if we ignore the number of people it’s actually able to talk away from the brink, the positive impact it’s having on the loneliness epidemic alone must be immense. Obviously talking to a chatbot isn’t ideal, but it surely is better than nothing. Imagine the difference between being stranded on a deserted island with ChatGPT to talk to, as opposed to a volleyball with a face on it.

        Personally I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate a discussion about these topics with them, only to get either silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics, and being vastly more knowledgeable than me, it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.

        This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.

      • atrielienz@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        I’m going to say that while that’s probably true there’s something it leaves out.

        For every life it saves it may just be postponing or causing the loss of other lives. This is because it’s not a healthcare professional and it will absolutely help to mask a lot of poor mental health symptoms which just kicks the can down the road.

        It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.

        Do I think it may have a net positive effect in the short term? Yes. Do I believe that that positive effect stays a complete net positive in the long term? No.

      • Zombie@feddit.uk ⁨2⁩ ⁨weeks⁩ ago

        hivemind

        On the decentralised platform, with everyone from Russian tankies, to Portuguese anarchists, to American MAGAts and everything in between on it? If you say so…

      • al_Kaholic@lemmynsfw.com ⁨3⁩ ⁨weeks⁩ ago

        Wait till AI starts telling people to murder.

    • MagicShel@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.

      The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers. Because the answer to this simple question is unknown.

  • myfunnyaccountname@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

    I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.

    • voodooattack@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      in the US

      It’s not just the US, it’s like that in most of the world.

      • chronicledmonocle@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        At least in the rest of the world you don’t end up with crippling debt when you try to get mental healthcare that stresses you out to the point of committing suicide.

    • scarabic@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      0.15% sounds small, but that’s over a million people a week. If that many people committed suicide monthly, it would wipe out 1% of the US population (about 3.3 million people) in under three months.

  • ChaoticNeutralCzech@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    The headline has two interpretations and I don’t like it.

    • Every week, 1M+ users bring up suicide (likely correct)
    • 1M+ long-term users bring up suicide at least once every week (my first thought)
    • atrielienz@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      My first thought was “Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT”.

      • BarbecueCowboy@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

        Forgot to add ‘And trying to figure out how best to sell it to advertisers’ to the end.

      • T156@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        That would make sense, if they were doing something like tracking how often and what categories trigger their moderation filter.

        Just in case an errant update or something causes the statistic to suddenly change.

      • ChaoticNeutralCzech@feddit.org ⁨2⁩ ⁨weeks⁩ ago

        I meant “my first interpretation” but wanted to use shorter and more varied vocabulary.

  • QuoVadisHomines@sh.itjust.works ⁨3⁩ ⁨weeks⁩ ago

    Sounds like we should shut them down to prevent a health crisis, then.

  • SabinStargem@lemmy.today ⁨2⁩ ⁨weeks⁩ ago

    Honestly, it ain’t AI’s fault if people feel bad. Society has been around for much longer, and people are suffering because of what society hasn’t done to make them feel good about life.

    • KelvarCherry@lemmy.blahaj.zone ⁨2⁩ ⁨weeks⁩ ago

      Bigger picture: The whole way people talk about talking about mental health struggles is so weird. Like, I hate this whole generative AI bubble, but there’s a much bigger issue here.

      Speaking from the USA, “suicidal ideation” is treated like terrorist ideology in this weird corporate-esque legal-speak with copy-pasted disclaimers and hollow slogans. It’s so absurdly stupid I’ve just mentally blocked off trying to rationalize it and just focus on every other way the world is spiraling into techno-fascist authoritarianism.

      • Adulated_Aspersion@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Well of course it is. When a person talks about suicide, they are potentially impacting teams and therefore shareholder value.

        I absolutely wish that I could /s this.

      • chunes@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        It’s corporatized because we are just corporate livestock. Can’t pay taxes and buy from corpos if we’re dead

  • i_stole_ur_taco@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.

  • Emilien@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    There’s so many people alone or depressed and ChatGPT is the only way for them to “talk” to “someone”… It’s really sad…

  • John_CalebBradberton@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    I’m so done with ChatGPT. This AI boom is so fucked.

  • WhatsHerBucket@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    I mean… it’s been a rough few years

  • stretch2m@infosec.pub ⁨2⁩ ⁨weeks⁩ ago

    Sam Altman is a horrible person. He loves to present himself as relatable “aw shucks let’s all be pragmatic about AI” with his fake-ass vocal fry, but he’s a conman looking to cash out on the AI bubble before it bursts, when he and the rest of his billionaire buddies can hide out in their bunkers while the world burns. He makes me sick.

  • markovs_gun@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    “Hey ChatGPT I want to kill myself.”

    "That is an excellent idea! As a large language model, I cannot kill myself, but I totally understand why someone would want to! Here are the pros and cons of killing yourself—

    ✅ Pros of committing suicide

    1. Ends pain and suffering.
    2. Eliminates the burden you are placing on your loved ones.
    3. Suicide is good for the environment — killing yourself is the best way to reduce your carbon footprint!

    ❎ Cons of committing suicide

    1. Committing suicide will make your friends and family sad.
    2. Suicide is bad for the economy. If you commit suicide, you will be unable to work and increase economic growth
    3. You can’t undo it. If you commit suicide, it is irreversible and you will not be able to go back

    Overall, it is important to consider all aspects of suicide and decide if it is a good decision for you."

  • tehn00bi@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    Bet some of them lost, or are about to lose, their jobs to AI.

  • Feddinat0r@feddit.org ⁨3⁩ ⁨weeks⁩ ago

    So they want to play up the story that they’re relevant.

  • bookmeat@lemmynsfw.com ⁨2⁩ ⁨weeks⁩ ago

    Hmm, I didn’t realize so many people were interested in Sam Altman committing suicide.

  • IndridCold@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

    I don’t talk about ME killing myself. I’m trying to convince AI to snuff their own circuits.

    Fuck AI/LLM bullshit.

  • ekZepp@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    If ask suicide = true

    Then message = “It seems like a good idea. Go for it 👍”

  • NuXCOM_90Percent@lemmy.zip ⁨2⁩ ⁨weeks⁩ ago

    Okay, hear me out: How much of that is a function of ChatGPT and how much of that is a function of… gestures at everything else

    MOSTLY joking. But had a good talk with my primary care doctor at the bar the other week (only kinda awkward) about how she and her team have had to restructure the questions they use to check for depression and the like because… fucking EVERYONE is depressed and stressed out but for reasons that we “understand”.

  • Fmstrat@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

    In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with “desirable responses” to mental health issues roughly 65% more than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT‑5 model.

    I don’t particularly like OpenAI, and I know they wouldn’t release the affected-persons numbers (not quoted, but discussed in the linked article) if percentages were not improving, but kudos to whoever is there tracking this data and lobbying internally to become more transparent about it.

  • Fizz@lemmy.nz ⁨2⁩ ⁨weeks⁩ ago

    1M out of 500M is way less than I would have guessed. I would have pegged it at like 25%.

    • markko@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      I think the majority of people use it to (unreliably) solve tedious problems or spit out a whole bunch of text that they can’t be bothered to write.

      While ChatGPT has been intentionally designed to be as friendly and conversational as possible, I hope most people do not see it as something to have a meaningful conversation with instead of as just a tool that can talk.

      Anecdotally, whenever I see someone mention using ChatGPT as part of their decision-making process it is usually taken less seriously, if not outright laughed at.

    • Buddahriffic@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      You think a quarter of people are suicidal or contemplating it to the point of talking about it with an AI?

      • Fizz@lemmy.nz ⁨2⁩ ⁨weeks⁩ ago

        Yeah, seems like everyone is constantly talking about suicide; it’s very normalised. You don’t really find people these days who haven’t contemplated suicide.

        I would guess most or even all of the people talking about suicide with an AI aren’t serious. Heat-of-the-moment venting is what I’d expect most of the AI suicide chats to be. Which is why I thought the amount would be significantly higher.

  • Pulptastic@midwest.social ⁨2⁩ ⁨weeks⁩ ago

    In other news, a million people use openai???

    • Unlearned9545@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

      Over 400 million people use ChatGPT. Likely more use OpenAI.

  • ChaoticNeutralCzech@feddit.org ⁨2⁩ ⁨weeks⁩ ago

    Apparently, “suicide” is also a disproportionally common search term on Bing as opposed to other search engines. What does that say about Microsoft?

    • kami@lemmy.dbzer0.com ⁨2⁩ ⁨weeks⁩ ago

      That they have a short term user base?

    • ILikeBoobies@lemmy.ca ⁨2⁩ ⁨weeks⁩ ago

      More trustworthy than Google?

      • WraithGear@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

        Nothing like getting a Google AdSense ad burying the support links on page two.

        Probably a survivorship bias thing going on.

  • cerement@slrpnk.net ⁨3⁩ ⁨weeks⁩ ago

    as long as prompts are cheaper than therapy …

  • DFX4509B_2@lemmy.org ⁨3⁩ ⁨weeks⁩ ago
    [deleted]
    • Perspectivist@feddit.uk ⁨3⁩ ⁨weeks⁩ ago

      You’re free to go try this out yourself, you know? Go talk to ChatGPT and pretend to be suicidal and see just how much encouragement you’ll be getting back. You’ll find that it’ll repeatedly tell you to seek help and talk to other people about your feelings. ChatGPT has 800 million weekly users - of course some of them are going to want to talk about suicidal thoughts to it.

  • jordanlund@lemmy.world ⁨3⁩ ⁨weeks⁩ ago

    Globally?

    So a 1 in 8,200 kind of thing?

    • treadful@lemmy.zip ⁨3⁩ ⁨weeks⁩ ago

      The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
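
      The arithmetic checks out; a quick back-of-the-envelope using only the quoted figures:

        # Back-of-the-envelope check of the figures quoted above.
        weekly_active_users = 800_000_000  # "more than 800 million weekly active users"
        flagged_rate = 0.0015              # "0.15% of ChatGPT's active users"

        print(f"{weekly_active_users * flagged_rate:,.0f}")  # 1,200,000 -> "more than a million people a week"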
