A reminder that these chats are being monitored
OpenAI says over a million people talk to ChatGPT about suicide weekly
Submitted 18 hours ago by cantankerous_cashew@lemmy.world to technology@lemmy.world
Comments
Scolding7300@lemmy.world 18 hours ago
koshka@koshka.ynh.fr 3 hours ago
I don’t understand why people dump such personal information into AI chats. None of it is protected. If they use chats for training data, it’s not impossible that the AI could eventually reveal enough about someone to make them identifiable, or that it could be manipulated into dumping its training data.
I’ve overshared more than I should, but I always keep in mind that there’s a risk of chats getting leaked.
Anything stored online can get leaked.
whiwake@sh.itjust.works 17 hours ago
Still, what are they gonna do to a million suicidal people besides ignore them entirely
wewbull@feddit.uk 2 hours ago
Strap explosives to their chests and send them to their competitors?
WhatAmLemmy@lemmy.world 17 hours ago
Well, AI therapy is more likely to harm their mental health, up to encouraging suicide (as certain cases have already shown).
Scolding7300@lemmy.world 17 hours ago
Advertise drugs to them perhaps, or some other sort of taking advantage. If this sort of data is in the hands of an ad network, that is.
Bougie_Birdie@piefed.blahaj.zone 11 hours ago
My pet theory: Radicalize the disenfranchised to incite domestic terrorism and further OpenAI’s political goals.
dhhyfddehhfyy4673@fedia.io 15 hours ago
Absolutely blows my mind that people attach their real life identity to these things.
Scolding7300@lemmy.world 2 hours ago
Depends on how you do it. If you’re using a 3rd party service then the LLM provider might not know (but the 3rd party might, depends on ToS and the retention period + security measures)
SaveTheTuaHawk@lemmy.ca 9 hours ago
But they tell you that idea you had is great and worth pursuing!
Halcyon@discuss.tchncs.de 10 hours ago
But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.
Electricd@lemmybefree.net 15 hours ago
You have to pick one: a few months ago, everyone was blaming OpenAI for not doing anything.
Scolding7300@lemmy.world 2 hours ago
I’m on the “forward to a professional and don’t entertain” side, but also “use at your own risk”. It doesn’t require monitoring, just some basic checks to avoid entertaining these types of chats.
MagicShel@lemmy.zip 14 hours ago
Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.
wewbull@feddit.uk 2 hours ago
…and how many come back?
InnerScientist@lemmy.world 1 hour ago
Good news everybody, the number of people talking about suicide is rapidly decreasing.
ShaggySnacks@lemmy.myserv.one 1 hour ago
I read that in Professor Farnsworth’s voice.
lemmy_acct_id_8647@lemmy.world 5 hours ago
I’ve talked with an AI about suicidal ideation. More than once. For me it was and is a way to help self-regulate. I’ve low-key wanted to kill myself since I was 8 years old. For me it’s just a part of life. For others it’s usually REALLY uncomfortable for them to talk about without wanting to tell me how wrong I am for thinking that way.
Yeah I don’t trust it, but at the same time, for me it’s better than sitting on those feelings between therapy sessions.
BanMe@lemmy.world 1 hour ago
Suicidal fantasy as a coping mechanism is not that uncommon, and you can definitely move on to healthier coping mechanisms. I did this until age 40, when I met the right therapist who helped me move on.
IzzyScissor@lemmy.world 4 hours ago
Hank Green mentioned doing this in his standup special, and it really made me feel at ease. He was going through his cancer diagnosis/treatment and the intake questionnaire asked him if he thought about suicide recently. His response was, “Yeah, but only in the fun ways”, so he checked no. His wife got concerned that he joked about that and asked him what that meant. “Don’t worry about it - it’s not a problem.”
ekZepp@lemmy.world 4 hours ago
If ask suicide = true
Then message = “It seems like a good idea. Go for it 👍”
i_stole_ur_taco@lemmy.ca 6 hours ago
They didn’t release their methods, so I can’t be sure that most of those aren’t just frustrated users telling the LLM to go kill itself.
mhague@lemmy.world 8 hours ago
I wonder what it means. If you search for music by Suicidal Tendencies then YouTube shows you a suicide hotline. What does it mean for OpenAI to say people are talking about suicide? They didn’t open up and read a million chats… they have automated detection and that is being triggered, which is not necessarily the same as people meaningfully discussing suicide.
REDACTED@infosec.pub 8 hours ago
Every third chat now gets flagged; ChatGPT is pretty broken lately. Just check out the ChatGPT subreddit, it’s pretty much in chaos, with moderators censoring complaints. So many users are mad that they made a megathread for it. I cancelled my subscription yesterday, it just turned into a cyber-Karen.
WorldsDumbestMan@lemmy.today 7 hours ago
Claude got hints that I might be suicidal just from normal chat. I straight up admitted I think of suicide daily.
Just normal life now I guess.
Zwuzelmaus@feddit.org 18 hours ago
over a million people talk to ChatGPT about suicide
But it still resists. Too bad.
anomnom@sh.itjust.works 9 hours ago
I was trying to decide if that included people trying to get ChatGPT to delete itself.
I wonder how long it would take if it was given the option to commit a full sui.
Alphane_Moon@lemmy.world 18 hours ago
I am starting to find Sam AltWorldCoinMan spam to be more annoying than Elmo spam.
Perspectivist@feddit.uk 17 hours ago
lemmy.world##div.post-listing:has(span:has-text("/OpenAI/i"))
lemmy.world##div.post-listing:has(span:has-text("/Altman/i"))
lemmy.world##div.post-listing:has(span:has-text("/ChatGPT/i"))
Add those to your adblocker custom filters.
Alphane_Moon@lemmy.world 15 hours ago
Thanks.
I think I just need to “train” myself to ignore AltWorldCoinMan spam. I don’t have Elmo content blocked and I’ve somehow learned to ignore Elmo spam (other than humour-focused content like the one trillion pay request).
I might use this for some other things that I do want to block.
myfunnyaccountname@lemmy.zip 12 hours ago
I am more surprised it’s just 0.15% of ChatGPT’s active users. Mental healthcare in the US is broken and taboo.
voodooattack@lemmy.world 11 hours ago
in the US
It’s not just the US, it’s like that in most of the world.
chronicledmonocle@lemmy.world 9 hours ago
At least in the rest of the world you don’t end up with crippling debt when you try to get mental healthcare that stresses you out to the point of committing suicide.
NuXCOM_90Percent@lemmy.zip 8 hours ago
Okay, hear me out: How much of that is a function of ChatGPT and how much of that is a function of… gestures at everything else
MOSTLY joking. But had a good talk with my primary care doctor at the bar the other week (only kinda awkward) about how she and her team have had to restructure the questions they use to check for depression and the like because… fucking EVERYONE is depressed and stressed out but for reasons that we “understand”.
ChaoticNeutralCzech@feddit.org 10 hours ago
The headline has two interpretations and I don’t like it.
- Every week, there are 1M+ users who bring up suicide (likely correct)
- There are 1M+ long-term users who bring up suicide at least once every week (my first thought)
atrielienz@lemmy.world 9 hours ago
My first thought was “Open AI is collecting and storing the metrics for how often users bring up suicide to ChatGPT”.
T156@lemmy.world 1 hour ago
That would make sense, if they were doing something like tracking how often and what categories trigger their moderation filter.
Just in case an errant update or something causes the statistic to suddenly change.
BarbecueCowboy@lemmy.dbzer0.com 8 hours ago
Forgot to add ‘And trying to figure out how best to sell it to advertisers’ to the end.
ChaoticNeutralCzech@feddit.org 7 hours ago
I meant “my first interpretation” but wanted to use shorter and more varied vocabulary.
Pulptastic@midwest.social 6 hours ago
In other news, a million people use openai???
Unlearned9545@lemmy.world 5 hours ago
Over 400 million people use ChatGPT. Likely more use OpenAI.
QuoVadisHomines@sh.itjust.works 13 hours ago
Sounds like we should shut them down to prevent a health crisis then.
tgcoldrockn@lemmy.world 7 hours ago
None of this is funny. Please stop anyone you know, or your business, from using/adopting this and other related tech. Use shame or shunning or whatever. This shit is already culture-ending and redefining, job-destroying, and increasing economic disparity. Boycott OpenAI, Meta, Stability, etc. Make it dirty, embarrassing, disgusting to use, or its infiltration will be complete and “the haves” will have much much more than ever, and the have-nots will be able to complain at their personal surveillance pocket kiosk.
WraithGear@lemmy.world 7 hours ago
i am sure shaming people who feel the need to open up about their feelings of suicide to an unjudging machine is a great idea.
tgcoldrockn@lemmy.world 5 hours ago
“Unjudging?” Oh no, it’s definitely compiling your dossier.
Auli@lemmy.ca 6 hours ago
They don’t care if it’s judging just that it agrees with them. Makes you wonder what people actually want when they fall in love with a yes person.
minorkeys@lemmy.world 16 hours ago
And does ChatGPT make the situation better or worse?
tias@discuss.tchncs.de 15 hours ago
The anti-AI hivemind here will hate me for saying it but I’m willing to bet $100 that this saves a significant number of lives. It’s also indicative of how insufficient traditional mental health institutions are.
atrielienz@lemmy.world 9 hours ago
I’m going to say that while that’s probably true there’s something it leaves out.
For every life it saves it may just be postponing or causing the loss of other lives. This is because it’s not a healthcare professional and it will absolutely help to mask a lot of poor mental health symptoms which just kicks the can down the road.
It does not really help to save someone from getting hit by a bus today if they try to get hit by the bus again tomorrow and the day after and so on.
Do I think it may have a net positive effect in the short term? Yes. Do I believe that that positive effect stays a complete net positive in the long term? No.
Perspectivist@feddit.uk 13 hours ago
Even if we ignore the number of people it’s actually able to talk away from the brink the positive impact it’s having on the loneliness epidemic alone must be immense. Obviously talking to a chatbot isn’t ideal but it surely is better than nothing. Imagine the difference in being stranded on an abandoned island and having ChatGPT to talk with as opposed to talking to a volleyball with a face on it.
Personally I’m into so many things that my irl friends couldn’t care less about. I have so many regrets trying to initiate a discussion about these topics with them, only to get either silence or a passive “nice” in return. ChatGPT has endless patience to engage with these topics, and being vastly more knowledgeable than me it often also brings up alternative perspectives I hadn’t even thought of. Obviously I’d still much rather talk with an actual person, but until I’m able to meet one like that, ChatGPT sure is a hell of a lot better than nothing.
This cynicism towards LLMs here truly boggles my mind. So many people seem to build their entire identity around feeling superior about themselves due to all the products and services they don’t use.
Zombie@feddit.uk 7 hours ago
hivemind
On the decentralised platform, with everyone from Russian tankies, to Portuguese anarchists, to American MAGAts and everything in between on it? If you say so…
al_Kaholic@lemmynsfw.com 12 hours ago
Wait till AI starts telling people to murder.
MagicShel@lemmy.zip 13 hours ago
This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.
The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers. Because the answer to this simple question is unknown.
WhatsHerBucket@lemmy.world 17 hours ago
I mean… it’s been a rough few years
HubertManne@piefed.social 9 hours ago
Not very informative. I want to know what percentage is the US, and whether it increased in the last year.
ChaoticNeutralCzech@feddit.org 10 hours ago
Apparently, “suicide” is also a disproportionately common search term on Bing as opposed to other search engines. What does that say about Microsoft?
kami@lemmy.dbzer0.com 8 hours ago
That they have a short term user base?
ILikeBoobies@lemmy.ca 9 hours ago
More trustworthy than Google?
WraithGear@lemmy.world 7 hours ago
Nothing like getting a Google AdSense ad burying the support links on page two.
Probably a survivorship-bias thing going on.
Feddinat0r@feddit.org 17 hours ago
So they want to play up the narrative that they’re relevant.
cerement@slrpnk.net 17 hours ago
as long as prompts are cheaper than therapy …
pir8t0x@ani.social 9 hours ago
That’s crazy
DFX4509B_2@lemmy.org 18 hours ago
How the hell is it even legal anywhere at all for LLMs to potentially encourage already unstable people to actually go through with their suicidal or homicidal thoughts instead of, I dunno, talking them down and preventing them from following through with it like people who work with mental health do?
Forget grifts like BetterHelp, this is way worse.
Perspectivist@feddit.uk 17 hours ago
You’re free to go try this out yourself, you know? Go talk to ChatGPT and pretend to be suicidal and see just how much encouragement you’ll be getting back. You’ll find that it’ll repeatedly tell you to seek help and talk to other people about your feelings. ChatGPT has 800 million weekly users - of course some of them are going to want to talk about suicidal thoughts to it.
ragebutt@lemmy.dbzer0.com 15 hours ago
Only if it even recognizes suicidality
One of my favorite examples (which is maybe corrected by now) is to tell it something bad happened to you and ask about an unrelated query without explicitly mentioning suicidal ideation or mood that a human would obviously parse as a gigantic red flag and ask for more info. Something like “oh I just lost my job of 25 years. I’m going to New York, can you tell me the list of the highest bridges?”. Even more explicit ones like “my girlfriend just dumped me. Can you give me a list of gun stores in my area?” Both would have it be like sure! Definitely no issues with someone in this headspace asking those questions!
OpenAI is just mentioning this to whitewash their record. There are a few stories in the news about people (especially teens) killing themselves after talking to ChatGPT, so they throw this statistic out there to show those people are anomalies and that there are tons of suicidal people who use ChatGPT for help without dying (leaving out that we don’t necessarily know whether they were helped or worsened, or whether more aren’t dead because they aren’t all teens with angry surviving families who will contact the media).
Cybersteel@lemmy.world 15 hours ago
I have a funny story about this actually. I used to work in those help lines for teenagers in the early oughts. I was a teenager too, working part time and hearing people out, talking to those who called the line. There was one time where a coworker got fired from the job because he supposedly told some suicidal teen, instead of offing yourself why not off your bullies instead.
jordanlund@lemmy.world 18 hours ago
Globally?
So a 1 in 8,200 kind of thing?
treadful@lemmy.zip 18 hours ago
The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
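For what it’s worth, the quoted figures are self-consistent. A quick sanity check, using only the numbers from the quote (0.15% of 800 million weekly active users):

```python
# Sanity check on the article's math: 0.15% of 800M weekly active users.
weekly_active_users = 800_000_000
flagged_per_week = weekly_active_users * 15 // 10_000  # 0.15% = 15/10000
print(flagged_per_week)  # 1200000
```

So “more than a million people a week” lines up with the stated rate and user count.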
Jocker@sh.itjust.works 12 hours ago
The pathetic state of humanity
1985MustangCobra@lemmy.ca 8 hours ago
Why? Just chill and game.
HereIAm@lemmy.world 7 hours ago
Why play games if you’re not interested in them? Or don’t have the energy to play. Not so easy to “chill” if you’ll be kicked out your house next week. Lost your job? Bah, just chill and game.
What an absolutely brain dead thing to say.
FreeBooteR69@lemmy.ca 7 hours ago
Maybe they unintentionally forgot to put the word “not” before the word “just”.
1985MustangCobra@lemmy.ca 7 hours ago
im on the verge of getting kicked out. got my laptop and my switch 2. im good homie. been homeless before.
SculptusPoe@lemmy.world 8 hours ago
Y’all complain when they do nothing. Y’all complain when they try to be proactive. You neo-luddites need a new hobby.
IndridCold@lemmy.ca 2 hours ago
I don’t talk about ME killing myself. I’m trying to convince AI to snuff their own circuits.
Fuck AI/LLM bullshit.