Absolutely blows my mind that people attach their real life identity to these things.
Comment on OpenAI says over a million people talk to ChatGPT about suicide weekly
Scolding7300@lemmy.world 1 day ago
A reminder that these chats are being monitored
dhhyfddehhfyy4673@fedia.io 1 day ago
Scolding7300@lemmy.world 1 day ago
Depends on how you do it. If you’re using a 3rd party service then the LLM provider might not know (but the 3rd party might, depends on ToS and the retention period + security measures)
SaveTheTuaHawk@lemmy.ca 1 day ago
But they tell you that idea you had is great and worth pursuing!
koshka@koshka.ynh.fr 1 day ago
I don’t understand why people dump such personal information into AI chats. None of it is protected. If the chats are used for training data, it’s not impossible that the AI might eventually reveal enough about someone to be identifiable, or be manipulated into dumping its training data.
I’ve overshared more than I should, but I always keep in mind that there’s a risk of chats getting leaked.
Anything stored online can get leaked.
Halcyon@discuss.tchncs.de 1 day ago
But imagine the chances for your own business! Absolutely no one will steal your ideas before you can monetize them.
Electricd@lemmybefree.net 1 day ago
You have to decide: a few months ago everyone was blaming OpenAI for not doing anything.
MagicShel@lemmy.zip 1 day ago
Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.
Scolding7300@lemmy.world 1 day ago
I’m on the “forward to a professional and don’t entertain” side, but also “use at your own risk”. Doesn’t require monitoring, just some basic checks to not entertain these types of chats.
whiwake@sh.itjust.works 1 day ago
Still, what are they gonna do with a million suicidal people besides ignore them entirely?
Jhuskindle@lemmy.world 1 hour ago
I feel like if that’s 1 mill peeps wanting to die… They could, say, join a revolution to take back our free government? Or make it more free? Shower thoughts.
WhatAmLemmy@lemmy.world 1 day ago
Well, AI therapy is more likely to harm their mental health, up to and including encouraging suicide (as certain cases have already shown).
scarabic@lemmy.world 2 hours ago
Over the long term I have significant hopes for AI talk therapy, at least for some uses. Two opportunities stand out:
In some cases I think people will talk to a soulless robot more freely than to a human professional.
Machine learning systems are good at pattern recognition, and that’s one component of diagnosis. One meta-analysis found that LLMs performed about as accurately as physicians, with the exception of expert-level specialists. In time I think it’s undeniable that there is potential here.
FosterMolasses@leminal.space 1 day ago
There’s evidence that a lot of suicide hotlines can be just as bad. You hear awful stories all the time of overwhelmed or fed-up operators taking it out on the caller. There are some real evil people out there. And not everyone has access to a dedicated therapist who wants to help.
whiwake@sh.itjust.works 1 day ago
Real therapy isn’t always better. At least there you can get drugs. But neither are a guarantee to make life better—and for a lot of them, life isn’t going to get better anyway.
kami@lemmy.dbzer0.com 1 day ago
Are you comparing a professional to a text generator?
CatsPajamas@lemmy.dbzer0.com 1 day ago
Real therapy is definitely better than an AI. That said, AIs will never encourage self harm without significant gaming.
Cybersteel@lemmy.world 1 day ago
Suicide is big business. There’s infrastructure readily available to reap financial rewards from the activity, at least in the US.
atmorous@lemmy.world 1 day ago
More so from the corporate proprietary ones, no? At least I hope those are the only cases. The open-source ones suggest really useful things the proprietary ones don’t. Not that I rely on open-source AI, but they are definitely better.
SSUPII@sopuli.xyz 1 day ago
The corporate models are actually much better at this due to having heavy filtering built in. The claim that a model generally encourages self-harm is just a lie, which you can disprove right now by pretending to be suicidal on ChatGPT: it will adamantly push you to seek help.
The filters and safety nets can be bypassed no matter how robust you make them, which is why we got some unfortunate news.
Scolding7300@lemmy.world 1 day ago
Advertise drugs to them perhaps, or some sort of taking advantage. If this sort of data is in the hands of an ad network, that is.
whiwake@sh.itjust.works 1 day ago
It’s never the drugs I want though :(
snooggums@piefed.world 1 day ago
No, no. They want repeat customers!
Scolding7300@lemmy.world 1 day ago
Unless they sell Lifetime deals. Probably cheap on the warranty/support side. If the drug doesn’t work 🤔
Bougie_Birdie@piefed.blahaj.zone 1 day ago
My pet theory: Radicalize the disenfranchised to incite domestic terrorism and further OpenAI’s political goals.
whiwake@sh.itjust.works 1 day ago
What are their political goals?
Bougie_Birdie@piefed.blahaj.zone 1 day ago
Tax breaks for tech bros
wewbull@feddit.uk 1 day ago
Strap explosives to their chests and send them to their competitors?
turdcollector69@lemmy.world 23 hours ago
Convince each one that they alone are the chosen one to assassinate grok and that this mission is all that matters to give their lives meaning.
whiwake@sh.itjust.works 1 day ago
Take that Grok!!