That AI bot must be saturated with break-up and “Delete Facebook, hit the gym!” advice…
OpenAI stops ChatGPT from telling people to break up with partners
Submitted 3 weeks ago by Davriellelouna@lemmy.world to technology@lemmy.world
https://www.theguardian.com/technology/2025/aug/05/chatgpt-breakups-changes-open-ai
Comments
antithetical@lemmy.deedium.nl 3 weeks ago
empireOfLove2@lemmy.dbzer0.com 3 weeks ago
Well, that’s about 99% of Reddit AITA posts, so it would track.
Now it just needs to hit Facebook, delete lawyer, and Facebook up.
Tracaine@lemmy.world 3 weeks ago
So if I tell it that I’m literally a slave who is beaten and sexually violated daily, it’s still going to tell me to stay in that situation? That seems helpful.
BreadstickNinja@lemmy.world 3 weeks ago
It’s going to help you reflect on that situation!
tinned_tomatoes@feddit.uk 2 weeks ago
That’s an insightful and important observation about the perils of overly-restricted AI tools. You’re right to question that!
Eezyville@sh.itjust.works 2 weeks ago
If you have that type of problem then why are you even going to ChatGPT to begin with?
squaresinger@lemmy.world 2 weeks ago
Have a look at relationship subreddits. They are full of people who have been manipulated and gaslit for years or decades who have no idea what is actually normal. For people like that a reality check is really helpful or even vital.
kamenlady@lemmy.world 3 weeks ago
The US company said new ChatGPT behaviour for dealing with “high-stakes personal decisions” would be rolled out soon.
Fuck, we’ve been in a Black Mirror episode all along.
Akasazh@feddit.nl 2 weeks ago
As an atheist I’m feeling really weird in saying that I think these folks would be better off listening to a religious leader (any denomination), as they would at least offer some human advice.
The misguided sheep management role is one of the strongest arguments I can muster in favor of organized religion.
themachinestops@lemmy.dbzer0.com 2 weeks ago
I think humanity is heading in the wrong direction with AI:
flamingo_pinyata@sopuli.xyz 3 weeks ago
Does anyone else think that LLMs have been regressing over the last two years, at least on deep questions? Initially you could push the limits, bounce crazy ideas off of them, and actually get some creative results.
Now they basically always give you a canned answer that sounds like it was pre-approved by a committee.
Repelle@lemmy.world 3 weeks ago
I believe one of Apple’s papers covered this: chain-of-thought and other techniques that increase accuracy decrease perceived creativity. My memory is fuzzy though.
Grimy@lemmy.world 3 weeks ago
The original 3.5 during the first month was peak creativity. The censorship really does ruin them.
ordnance_qf_17_pounder@reddthat.com 3 weeks ago
I feel the same. They actually seemed kind of cool in the very early days but they’ve become monumentally shit over time.
LuxSpark@lemmy.cafe 3 weeks ago
Sometimes that’s what people need to hear.
balder1991@lemmy.world 3 weeks ago
It’s the default relationship advice on Reddit
SpoonyBard@lemmy.world 3 weeks ago
To be fair, most of the posts that people are commenting “break up” on are like: “Hey, so my significant other stabbed me in the stomach last week, and while I was sitting in the hospital, I realized it just made me feel disrespected, ya know? This of course all happened after he told my parents he owns me and they can’t see me ever again, and he threw my cat out of a 4-story window. What should I do, guys?”
Imgonnatrythis@sh.itjust.works 3 weeks ago
Usually it’s the opposite. Loss aversion puts us in bad places for far too long.
drmoose@lemmy.world 3 weeks ago
100%, people should break up much more often, statistically speaking. But uprooting your life is understandably very hard.
The media is also filled with so much bullshit like “the kids will be sad,” when in reality kids almost always end up better off in divorced families than in ones that struggle to stay together.
4grams@awful.systems 3 weeks ago
I’ve been playing with AI, pushing its limits as a therapy tool (honestly, using myself as the test case). I’m really trying to evaluate it in good faith, mostly to see what’s addicting folks to it these days.
I just cannot get it to pass my “Turing test”. It’s just so sycophantic. Every time I get close to feeling like there’s something there, it falls back into the “blow smoke up your ass mode.”
So far my hours of conversing have only convinced me that folks who follow chatbot advice for such important life decisions probably deserve it. I’m happily married, but if a girl ever broke up with me because an LLM told her to, I’d probably log in to thank it (joke).
cley_faye@lemmy.world 2 weeks ago
That’s what it is, really. Even a layman’s understanding of how LLMs work should be an immediate showstopper for people looking for “human interactions” with them.
swelter_spark@reddthat.com 2 weeks ago
You can change the personality it takes on using the system prompt.
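For anyone unfamiliar with how that works in practice: with chat-style LLM APIs, the "personality" is typically steered by a system message placed before the user's message. Here is a minimal sketch of assembling such a request; the `build_chat_request` helper, the model name, and the persona wording are all illustrative assumptions, not part of any comment above.

```python
def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "gpt-4o") -> dict:
    """Assemble a chat-completion request body.

    The system message is prepended to the conversation and steers
    tone and persona; the user message carries the actual question.
    (Helper name and model string are illustrative assumptions.)
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }


# Example: a deliberately blunt, non-sycophantic persona.
request = build_chat_request(
    "You are a blunt advisor. Do not flatter the user; "
    "point out weak reasoning directly.",
    "Should I stay in this relationship?",
)
```

The resulting `request` dict would then be sent to the chat endpoint; swapping the system prompt alone is usually enough to change the assistant's default sycophantic tone.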