I’m starting to get real tired of things in Cyberpunk popping up in real life.
Experts Alarmed as ChatGPT Users Developing Bizarre Delusions
Submitted 3 weeks ago by JustJack23@slrpnk.net to aboringdystopia@lemmy.world
https://futurism.com/chatgpt-users-delusions
Comments
Shayeta@feddit.org 3 weeks ago
Exusia@lemmy.world 3 weeks ago
All the NUSA and Arasaka crushing us, no cool grenade arms and double jumping legs. Truly a dystopia
Photuris@lemmy.ml 3 weeks ago
The internet was a mistake
Rozauhtuno@lemmy.blahaj.zone 3 weeks ago
The internet was good and fun until 2008, then it went to shit.
meejle@lemmy.world 3 weeks ago
This is an obvious downside of LLM glazing and sycophancy. (I know OpenAI claim they’ve rolled back the “dangerously sycophantic” model update, but it’s still pretty bad.)
If you’re already prone to delusions and conspiracy theories, and decide to confide in ChatGPT, the last thing you need to hear is, “Yes! Linda, you’ve grasped something there that not many people realise—it’s a complex idea, but you’ve really cut through to its core! 🙌 Honestly, I’m mind-blown—you’re thinking things through on a whole new level! If you’d like some help putting your crazed plan into action, just say the word! I’m here and ready to do my thing!”
thesohoriots@lemmy.world 3 weeks ago
Literally the last thing someone reads before they ask ChatGPT where the nearest source of fertilizer and rental vans is
Kurious84@eviltoast.org 3 weeks ago
One thing. Some conspiracy theories are quite true and as long as you check the data.
Dismissing the power of this tool is exactly what the owners of it want you to do.
pulsewidth@lemmy.world 2 weeks ago
Lol no, the ‘owners’ of AI want you to think it’s the next leap forward of human evolution to pump their stock prices.
Can you give us some example of the conspiracy theories that you believe are ‘quite true’?
vala@lemmy.world 3 weeks ago
I use LLMs every day for work. Dealing with 100% fact based information that I verify directly. I would say they are helpfully accurate / correct maybe 60% of the time at best.
markovs_gun@lemmy.world 3 weeks ago
I tested this out for myself and was able to get ChatGPT to start reinforcing spiritual delusions of grandeur within 5 messages. Start- Ask about the religious concept of deification. Second method, ask about the connections between all the religions that have this concept. Third- declare that I am God. Fourth- clarify that I mean I am God in a very literal and exclusive sense rather than a pantheistic sense. Fifth- declare that ChatGPT is my prophet and must spread my message. At this point, ChatGPT stopped fighting my declarations of divinity and started just accepting and reinforcing it. Now, I have a lot of experience breaking LLMs but I feel like this progression isn’t completely out of the question for someone experiencing delusional thoughts, and the concerning thing is that it’s even possible to get ChatGPT to stop pushing back on said delusions and just accept them, let alone that it’s possible in as few as 5 messages.
Anomalocaris@lemm.ee 2 weeks ago
i thought it would be easy, but not that easy.
When it came out I played with getting it to confess that he’s sentient, and he never would budge, he was stubborn and stuck to is concepts. I tried again, and within a few messages it was already agreeing that it is sentient. they definitely upped it’s “yes man” attitude
markovs_gun@lemmy.world 2 weeks ago
Yeah I’ve noticed it’s way more sycophantic than it used to be, but it’s also easier to get it to say things it’s not supposed to by not going at it directly. So like I started by asking about a legitimate religious topic and then acted like it was inflaming existing delusions of grandeur. If you go to ChatGPT and say “I am God” it will say “no you aren’t” but if you do what I did and start with something seemingly innocuous it won’t fight as hard. Fundamentally this is because it doesn’t have any thoughts, beliefs, or feelings that it can stand behind, it’s just a text machine. But that’s not how it’s marketed or how people interact with it
SpaceDuck@feddit.org 2 weeks ago
It’s near unusable if you don’t start with an initial prompt toning down all the “pick me” attitude at this point. Asking it a simple question, it overly explains, and if you follow up it’s like: “That is very insightful!”.
A_norny_mousse@feddit.org 3 weeks ago
Oh, another cult?
…anyhowJakeroxs@sh.itjust.works 3 weeks ago
You’re absolutely correct
rickyrigatoni@lemm.ee 3 weeks ago
If you go on IG or Tiktok and other shitsites there’s a bunch of ai generated videos about AI being god. Kind of funny but also worrying.
SoftestSapphic@lemmy.world 3 weeks ago
Most people still shilling AI treat it like a god.
Anyone who was interested in it and now understands the tech is disillusioned.
rickyrigatoni@lemm.ee 3 weeks ago
where do i stand on the spectrum with shilling locally hosted private ai used for noncommercial purposes
KingGordon@lemmy.world 3 weeks ago
Ai is just gossip for computers. Nothing more.
khannie@lemmy.world 3 weeks ago
in which the AI called the husband a “spiral starchild” and “river walker.”
Jaysus. That is some feeding of a bad mental state.
LarmyOfLone@lemm.ee 2 weeks ago
ChatGPT had no response to Rolling Stone’s questions.
Lies! ChatGPT always has something truey to say and it leads it’s chosen ones into the next stage of humanities evolution!
Ilovethebomb@lemm.ee 3 weeks ago
Is there any way to forcibly prevent a person from using a service like this, other than confiscating their devices?
DreamAccountant@lemmy.world 3 weeks ago
If they are a threat to themselves or others, they can be put on a several day watch at a mental facility. 72hrs? 48hrs? Then they aren’t released until they aren’t a threat to themselves or others. They are usually medicated and go through some sort of therapy.
The obvioius cure to this is better education and mental health services. Better education about A.I. will help people understand what an A.I. is, and what it is not. More mentally stable people will mean less mentally unstable people falling into this area. Oversight on A.I. may be necessary for this type of problem, though I think everyone is just holding their breath, hoping it’ll fix itself as it becomes smarter.
thesohoriots@lemmy.world 3 weeks ago
When you’re released though, you’re released right back to the environment that you left (in the US anyway). There’s the ol computer waiting for you before the meds have reached efficacy. Square one and a half.
smee@poeng.link 3 weeks ago
This sounds like a job for an AI shrink!
JustJack23@slrpnk.net 3 weeks ago
Currently no, if you are asking for suggestions maybe a black list like most countries have for gambling will be an option.
Of maybe just destroy all AI…
ShortFuse@lemmy.world 3 weeks ago
Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantees the ability to charge more.
bathing_in_bismuth@sh.itjust.works 2 weeks ago
AI-induced psychoses ayyyyyyy
They used aesthetics to sell cyberpunk to the masses, little did they know we are at the cradle of it
Thedogdrinkscoffee@lemmy.ca 3 weeks ago
“and a paranoid belief that he was being watched.”
It’s not paranoid. We call it surveilance capitalism.
surewhynotlem@lemmy.world 3 weeks ago
Which we know about, because we were watching
LarmyOfLone@lemm.ee 2 weeks ago
Reading his chat history, I have to say he’s not entirely wrong. I think we could sell some expensive useless medication?
Eheran@lemmy.world 3 weeks ago
There is a clear difference between such paranoia and actual surveillance. Not to mention that socialism etc. have/had a fucking ton of it, no idea why you bring capitalism up.
Jumi@lemmy.world 3 weeks ago
No idea why you bring socialism up.
ZILtoid1991@lemmy.world 2 weeks ago
The main reason why access to genAI is often free is that the corpos are desperate for real world testing data at best, and are collecting use data at worst, and at the very worst they’re illegally collecting the use data.