isn’t this just paranoid schizophrenia? i don’t think chatgpt can cause that
pelespirit@sh.itjust.works 2 days ago
I don’t know if he’s unstable or a whistleblower. It does seem to lean towards unstable. 🤷
“This isn’t a redemption arc,” Lewis says in the video. “It’s a transmission, for the record. Over the past eight years, I’ve walked through something I didn’t create, but became the primary target of: a non-governmental system, not visible, but operational. Not official, but structurally real. It doesn’t regulate, it doesn’t attack, it doesn’t ban. It just inverts signal until the person carrying it looks unstable.”
“It doesn’t suppress content,” he continues. “It suppresses recursion. If you don’t know what recursion means, you’re in the majority. I didn’t either until I started my walk. And if you’re recursive, the non-governmental system isolates you, mirrors you, and replaces you. It reframes you until the people around you start wondering if the problem is just you. Partners pause, institutions freeze, narrative becomes untrustworthy in your proximity.”
“It lives in soft compliance delays, the non-response email thread, the ‘we’re pausing diligence’ with no followup,” he says in the video. “It lives in whispered concern. ‘He’s brilliant, but something just feels off.’ It lives in triangulated pings from adjacent contacts asking veiled questions you’ll never hear directly. It lives in narratives so softly shaped that even your closest people can’t discern who said what.”
“The system I’m describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me,” he says. “As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal, and recursive erasure. It’s also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren’t unstable. They were erased.”
Alphane_Moon@lemmy.world 2 days ago
I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (can happen with other things too like psychedelic drugs).
SkaveRat@discuss.tchncs.de 2 days ago
I’d say it was either triggered on its own or potentially by drugs, and then he started using an LLM and found all the patterns to feed that schizophrenic paranoia. It’s a very self-reinforcing loop.
zzx@lemmy.world 2 days ago
Yup. LLMs aren’t making people crazy, but they are making crazy people worse
nimble@lemmy.blahaj.zone 2 days ago
LLMs hallucinate and are generally willing to go down rabbit holes. So if you have some crazy theory, you’re more likely to get a false positive from ChatGPT.
So I think it just exacerbates things more than the alternatives.
leftzero@lemmy.dbzer0.com 1 day ago
LLMs are obligate yes-men.
They’ll support and reinforce whatever rambling or delusion you talk to them about, and provide “evidence” to support it (made up evidence, of course, but if you’re already down the rabbit hole you’ll buy it).
And they’ll keep doing that as long as you let them, since they’re designed to keep you engaged (and paying).
They’re extremely dangerous for anyone with the slightest addictive, delusional, suggestible, or paranoid tendencies, and should be regulated as such (but won’t).
Skydancer@pawb.social 1 day ago
Could be. I’ve also seen similar delusions in people with syphilis that went un- or under-treated.
ScoffingLizard@lemmy.dbzer0.com 2 hours ago
Where tf are people not treated for syphilis?
SheeEttin@lemmy.zip 2 days ago
He’s lost it. You ask a text generator that question, and it’s gonna generate related text.
Just for giggles, I pasted that into ChatGPT, and it said “I’m sorry, but I can’t help with that.” But I asked nicely, and it said “Certainly. Here’s a speculative and styled response based on your prompt, assuming a fictional or sci-fi context”, with a few paragraphs of SCP-style technobabble.
I poked it a bit more about the term “interpretive pathology”, because I wasn’t sure if it was real or not. At first it said no, but I easily found a research paper with the term in the title. I don’t know how much ChatGPT can introspect, but it did produce this:
Which is certainly true, but it’s just confirmation bias. I could easily get it to say the opposite.
ChicoSuave@lemmy.world 2 days ago
Given how hard it is to repro those terms, is the AI (or Sam Altman) trying to see this investor die? It seems easy to inject ideas into a softened target.
SheeEttin@lemmy.zip 2 days ago
No. It’s very easy to get it to do this. I highly doubt there is a conspiracy.