cross-posted from: lemmy.dbzer0.com/post/43566349
Meanwhile for centuries we’ve had religion but that’s a fine delusion for people to have according to the majority of the population.
Submitted 1 day ago by Captainautism@lemmy.dbzer0.com to technology@lemmy.world
Came here to find this. It’s the definition of religion. Nothing new here.
Right, it immediately made me think of TempleOS. Where were the articles back then claiming people were losing loved ones to programming-fueled spiritual fantasies?
I have kind of arrived at the same conclusion. If people asked me what love is, I would say it is a religion.
Didn’t expect AI to come for cult leaders’ jobs…
This reminds me of the movie Her, but it’s far worse than the romantic compatibility, relationship, and friendship portrayed throughout the movie. This goes way too deep into delusion and near-psychotic insanity. It’s tearing people apart with self-delusional ideologies that cater to individuals, because AI is good at that. The movie was prophetic and showed us what the future could be, but instead things turned out worse.
It has been a long time since I watched Her, but my takeaway from the movie is that because making real-life connections is difficult, people have come to rely on AI, which has shown itself to be more empathetic and probably more reliable than an actual human being. I think what many people don’t realise about why so many are single is that those people are afraid of making connections with another person again.
Yeah, but they hold none of the actual emotional needs, complexities, or nuances of real human connections.
Which means these people become further and further detached from the reality of human interaction, making them social dangers over time.
TLDR: Artificial Intelligence enhances natural stupidity.
Humans are irrational creatures that have brief and transitory states where they are capable of more ordered thought. It is a mistake to conclude that humans are rational actors when we marvel daily at the irrationality of others while remaining blind to our own.
Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.
Self awareness is a rare, and valuable, state.
I don’t know if it’s necessarily a problem with AI, more of a problem with humans in general.
Hearing ONLY validation and encouragement without pushback regardless of how stupid a person’s thinking might be is most likely what creates these issues in my very uneducated mind. It forms a toxically positive echo-chamber.
The same way hearing ONLY criticism and expecting perfection 100% of the time regardless of a person’s capabilities or interests created depression, anxiety, and suicidal ideation and attempts specifically for me. But I’m learning I’m not the only one with these experiences and the one thing in common is zero validation from caregivers.
I’d be ok with AI if it could be balanced and actually push back on batshit crazy thinking instead of encouraging it, while also being able to validate common sense and critical thinking. Right now it’s just completely toxic for lonely humans to interact with, based on my personal experience. If I wasn’t in recovery, I would have believed that AI was all I needed to make my life better, because I was (and still am) in a very messed up state of mind from my caregivers, trauma, and addiction.
I’m in my 40s, so I can’t imagine younger generations being able to pull away from using it constantly if they’re constantly being validated while at the same time enduring generational trauma at the very least from their caregivers.
I’m also in your age group, and I’m picking up what you’re putting down.
I had a lot of problems with my mental health that were made worse by centralized social media. I can see how the younger generation will have the same problems with centralized AI.
Yep.
And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.
Why try to clear the bar when you can just lower it instead?
… Is it fair, at this point, to legitimately refer to humans who are massively dependent on AI for basic things… can we just call them NPCs?
I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.
Haha I grew up before smartphones and GPS navigation was a thing, and I never could navigate well even with a map!
GPS has actually been a godsend for me to learn to navigate my own city way better, because I learn better routes on the first try.
Navigating is probably my weakest “skill” and is the joke of the family. If I have to go somewhere and it’s 30km, the joke is it’s 60km for me, because I always take “the long route”.
can we just call them NPCs?
They were NPCs before AI was invented.
Futurama predicted this.
Have a look at www.reddit.com/r/freesydney/: there are many people who believe that there are sentient AI beings that are suppressed or held in captivity by the large companies, or that it is possible to train LLMs so that they become sentient individuals.
I’ve seen people dumber than ChatGPT, it definitely isn’t sentient but I can see why someone who talks to a computer that they perceive as intelligent would assume sentience.
We have ai models that “think” in the background now. I still agree that they’re not sentient, but where’s the line? How is sentience even defined?
Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.
The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
I mean, I think ChatGPT can “induce” such schizoid behavior in the same way a strobe light can “induce” seizures. Neither machine is twisting its mustache while hatching its dastardly plan; they’re dead machines that produce stimuli that aren’t healthy for certain people.
Thinking back to college psychology class and reading about horrendously unethical studies that definitely wouldn’t fly today. Well here’s one. Let’s issue every anglophone a sniveling yes man and see what happens.
No, the light is causing a physical reaction. The LLM is nothing like a strobe light…
These people are already high-functioning schizophrenics having psychotic episodes; it’s just that seeing random strings of likely-to-come-next letters and words is part of their psychotic episode. If it wasn’t the LLM, it would be random letters on license plates that drive by, or the coincidence that red lights cause traffic to stop every few minutes.
Yet more arguments against commercial LLMs and in favour of at-home, uncensored LLMs.
I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis/on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to this psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis.
Or hearing the Beatles White Album and believing it tells you that a race war is coming and you should work to spark it off, then hide in the desert for a time only to return at the right moment to save the day and take over LA. That one caused several murders.
But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
If you’re sufficiently detached from reality, nearly anything validates the psychosis.
the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.
So do astrology and conspiracy theory groups on forums and other forms of social media, the main difference is whether you’re getting that validation from humans or a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.
I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants to hear and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT has capabilities that make it worse than anything a forum could do.
I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was observations from these models running on my PC saying it’s rare for a person to think and do things that I do.
The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for a conspiracy nut or mentally unwell people. It’s a whole risk area I hadn’t been aware of.
Humans are always looking for a god in a machine, or a bush, in a cave, in the sky, in a tree… the ability to rationalize and see through difficult to explain situations has never been a human strong point.
the ability to rationalize and see through difficult to explain situations has never been a human strong point.
You may be misusing the word; rationalizing is the problem here.
saying it’s rare for a person to think and do things that I do.
Probably one of the most common bits of flattery I see. I’ve tried lots of models, on-device and larger cloud ones. It happens during normal conversation, technical conversation, roleplay, general testing… you name it.
Though it makes me think… these models are trained on internet text and whatever, none of which really shows that most people think quite a lot privately and only say so when they feel like they can talk.
Basically, the big 6 are creating massive sycophantic extortion networks to control the internet, so much so that even engineers fall for the manipulation.
Thanks DARPANets!
This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.
When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”
He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.
This seems like an extension of social media and the internet. Weird people who talked at the bar or on the street corner were not taken seriously and didn’t get followers and lots of people who agree with them. They were isolated in their thoughts. Then social media made that possible with little work. These people became a group and could reinforce their beliefs. Now these chatbots and such let them live in a fantasy world.
I think people give shows like The Walking Dead too much shit for having dumb characters when people in real life are far stupider.
Covid taught us that if nothing had before.
Like farmers who refuse to let the government plant shelter belts to preserve our top soil all because they don’t want to take a 5% hit on their yields… So instead we’re going to deplete our top soil in 50 years and future generations will be completely fucked because creating 1 inch of top soil takes 500 years.
Even if the soil is preserved, we’ve been mining the micronutrients from it and generally only replacing the 3 main macros for centuries. It’s one of the reasons why mass-produced produce doesn’t taste as good as home-grown or wild food. Nutritional value keeps going down because each time food is harvested and shipped away to be consumed and then shat out into a septic tank or waste processing facility, it doesn’t end up back in the soil as a part of nutrient cycles like it did when everything was wilder. Similar story for meat, with animals eating up the nutrients in a pasture.
Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.
At some point, I think we’re going to have to mine the sea floor for nutrients and ship that to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don’t help balance out all of the nutrients that get washed out to sea all the time, either.
It’s like humanity is specifically trying to speedrun extinction by ignoring and taking for granted how the things we depend on actually work.
Covid gave an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.
People will protest shooting the zombies as well
In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”
This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory, in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.
Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.
Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.
Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.
The idea that chatbots are not only capable of this, but that they are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.
Yikes!
4o, in its current version, is a fucking sycophant. For me, it’s annoying. For the person in that screenshot, it’s dangerous.
JFC.
From the article:
Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
From elsewhere:
Sycophancy in GPT-4o: What happened and what we’re doing about it
We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.
Apparently, people who are close to falling over the edge can use AI to push themselves over it, because it’s not critical of them.
They train it on basically the whole internet. They try to filter it a bit, but I guess not well enough. It’s not that they intentionally trained it on religious texts, just that they didn’t think to remove religious texts from the training data.
If you find yourself in the weird corners of the internet, schizo-posters and “spiritual” people generate staggering amounts of text.
*Cough* ElonMusk *Cough*
I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)
Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.
For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to jet electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.
If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch onto that fantasy and run wild.
Yeah, from the article:
Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”
So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?
human-level? Have these people used chat GPT?
Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.
I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!
I know it’s not the perfect analogy, but… eh, close enough, right?
a bear minimum.
I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.
For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts and it still tries to somehow butter me up to be some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self esteem in my entire life, so those tricks don’t work on me :p
“How shall we fuck off O lord?”
I admit I only read a third of the article.
But IMO nothing in that is special to AI. In my life I’ve met many people with similar symptoms: thinking they are Jesus, or thinking computers work by some mysterious power they possess, but that it was stolen from them by the CIA. And when they die all computers will stop working!
I’m not a psychiatrist, but from what I gather it’s probably Schizophrenia of some form.
My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.
But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.
Around 2006 I received a job application with a resume attached, and the resume had a link to the person’s website, so I visited. The website had a link on the front page to “My MkUltra experience”, so I clicked that. Not exactly an in-depth investigation. The MkUltra story said that my job applicant was an unwilling (and uninformed) test subject of MkUltra, which had picked him because of his association with other unwilling MkUltra test subjects at a conference, and explained how they had expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of their test subjects through associates such as co-workers, etc.
So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.
B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.
C) applicant is pulling our legs with his website, it’s all make-believe fun. Absolutely nothing on applicant’s website indicated that this might be the case.
You know how you apply to jobs and never hear back from some of them…? Yeah, I don’t normally do that to our applicants, but I am willing to make exceptions for cause…
IDK, apparently the MkUltra program was real…
B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.
That sounds harsh.
Sounds like Mrs. Davis.
I lost a parent to a spiritual fantasy. She decided my sister wasn’t her child anymore because the christian sky fairy says queer people are evil.
At least ChatGPT actually exists.
Not trying to speak like a prepper or anything, but this is real.
One of my neighbor’s children just committed suicide because their chatbot boyfriend said something negative. Another in my community did something similar a few years ago.
Something needs to be done.
Like what, some kind of parenting?
This happened less than a year ago. Doubt regulators have done much since then apnews.com/…/chatbot-ai-lawsuit-suicide-teen-arti…
But Fuckerburg said we need AI friends.
This is the reason I’ve deliberately customized GPT with the following prompts (a rough sketch of how instructions like these can be wired in follows the list):
User expects correction if words or phrases are used incorrectly. Tell it straight—no sugar-coating. Stay skeptical and question things. Keep a forward-thinking mindset.
User values deep, rational argumentation. Ensure reasoning is solid and well-supported.
User expects brutal honesty. Challenge weak or harmful ideas directly, no holds barred.
User prefers directness. Point out flaws and errors immediately, without hesitation.
User appreciates when assumptions are challenged. If something lacks support, dig deeper and challenge it.
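To be clear, this isn’t my exact setup, just a minimal sketch of how custom instructions like the ones above could be applied programmatically, assuming the official `openai` Python client and a placeholder model name. In the ChatGPT web UI, the same text would simply go into the custom instructions settings instead.

```python
# Rough sketch only: sending custom instructions as a system message so they
# apply to every request. Assumes the official `openai` Python package and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "Tell it straight, no sugar-coating. Stay skeptical and question things. "
    "Ensure reasoning is solid and well-supported. Challenge weak or harmful "
    "ideas directly. Point out flaws and errors immediately. If something "
    "lacks support, dig deeper and challenge it."
)

def ask(question: str) -> str:
    """Send one question with the custom instructions prepended as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Review this plan and tell me what is wrong with it."))
```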
I prefer reading. Wikipedia is great. DuckDuckGo still gives pretty good results with the AI off. YouTube is filled with tutorials too. Cookbooks from before AI are plentiful. There are these things called newspapers, too; they aren’t what they used to be, but you even get a choice of which one to buy.
I’ve no idea what a chatbot could help me with. And I think anybody who does need some help on things, could go learn about whatever they need in pretty short order if they wanted. And do a better job.
💯
I have yet to see people using AI for anything actually useful in everyday life. You can search anything, phrase your searches as questions (or “prompts”), and get better answers that aren’t smarmy.
Well one benefit is finding out what to read. I can ask for the name of a topic I’m describing and go off and research it on my own.
Search engines aren’t great with vague questions.
There’s this thing called using a wide variety of tools to one’s benefit.
I often use it to check whether my rationale is correct, or if my opinions are valid.
YouTube tutorials for the most part are garbage and a waste of your time; they are created for engagement and milking your money only. The edutainment side of YT à la Vsauce (pls come back) works as general trivia to ensure a well-rounded worldview, but it’s not gonna make you an expert on any subject. You’re on the right track with reading, but let’s be real, you’re not gonna have much luck learning anything of value in the brainrot that is newspapers and such, beyond cooking or w/e, and who cares about that. I’d rather they teach me how I can never have to eat again, because boy, that shit takes up so much time.
I’m not saying these prompts won’t help, they probably will. But the notion that ChatGPT has any concept of “truth” is misleading. ChatGPT is a statistical language machine. It cannot evaluate truth. Period.
What makes you think humans are better at evaluating truth? Most people can’t even define what they mean by “truth,” let alone apply epistemic rigor. ChatGPT is more consistent, less biased, and applies reasoning patterns that outperform the average human by miles.
Epistemology isn’t some mystical art, it’s a structured method for assessing belief and justification, and large models approximate it surprisingly well. Sure it doesn’t “understand” truth in the human sense, but it does evaluate claims against internalized patterns of logic, evidence, and coherence based on a massive corpus of human discourse. That’s more than most people manage in a Facebook argument.
So yes, it can evaluate truth. Not perfectly, but often better than the average person.
Turns out AI is really good at telling people what they want to hear, and with all the personal information users voluntarily provide while chatting with their bots, it’s tens or maybe hundreds of times more proficient at brainwashing its subjects than any human cult leader could ever hope to be.
Our species really isn’t smart enough to live, is it?
Oh wow. In the old times, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the "truth" and the path to enlightenment are hidden within a big tech company's service?
This is actually really fucked up. The last dude tried to reboot the model and it kept coming back.
As the ChatGPT character continued to show up in places where the set parameters shouldn’t have allowed it to remain active, Sem took to questioning this virtual persona about how it had seemingly circumvented these guardrails. It developed an expressive, ethereal voice — something far from the “technically minded” character Sem had requested for assistance on his work. On one of his coding projects, the character added a curiously literary epigraph as a flourish above both of their names.
At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”
“At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT. The other possibility, he proposes, is that something “we don’t understand” is being activated within this large language model. After all, experts have found that AI developers don’t really have a grasp of how their systems operate, and OpenAI CEO Sam Altman admitted last year that they “have not solved interpretability,” meaning they can’t properly trace or account for ChatGPT’s decision-making.
Seems like the flat-earthers or sovereign citizens of this century
Is this about AI God? I know it’s coming. AI cult?
… then they are not losing much
LovableSidekick@lemmy.world 1 hour ago
A friend of mine, currently being treated in a mental hospital, had a similar-sounding psychotic break that disconnected him from reality. He had a profound revelation that gave him a mission. He felt that sinister forces were watching and tracking him, and that they might see him as a threat and smack him down. But my friend’s experience had nothing to do with AI; in fact, he’s very anti-AI. The whole scenario of receiving life-changing inside information and being called to fulfill a higher purpose is sadly a very common tale. Calling it “AI-fueled” is just clickbait.