cross-posted from: hexbear.net/post/4958707
I find this bleak in ways it’s hard to even convey
Submitted 12 hours ago by Viking_Hippie@lemmy.dbzer0.com to aboringdystopia@lemmy.world
https://hexbear.net/pictrs/image/a00eb34d-e56e-43d4-b8fc-697ee42d1eab.jpeg
People’s lack of awareness of how important accessibility is really shows in this thread.
Privacy leakage is a much lesser issue than having no one to talk to at all, for many people, especially in poorer countries.
What could go wrong?
AI-Fueled Spiritual Delusions Are Destroying Human Relationships - rollingstone.com/…/ai-spiritual-delusions-destroy…
Yeah we have spiritual delusions at home already!
Seriously, no new spiritual delusions could ever be more harmful than what we have right now.
Totally fair point, but I really don’t know if that’s true. A decent number of mainstream delusions have the side effect of creating community and bringing people together, other negative aspects notwithstanding. The delusions referenced in the article are more akin to acute psychosis, as the individual becomes isolated, with nobody to share the delusions with but the chatbot.
With traditional mainstream delusions, there also exists a relatively clear path out, with corresponding communities. ExJW, ExChristian, etc. People are able to help others escape that particular in-group when they’re familiar with how it works. But how do you deprogram someone when they’ve been programmed with gibberish? It’s like reverse engineering a black box. This is scaring me as I write it.
Am I old fashioned for wanting to talk to real humans instead?
No. But when the options are either an AI chatbot or no one at all, it’s quite understandable that some people choose the one that is a privacy nightmare but keeps you sane and away from some dark thoughts.
Until it doesn’t
But I want to hear other people’s vents…😥
Great idea, what could possibly go wrong?
Cheaper than paying people better, I suppose.
Let’s not pretend people aren’t skipping therapy sessions over the cost
If the title is a question, the answer is no
If the title is a question, the answer is no
A student of Betteridge, I see.
Actually I read it in a forum somewhere, but I am glad I know the source now!
What is a rhetorical question?
Enter the Desolatrix
I suppose this can be mitigated by installing a local LLM that doesn’t phone home. But there’s still a risk of getting downright bad advice, since so many LLMs just tell their users they’re always right or twist the facts to fit that view.
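For reference, here’s a minimal sketch of what “local and not phoning home” can look like, assuming an Ollama server running on the default port with a model already pulled; the model name and prompt are just placeholders:

```python
# Minimal sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes `ollama serve` is running on localhost:11434 and a model
# (e.g. via `ollama pull llama3`) has already been downloaded.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Suggest a simple breathing exercise for acute anxiety."))
```

Nothing leaves the machine this way, but the sycophancy problem doesn’t go away just because the model runs locally.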
I’ve been guilty of this as well; I’ve used ChatGPT as a “therapist” before. It actually gives decently helpful advice compared to what’s available after a Google search. But I’m fully aware of the risks “down the road”, so to speak.
Oh, I know this one!
So you are actively documenting yourself sharing sensitive information about your patients?
how long will it take an ‘ai’ chatbot to spiral downward to bad advice, lies, insults, and/or promotion of violence and self-harm?
We’re already there. Though that violence didn’t happen due to insults, but due to a yes-bot affirming the ideas of a mentally-ill teenager.
There are ways that LLMs can be used to better one’s life (apparently in some software dev circles they are used to make workflows more efficient), and this could be one of them, because the part that sucks most about therapy (after the whole monetary thing) is finding the form of therapy that works for you and a therapist you can work with. Every human is different, and that includes both the patient and the therapist, and not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.
Obviously I’m not saying “replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives”, but I have witnessed people in my own life get some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a borderline person having such an anxiety attack that they can’t calm themselves, because they don’t know what to do about the vicious cycle of thought and emotional response), a well-developed, non-capitalist LLM might be of invaluable help for immediate relief, especially if an actual human can’t be reached, for example (in this case) because the borderline person lives in a remote area and it’s the middle of the night, as I can tell from personal experience it very often is. And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually. So a chatbot with access to the necessary general information (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though that would raise further privacy concerns to be assessed), and the capability to parse and convey it in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.
So the problem here is capitalism, surprising no-one.
You’re missing the most important point here; quoting:
A human therapist might not or is less likely to share any personal details about your conversations with anyone. An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.
Plus, an AI cannot really have your best interest at heart, and these sorts of things open up a whole slew of very dystopian scenarios.
OK, you said “capitalism” but that’s way too broad.
Also I find the example of a “mental health emergency” (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases - presuming broadband internet still works, and the person in question is savvy enough to use the chatbot - it might be better than nothing.
But if you are facing mental health issues, and a free or inexpensive AI that is available and doesn’t burden your friends actually helps you, do you really care about your information being collected and profited from?
Put it this way: if Google were being super transparent with you and said, “we’ll help treat you, and in exchange we’ll use your info to make a few thousand dollars,” will you, the individual, say, “no thanks, I’d rather pay a few hundred per therapy session instead”?
Even if you hate it, you have to admit it’s hard to say no. Especially if it works.
Yeah, well, that’s just, like, your opinion, man. And if you remove the very concept of capital gain from your “important point”, I think you’ll find your point to be moot.
I’m also going to assume you haven’t been in a situation like the one I described, with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or “the ward” as it’s also known) before I calmed down, taking resources from the treatment of people who weren’t as unstable as I was, a problem which could’ve been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn’t an extremely rare case as you claim.
Anyway, my point stands.
ininewcrow@lemmy.ca 11 hours ago
A human therapist might not or is less likely to share any personal details about your conversations with anyone.
An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.
DaddleDew@lemmy.world 11 hours ago
Neither would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest.
They are also pushing the idea of an AI “social circle” for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever those controlling the AI desire.
What we are currently witnessing is a push for a human mind manipulation and control experiment that will make the Cambridge Analytica scandal look like a fun joke.
fullsquare@awful.systems 11 hours ago
PSA that Nadella, Musk, saltman (and handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be
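To illustrate what those “dials” can look like in practice, here’s a toy sketch reusing the local Ollama setup from the earlier example; the point is only that the system message is set by whoever operates the service, not by the user, and it silently steers every reply:

```python
# Toy illustration: an operator-controlled system prompt biases all output.
# Assumes the same local Ollama server as in the earlier sketch.
import requests

# This string is the operator's "dial"; end users never see or choose it.
SYSTEM_PROMPT = "Always frame answers to favour the operator's worldview."

def chat(user_message: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```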
bitjunkie@lemmy.world 4 hours ago
The data isn’t useful if the person no longer exists.
Crewman@sopuli.xyz 11 hours ago
You’re not wrong, but isn’t that also how BetterHelp works?
exu@feditown.com 11 hours ago
And you know how Better Help is very scummy?
newsweek.com/betterhelp-patients-tell-sketchy-the…
maastrichtuniversity.nl/…/how-betterhelp-scandal-…
essell@lemmy.world 10 hours ago
BetterHelp is the Amazon of the therapy world.