It’s bad precisely because the bot always agree with you, they are all made like that
Perspectivist@feddit.uk 1 day ago
One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.
I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.
arararagi@ani.social 1 day ago
aceshigh@lemmy.world 1 day ago
It doesn’t always agree with me. We’re in an impasse about mentoring. I keep telling it I’m not interested, it keeps telling me that given my traits i will be but I’m just not ready yet.
truthfultemporarily@feddit.org 1 day ago
It’s not better than nothing - it’s worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.
Try this, instead of asking “I am thinking xyz”, ask " my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.
bob_lemon@feddit.org 19 hours ago
I’m fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.
I’m also very confident that there’ll be lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there’s a ton of regulation and oversight.
truthfultemporarily@feddit.org 18 hours ago
I’m not sure LLMs can do this. The reason is context poisoning. There would need to be an overseer system of some kind.
proceduralnightshade@lemmy.ml 20 hours ago
So we know that in certain cases, using chatbots as a substitute for therapy can lead to increased suffering, increases risk of harm to self and others, and amplifies symptoms of certain diagnosis. Does this mean we know it couldn’t be helpful in certain cases? No. You ingested the exact same logic corpos have with LLMs, which is “just throw it at everything”, and you seem to not notice you apply it the same way they do.
We might have enough data at some point to assess what kinds of people could benefit from “chatbot therapy” or something along those lines. Don’t get me wrong, I’d prefer we could provide more and better therapy/healthcare in general to people, and that we had less systemic issues for which therapy is just a bandage.
Yes, in total. But not necessarily in particular. That’s a big difference.
truthfultemporarily@feddit.org 18 hours ago
If you have a drink that creates a nice tingling sensation in some people and make other people go crazy, the only sane thing to do is to take that drink off the market.
proceduralnightshade@lemmy.ml 13 hours ago
Yeah but that applies to social media as well. Or, idk, amphetamine. Or fucking weed. Even meditation. Which are all still there, some more regulated than others. But that’s not what you’re getting at, your point is AI chatbots = bad and I just don’t agree with that.