Comment on [deleted]
mushroommunk@lemmy.today 2 weeks ago
AI chatbots have been shown to hurt people with mental health issues. They lean toward sycophancy inherently, both through their built-in system prompts and possibly through the training data itself.
I appreciate the spirit of trying to help your fellow man, I truly do. But this is more dangerous than it's worth.
liftmind@lemmy.world 2 weeks ago
I share that concern, which is why I architected it with strict guardrails. The system is explicitly prompted to never give medical advice and uses your actual journal data to ground the response, preventing the AI from validating delusions. Crucially, to avoid dependency, chats are never saved: they live in your browser's RAM and vanish the instant you refresh.
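Roughly, that ephemeral, grounded design looks like this (a minimal sketch with placeholder names and a hypothetical `/api/chat` endpoint, not the actual code):

```typescript
// Simplified sketch of the ephemeral chat flow. All names are placeholders.
// The chat history lives only in a module-level variable, so a page refresh wipes it.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Guardrail: the system prompt forbids medical advice and pins the model
// to the user's own journal entries.
function buildSystemPrompt(journalEntries: string[]): ChatMessage {
  return {
    role: "system",
    content: [
      "You are a reflective journaling assistant.",
      "Never give medical advice, diagnoses, or treatment suggestions.",
      "Only reference things the user actually wrote in the entries below:",
      ...journalEntries.map((entry, i) => `Entry ${i + 1}: ${entry}`),
    ].join("\n"),
  };
}

// Held in RAM only -- never written to localStorage or a server.
let sessionHistory: ChatMessage[] = [];

async function sendMessage(userText: string, journalEntries: string[]): Promise<string> {
  sessionHistory.push({ role: "user", content: userText });

  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [buildSystemPrompt(journalEntries), ...sessionHistory],
    }),
  });

  const { reply } = await response.json();
  sessionHistory.push({ role: "assistant", content: reply });
  return reply;
}
```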
mushroommunk@lemmy.today 2 weeks ago
Guardrails only go so far when the underlying system is inherently flawed. AI is inherently flawed. A single hallucination could kill someone vulnerable, and you can't prompt your way out of that.