Comment on [deleted]
liftmind@lemmy.world 2 weeks ago
I share that concern, which is why I architected it with strict guardrails. The system is explicitly prompted to never give medical advice and uses your actual journal data to ground the response, preventing the AI from validating delusions. Crucially, to avoid dependency, chats are never saved; they live in your browser’s RAM and vanish instantly when you refresh.
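For what it's worth, here is a minimal sketch of what that setup could look like in a browser-side TypeScript app: a guardrail system prompt, grounding on the user's own journal entries, and chat history kept only in a module-level variable so nothing survives a refresh. The names (`buildSystemPrompt`, `sendMessage`, `callModel`, `chatHistory`) are hypothetical, not the project's actual code.

```typescript
// Hypothetical sketch of the described approach: guardrail system prompt,
// grounding on the user's own journal entries, and chat state held only in memory.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Chat history lives only in this module-level variable (browser RAM).
// Nothing is written to localStorage, IndexedDB, or a server, so a page
// refresh discards the conversation entirely.
let chatHistory: ChatMessage[] = [];

// Guardrail prompt: forbid medical advice and restrict answers to the
// journal excerpts supplied as grounding context.
function buildSystemPrompt(journalExcerpts: string[]): string {
  return [
    "You are a reflective journaling assistant.",
    "Never give medical, diagnostic, or treatment advice.",
    "Base every response only on the journal entries below; if they do not",
    "support a claim, say so rather than speculating.",
    "--- JOURNAL ENTRIES ---",
    ...journalExcerpts,
  ].join("\n");
}

// callModel stands in for whatever inference endpoint the app actually uses.
async function sendMessage(
  userText: string,
  journalExcerpts: string[],
  callModel: (messages: ChatMessage[]) => Promise<string>,
): Promise<string> {
  const messages: ChatMessage[] = [
    { role: "system", content: buildSystemPrompt(journalExcerpts) },
    ...chatHistory,
    { role: "user", content: userText },
  ];
  const reply = await callModel(messages);
  // Keep the exchange in memory only; it disappears on refresh.
  chatHistory.push({ role: "user", content: userText });
  chatHistory.push({ role: "assistant", content: reply });
  return reply;
}
```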
mushroommunk@lemmy.today 2 weeks ago
Guardrails only go so far when the underlying system is inherently flawed. AI is inherently flawed. A hallucination itself could kill someone vulnerable, and you can’t prompt your way out of that.