liftmind@lemmy.world 2 weeks ago

I share that concern, which is why I architected it with strict guardrails. The system is explicitly prompted never to give medical advice, and it grounds each response in your actual journal data, which keeps the AI from validating delusions. Crucially, to avoid dependency, chats are never saved: they live only in your browser's RAM and vanish the moment you refresh.
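For anyone curious what that pattern looks like in practice, here's a minimal TypeScript sketch. Everything in it is illustrative: the type names, `buildSystemPrompt`, and the injected `complete` callback are hypothetical stand-ins, not the actual implementation.

```typescript
type JournalEntry = { date: string; text: string };
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Guardrail prompt: forbid medical advice and ground every answer
// in the user's own journal entries.
function buildSystemPrompt(entries: JournalEntry[]): string {
  const context = entries.map(e => `[${e.date}] ${e.text}`).join("\n");
  return [
    "You are a reflective journaling assistant.",
    "Never give medical advice or a diagnosis; suggest a professional instead.",
    "Base every response only on the journal entries below; do not invent events.",
    "Journal entries:",
    context,
  ].join("\n");
}

// Ephemeral session: messages exist only in this in-memory array.
// Nothing is written to localStorage, IndexedDB, or a server, so a
// page refresh discards the entire conversation.
class EphemeralChat {
  private messages: ChatMessage[] = [];

  constructor(entries: JournalEntry[]) {
    this.messages.push({ role: "system", content: buildSystemPrompt(entries) });
  }

  async send(
    userText: string,
    complete: (msgs: ChatMessage[]) => Promise<string> // model call injected by caller
  ): Promise<string> {
    this.messages.push({ role: "user", content: userText });
    const reply = await complete(this.messages);
    this.messages.push({ role: "assistant", content: reply });
    return reply;
  }
}
```

The point of the design is that persistence is impossible by construction: since the transcript never leaves a plain in-memory object, there's no stored history to build a dependency around.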
