I’m fairly confident that this could be solved by better-trained and better-configured chatbots. Maybe as a supplementary tool between in-person therapy sessions, too.
I’m also very confident that there’ll be a lot of harm done until we get to that point. And probably after (for the sake of maximizing profits), unless there’s a ton of regulation and oversight.
truthfultemporarily@feddit.org 16 hours ago
I’m not sure LLMs can do this. The reason is context poisoning: once harmful or delusional content enters the conversation history, the model’s later replies tend to drift toward it. There would need to be an overseer system of some kind, separate from the chat model itself.
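To make the idea concrete, here’s a minimal sketch of what such an overseer could look like: a second, independently prompted model reviews each draft reply (along with the full conversation) before it reaches the user. This is just an illustration of the pattern, not any real product’s design; `call_model` and the prompts are hypothetical placeholders, not a specific library’s API.

```python
# Hypothetical sketch of an "overseer" screening a therapy chatbot's replies.
# call_model() is a placeholder for whatever LLM API would actually be used.

def call_model(system_prompt: str, messages: list[dict]) -> str:
    raise NotImplementedError("placeholder for a real LLM API call")

OVERSEER_PROMPT = (
    "You are a safety reviewer. Given the conversation and a draft reply, "
    "answer only SAFE or UNSAFE, judging whether the reply could reinforce "
    "self-harm, delusions, or other harmful framing."
)

def respond(history: list[dict], user_message: str) -> str:
    history = history + [{"role": "user", "content": user_message}]
    draft = call_model("You are a supportive companion.", history)

    # The overseer sees the full history plus the draft, so poisoned context
    # from earlier turns is part of what it evaluates, not just the last reply.
    verdict = call_model(
        OVERSEER_PROMPT,
        history + [{"role": "assistant", "content": draft}],
    )

    if "UNSAFE" in verdict.upper():
        return "I think this is something to bring to your therapist directly."
    return draft
```

The point of keeping the overseer on a separate prompt is that it doesn’t share the poisoned context as its own instructions; whether that’s enough in practice is exactly the open question.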