True, but that also depends on the circumstance.
Again, a lot of people now use LLMs as their primary search engine. Google is an afterthought; ChatGPT is their source of choice. If they ask a simple question with legal or medical implications, one covered by tons of sources, and the LLM answers with the same accuracy as those other publications, should its maker be sued?
I think it would be a lot better to allow people to sue if it provides false advice that ends up causing some material harm, because at the end of the day, a lot of stuff can be considered “medical.”
Maybe a trans person asks what gender affirming care is. Is that medical? I’d say it is. Should that not get discussed through an LLM if a person wants to ask it?
I’m not saying I wholeheartedly oppose banning them from giving this type of advice. But I do question how many people it would actually protect, versus simply cutting people off from information they might not bother to look up elsewhere, or worse, pushing them toward fringe, less reputable sites with fewer safeguards and less accountability.
frongt@lemmy.zip 2 days ago
Which, to be fair, is no different from a lawyer. They’re not perfect either.
The difference is that a lawyer can be held responsible for malpractice. When a chatbot gives harmful advice, who is responsible?
(Obviously it should be whoever is running it, but so far that hasn’t been established in court.)