Comment on ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

jvisick@programming.dev ⁨1⁩ ⁨year⁩ ago

I don’t think it’s good enough to have a blanket notion of not trusting them completely.

If anything, I actually think we should, as a rule, not trust the output of an LLM at all.

They’re great for generative purposes, but I don’t think there’s a single valid case where the accuracy of their responses should be trusted outright. Any information you get from an AI model should be independently validated.

There are many cases where a simple once-over from a human is good enough, but any time it tells you something you didn’t already know, you should not trust it; if you want to rely on that information, you should verify that it’s accurate.
