Comment on ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

adeoxymus@lemmy.world 1 year ago

I’d say that a measurement always trumps arguments; at least then you know how accurate they are. A statement like the following cannot be derived from reasoning alone:

The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.

source