I’d say a measurement always trumps an argument: at least then you know how accurate it is. A statement like the one below couldn’t come from reasoning alone:
The JAMA study found that 12.5% of ChatGPT’s responses were “hallucinated,” and that the chatbot was most likely to present incorrect information when asked about localized treatment for advanced diseases or immunotherapy.
zeppo@lemmy.world 1 year ago
That’s useful. It’s also worth noting that what the chatbot can relay depends heavily on the data used to train the model, so those numbers could change with newer versions.