It’s not merely a preconception. It’s a rather obvious and well-known limitation of these systems. What I am decrying is that some people, out of apparent ignorance, think things like “ChatGPT can give a reliable cancer treatment plan!” or “here, I’ll have it write a legal brief and not even check it for accuracy”. But sure, I agree with you, minus the needless sarcasm. It’s useful to prove or disprove even absurd hypotheses.
adeoxymus@lemmy.world 1 year ago
I’d say that a measurement always trumps an argument. At least then you know how accurate they actually are; a claim like this can’t follow from reasoning alone:
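To make that concrete, here’s a minimal sketch of what such a measurement could look like. Everything here is hypothetical: `ask_model` is a stand-in for whatever LLM API you actually call, and the labeled examples are toy placeholders.

```python
# Minimal sketch: measure accuracy empirically instead of arguing about it.

def ask_model(question: str) -> str:
    # Hypothetical placeholder; swap in a real LLM API call here.
    return "4"

# Toy question/answer pairs with known-correct answers.
labeled_examples = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]

# Count exact matches after normalizing whitespace and case.
correct = sum(
    ask_model(q).strip().lower() == a.strip().lower()
    for q, a in labeled_examples
)
print(f"Accuracy: {correct}/{len(labeled_examples)} "
      f"({correct / len(labeled_examples):.0%})")
```

A real evaluation would need many more examples and a more forgiving scoring rule than exact string match, but the point stands: the number comes from measurement, not from reasoning about what the model “should” be able to do.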
zeppo@lemmy.world 1 year ago
That’s useful. It’s also good to note that the information the agent can relay depends heavily on the data used to train the model, so the results could change from one model version to the next.