Comment on We have to stop ignoring AI’s hallucination problem

Danksy@lemmy.world 9 months ago
It’s not a bug, it’s a natural consequence of the methodology. A language model won’t always be correct when it doesn’t know what it is saying.

ALostInquirer@lemm.ee 9 months ago
Yeah, on further thought, and as I mention in other replies, my view is shifting: the real bug here is how it’s marketed in many cases (as a digital assistant/research aid) and, in turn, how it’s used, or attempted to be used, accordingly.
vrighter@discuss.tchncs.de 9 months ago
it never knows what it’s saying
TheDarksteel94@sopuli.xyz 9 months ago
Oh, at some point it will lol
Danksy@lemmy.world 9 months ago
That was what I was trying to say; I can see that the wording was ambiguous.
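[Editor’s note: to ground Danksy’s point that hallucination is a natural consequence of the methodology, here is a minimal, hypothetical sketch: a toy bigram sampler in Python, nothing like a production LLM. The corpus and the generate function are invented for illustration. The generation loop samples each next token from a distribution over what tends to follow the previous one, and no step anywhere checks the output against facts, so a fluent wrong answer is an expected outcome.]

```python
import random

# Toy bigram "language model": counts of which word follows which,
# built from a tiny invented corpus. A real LLM is vastly larger and
# trained differently, but the generation loop has the same shape.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of peru is lima ."
).split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(prompt, max_tokens=8):
    """Sample one token at a time; the loop rewards fluency only."""
    out = prompt.split()
    for _ in range(max_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        # Plausible is not the same as true: nothing here verifies
        # the claim being assembled, only that the word fits locally.
        out.append(random.choice(candidates))
        if out[-1] == ".":
            break
    return " ".join(out)

# In this toy model's statistics, "is" is followed by paris, madrid,
# or lima with equal frequency, so it completes the prompt with any
# of them, fluently and sometimes wrongly:
for _ in range(5):
    print(generate("the capital of france is"))
```

[Real models differ enormously in scale and training objective, but the same property holds: sampling optimizes plausibility rather than truth, which is why confident wrong statements are a product of the method rather than a malfunction of it.]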