Comment on What is a good eli5 analogy for GenAI not "knowing" what they say?
skillissuer@discuss.tchncs.de 6 months ago
it’s a spicy autocomplete. it doesn’t know anything, doesn’t understand anything, doesn’t reason, and it won’t stop until your boss thinks it’s good enough at your job for “restructuring” (it’s not). any illusion of knowledge comes from the fact that its source material is mostly factual. when you drift off into niche topics, or into something missing from the training data entirely, spicy autocomplete does what it does best: it makes shit up. some people call this hallucination, but it’s closer to making shit up confidently without knowing any better. humans do that too, but at least they usually know when they’re doing it
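to make the “autocomplete” point concrete, here’s a toy sketch in Python (my own illustration, not how any real model is built): it learns only which word tended to follow which, then extends a prompt by picking plausible next words. there’s no fact lookup anywhere, so it will happily complete prompts about things it never saw:

```python
import random
from collections import defaultdict

# tiny "training corpus": mostly factual sentences
corpus = (
    "the sky is blue . "
    "the grass is green . "
    "the sun is bright . "
    "paris is the capital of france . "
).split()

# bigram table: for each word, which words followed it in training?
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def autocomplete(prompt, n_words=8):
    """Extend the prompt by repeatedly picking a likely next word.
    No facts are checked anywhere -- only 'what tended to come next'."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # word never seen in training; a real LLM would still guess
        words.append(random.choice(candidates))
    return " ".join(words)

# "moon" never appears in the corpus, but the model answers anyway,
# e.g. "the moon is blue ." or "the moon is the capital of france ."
print(autocomplete("the moon is"))
```

real LLMs replace the lookup table with a neural network over tokens, but the core loop is the same: pick a likely continuation, append it, repeat. fluent and confident output, with zero notion of whether it’s true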
Drummyralf@lemmy.world 6 months ago
Hmm, now that I read this, I have a thought: it might also be hard to wrap our heads around this issue because we all talk about AI as if it were an entity. Even the sentence “it makes shit up” credits the AI with “thinking” about things. It doesn’t make shit up; it does exactly what it was programmed to do: generate good sentences. And it succeeds.
Maybe the answer is just to stop talking about AIs as “saying” things and start talking about GenAI as “generating sentences”. That way we emotionally distance ourselves from “it”, and it becomes harder to ascribe consciousness to an AI.