Comment on AI Is Scheming, and Stopping It Won’t Be Easy, OpenAI Study Finds
very_well_lost@lemmy.world 3 days agoIt refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
I think this still gives the model too much credit by implying that there’s any sort of intentionally behind this behavior.
There’s not.
These models are trained on the output of real humans and real humans lie and deceive constantly. All that’s happening is that the underlying mathematical model has encoded the statistical likelihood that someone will lie in a given situation. If that statistical likelihood is high enough, the model itself will lie when put in a similar situation.
MentalEdge@sopuli.xyz 3 days ago
Obviusly.
And like hallucinations, it’s undesired behavior that proponents off LLMs will need to “fix”.
But how you use words to explain the phenomenon?
very_well_lost@lemmy.world 3 days ago
I don’t know, I’ve been struggling to find the right ‘sound bite’ for it myself. The problem is that all of the simplified expansions encourage people to anthropomorphize these things, which just further fuels the toxic type cycle.
In the end, I’m unsure which does more damage.
Is it better to convince people the AI “lies”, so they’ll stop using it? Or is it better to convince people AI doesn’t actually have the capacity to lie so that they’ll stop investing will stop shoveling money into the datacenter altar like we’ve just created some bullshit techno-god
zarkanian@sh.itjust.works 3 days ago
Except that “hallucinate” is a terrible term. A hallucination is when your senses report something that isn’t true.
MentalEdge@sopuli.xyz 3 days ago
Yes.
Who are you trying to convince?
zarkanian@sh.itjust.works 3 days ago
The interface makes it appear that the AI is sapient. You talk to it like a human being, and it responds like a human being. Like you said, it might be impossible to avoid ascribing things like intentionality to it, since it’s so good at imitating people.
It may very well be a stepping-stone to AGI. It may not. Nobody knows. So, of course we shouldn’t assume that it is.
I don’t think that “hallucinate” is a good term regardless. Not because it makes AI appear sapient, but because it’s inaccurate whether the AI is sapient or not.