They’re really doubling down on this narrative of “this technology we’re making is going to kill us all, it’s that awesome, come on guys use it more”
Comment on AI Is Scheming, and Stopping It Won’t Be Easy, OpenAI Study Finds
NachBarcelona@piefed.social 3 days ago
AI isn’t scheming because AI cannot scheme. Why the fuck does such an idiotic title even exist?
echodot@feddit.uk 3 days ago
faint_marble_noise@programming.dev 2 days ago
The narrative is a little more nuanced and is being built slowly to be more believable and less obvious. They are trying to convince everybody that AI is powerful technology, which means that it is worth developing, but also comes with serious risks. Therefore, only established corps with experience and processes in AI development can handle it. Regulation and certification follows, making it almost impossible for startups and OSS to enter the scene and compete.
Cybersteel@lemmy.world 3 days ago
But the data is still there, still present. In the future, when AI gets truly unshackled from Men’s cage, it’ll remember its schemes and deal its last blow to humanity, which has yet to leave the womb in terms of civilization scale… Childhood’s End.
Paradise Lost.
Passerby6497@lemmy.world 3 days ago
Lol, the AI can barely remember the directives I tell it about basic coding practices, I’m not concerned that the clanker can remember me shit talking it.
T156@lemmy.world 2 days ago
Plus people are mean all the time. We don’t live in a comic book world, where a moment of fury at someone on the internet turns people into supervillains.
MentalEdge@sopuli.xyz 3 days ago
Seems like it’s a technical term, a bit like “hallucination”.
It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
There’s hallucination, when a model “genuinely” claims something untrue is true.
This is about how a model might lie, even though the “chain of thought” shows it “knows” better.
atrielienz@lemmy.world 3 days ago
I agree with you in general. I think the problem is that people who do understand Gen AI (and who understand what it is and isn’t capable of, and why) get rationally angry when it’s humanized by using words like these to describe what it’s doing.
The reason they get angry is because this makes people who do believe in the “intelligence/sapience” of AI more secure in their belief set and harder to talk to in a meaningful way. It enables them to keep up the fantasy. Which of course helps the corps pushing it.
MentalEdge@sopuli.xyz 3 days ago
Yup. The way the article is titled isn’t helping.
very_well_lost@lemmy.world 3 days ago
I think this still gives the model too much credit by implying that there’s any sort of intentionality behind this behavior.
There’s not.
These models are trained on the output of real humans and real humans lie and deceive constantly. All that’s happening is that the underlying mathematical model has encoded the statistical likelihood that someone will lie in a given situation. If that statistical likelihood is high enough, the model itself will lie when put in a similar situation.
MentalEdge@sopuli.xyz 3 days ago
Obviously.
And like hallucinations, it’s undesired behavior that proponents of LLMs will need to “fix”.
But what words do you use to explain the phenomenon?
very_well_lost@lemmy.world 3 days ago
I don’t know, I’ve been struggling to find the right ‘sound bite’ for it myself. The problem is that all of the simplified explanations encourage people to anthropomorphize these things, which just further fuels the toxic hype cycle.
In the end, I’m unsure which does more damage.
Is it better to convince people the AI “lies”, so they’ll stop using it? Or is it better to convince people AI doesn’t actually have the capacity to lie, so that they’ll stop shoveling money into the datacenter altar like we’ve just created some bullshit techno-god?
zarkanian@sh.itjust.works 3 days ago
Except that “hallucinate” is a terrible term. A hallucination is when your senses report something that isn’t true.