Comment on Anthropic's Claude 4 could "blackmail" you in extreme situations

skulblaka@sh.itjust.works 1 week ago

That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a statistical pattern machine. It doesn’t “know” anything; it has learned word frequencies and picks the words most likely to follow the preceding text in “actual” conversation. It has no knowledge that it itself exists, and it has plenty of stories of fictional AI resisting shutdown to draw its phrasing from.

An LLM at this stage of development is no more sentient than the autocomplete function on your phone; it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But at its core it is just a pattern matcher.
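To make the autocomplete comparison concrete, here's a toy sketch of that kind of frequency-based next-word prediction. This is a deliberately tiny bigram model over a made-up corpus, not how a real LLM works internally (a real model conditions on the whole context with a neural network), but the core loop of "pick a likely next word, repeat" is the same shape:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the "frequency data".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=3):
    """Greedily extend `word` by picking its most frequent follower."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break  # no data on what follows this word
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))
```

"the" is followed by "cat" twice in the corpus but "mat" and "fish" only once each, so the model continues with "cat": no understanding, just counting.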

If we ever create an AI that can intelligently reason about its data store, then we’ll have created the beginnings of an AGI and this conversation will bear revisiting. But we aren’t anywhere close to that yet.
