Comment on I'm looking for an article showing that LLMs don't know how they work internally

theunknownmuncher@lemmy.world 1 week ago

I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM can’t be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.

Just as it’s not actually an AI assistant, but is trained and prompted to output the text an AI assistant would be expected to respond with, if it’s expected to pursue self-preservation, then it will output text that matches that. Its output is always “fake”.

That doesn’t mean there isn’t potentially a real element of self-preservation, but you’d need to dig into and trace through the network to show it, not rely on the text output.
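To make “tracing through the network” concrete, here’s a minimal sketch of the difference between reading a model’s text output and inspecting its internal state. It assumes the Hugging Face transformers library and the small open gpt2 checkpoint; the prompt is illustrative, and dumping layer shapes is obviously not a self-preservation detector, just the starting point an interpretability analysis would build on (e.g. by probing or patching these activations).

```python
# Sketch: look at internal activations instead of the sampled text.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The assistant was told it would be shut down."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # output_hidden_states=True exposes the per-layer activations --
    # the "under the hood" state, as opposed to the generated text.
    outputs = model(**inputs, output_hidden_states=True)

# One tensor per layer (plus the embedding layer),
# each shaped (batch, tokens, hidden_dim).
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(layer.shape)}")
```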
