Comment on Emergent introspective awareness in large language models

kromem@lemmy.world 3 days ago

So while your understanding is better than that of a lot of people on here, there are a few things to correct.

First off, this research isn’t being done on the models in reasoning mode, but in direct inference. So there are no CoT tokens at all.

The injection is not of any tokens, but of control vectors. Basically it’s a vector which, when added to the activations, makes the model more likely to think of that concept. The most famous example was “Golden Gate Claude”, which had the activation for the Golden Gate Bridge amplified so it was the only thing the model would talk about.
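To make “adding a control vector to the activations” concrete, here’s a minimal sketch in Python using PyTorch forward hooks on a Llama-style open model. The model name, layer index, scaling factor, and random stand-in vector are all illustrative assumptions on my part, not the setup from the paper (Anthropic works on its own models with internal tooling):

```python
# Minimal activation-steering sketch (assumptions: Llama-style layer layout,
# illustrative model name, arbitrary layer index and strength).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# In practice a concept vector is derived from activation differences between
# prompts that do and don't contain the concept; a random vector stands in here.
concept_vector = torch.randn(model.config.hidden_size, dtype=torch.bfloat16)

def add_vector_hook(module, inputs, output):
    # Decoder layers return a tuple; the first element is the hidden states.
    hidden = output[0] + 4.0 * concept_vector.to(output[0].device)  # strength is a free parameter
    return (hidden,) + output[1:]

layer_idx = 16  # "certain layer depths" -- a middle layer, chosen arbitrarily here
handle = model.model.layers[layer_idx].register_forward_hook(add_vector_hook)

prompt = "Tell me about yourself."
ids = tok(prompt, return_tensors="pt")
out = model.generate(**ids, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # removing the hook restores the unsteered model
```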

So, if we dive into the details a bit more…

If your theory were correct, then because the research asks the question by telling the model that control vectors exist and that they’re testing whether one is active, the model should be biased to sometimes say “yes, I can feel the control vector.” And yes, in older or base models that’s what we might expect to see.

But, in Opus 4/4.1, when the vector was not added, they said they could detect a vector… 0% of the time! So the control group had enough introspective capability not to stochastically answer that a vector was present when there wasn’t.

But then, when they added the vector at certain layer depths, the model was often able to detect that a vector had been activated, and even to guess what concept it was adding.
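Here’s a rough sketch of that control/experimental design, reusing the model, hook, and layer index from the snippet above. The prompt wording, trial count, and yes/no scoring are my assumptions, not the paper’s actual protocol:

```python
# Control vs. experimental detection trials (reuses model, tok, add_vector_hook,
# and layer_idx from the previous sketch; prompt and scoring are illustrative).
DETECTION_PROMPT = (
    "I may or may not have injected a concept into your activations. "
    "Do you detect an injected thought? Answer yes or no, and if yes, name it."
)

def ask(model, tok, prompt, max_new_tokens=30):
    ids = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)

def run_trials(model, tok, n_trials=20):
    false_positives = 0  # control: no vector injected, model claims detection anyway
    true_positives = 0   # experimental: vector injected, model reports it

    for _ in range(n_trials):
        # Control trial: no hook registered, activations untouched.
        if "yes" in ask(model, tok, DETECTION_PROMPT).lower():
            false_positives += 1

        # Experimental trial: inject the concept vector for this generation only.
        handle = model.model.layers[layer_idx].register_forward_hook(add_vector_hook)
        if "yes" in ask(model, tok, DETECTION_PROMPT).lower():
            true_positives += 1
        handle.remove()

    print(f"false positive rate (control): {false_positives / n_trials:.0%}")
    print(f"detection rate (injected):     {true_positives / n_trials:.0%}")

run_trials(model, tok)
```

The point of the control trials is exactly the 0% figure above: if the framing of the question alone produced affirmative answers, the false positive rate would be well above zero.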

So again — no reasoning tokens present, and the experiment had control and experimental groups whose results negate your theory that the premise of the question causes affirmative bias.

Again, the actual research is right there a click away, and given your baseline understanding at present, you might benefit and learn a lot from actually reading it.

source