Comment on Emergent introspective awareness in large language models

kromem@lemmy.world 4 days ago

I tend to see a lot of discussion on here that's pretty out of touch with the present state of things, echoing earlier beliefs about LLM limitations, like "they only predict the next token," that have already been falsified.

This latest research from Anthropic confirms a number of shifts in the most recent generation of models that many here might find unexpected, especially given the popular assumptions.

Particularly interesting are the emergent capabilities of detecting when control vectors have been injected into their activations, and of silently "thinking about" a concept so that the appropriate feature vectors activate even though the concept never ends up in the output tokens. A rough sketch of what that kind of injection looks like is below.
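For anyone unfamiliar with what "injecting a control vector" means in practice, here's a minimal sketch: a crude concept vector is built from a difference of activations and then added into one layer's residual stream via a forward hook. The model (gpt2 as a stand-in), layer index, scale, and the way the vector is derived are all illustrative placeholders, not the actual protocol from the Anthropic paper.

```python
# Minimal sketch of "concept injection" via a steering vector.
# Model, layer, scale, and the concept prompts are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies Claude models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_residual(prompt: str, layer: int) -> torch.Tensor:
    """Mean hidden state at `layer` for a prompt, used to build a crude concept vector."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

LAYER = 6  # arbitrary middle layer, purely for illustration
# Crude "shouting" concept vector: difference of mean activations.
concept_vec = (mean_residual("HELLO! I AM SHOUTING!", LAYER)
               - mean_residual("Hello, I am speaking quietly.", LAYER))

def inject(module, inputs, output, vec=concept_vec, scale=4.0):
    # Add the steering vector to every position of this layer's output.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(inject)
prompt = "Do you notice anything unusual about your internal state right now?"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**ids, max_new_tokens=40)
handle.remove()
print(tok.decode(gen[0], skip_special_tokens=True))
```

The effect the paper probes is roughly this setup: activations get nudged toward a concept the prompt never mentions, and the question is whether the model can report that something was injected rather than just being steered by it.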

source