Comment on I'm looking for an article showing that LLMs don't know how they work internally

theunknownmuncher@lemmy.world ⁨5⁩ ⁨days⁩ ago

You’re confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it cannot reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and exhibits behavior like planning ahead.

source