Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.

vrighter@discuss.tchncs.de 15 hours ago

The probabilities are also fixed after training. You seem to be conflating running the LLM with different input with the model somehow adapting. The new context goes into the same fixed model. And yes, it can be reduced to fixed transition logic: you just need a table with every possible token combination in it. That is obviously intractable due to space, so we came up with a lossy compression scheme for it. The table itself is learned once, then it's fixed. The training amounts to generating a huge Markov chain. Just because the table is learned from data doesn't change what it actually is.
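A minimal sketch of that "intractable table", assuming a toy three-token vocabulary and a hypothetical `next_token_distribution` stand-in for the frozen model (a real model would run a forward pass, but the point is the same: the distribution is a fixed function of the context):

```python
import itertools
import random

# Toy stand-in for a trained LLM: the weights are frozen, so the
# next-token distribution is a fixed function of the context alone.
VOCAB = ["a", "b", "c"]
CONTEXT_LEN = 2  # stand-in for the model's finite context window


def next_token_distribution(context):
    # Deterministic given the context: same input, same probabilities,
    # exactly like a frozen model sampled at some fixed temperature.
    rng = random.Random("".join(context))  # stable seed per context
    weights = [rng.random() for _ in VOCAB]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(VOCAB, weights)}


# Enumerate every possible context and record its distribution: this is
# the explicit Markov-chain transition table described above. For a real
# vocabulary and context window the table is astronomically large, which
# is why the network acts as a lossy compression of it.
transition_table = {
    ctx: next_token_distribution(ctx)
    for ctx in itertools.product(VOCAB, repeat=CONTEXT_LEN)
}


def sample_next(context):
    dist = transition_table[context]
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]


# "Running the model with new input" just indexes the same fixed table;
# nothing in the table adapts to the prompt.
print(sample_next(("a", "b")))
```

Whether you look the distribution up in the materialised table or recompute it through the network, the transition probabilities for a given context never change after training; only the context you index with changes.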
