Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.

Knock_Knock_Lemmy_In@lemmy.world 20 hours ago

When given explicit instructions to follow, the models still failed, because they had not seen similar instructions before.
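
For context: the "explicit instructions" were, as far as I understand the paper, full solution procedures for the puzzles, e.g. the standard Tower of Hanoi recursion spelled out in the prompt. Roughly this kind of algorithm (my own Python sketch, not the paper's exact prompt):

```python
# Tower of Hanoi: the kind of explicit, step-by-step procedure the paper
# reportedly put in the prompt (a sketch, not the paper's wording).
def hanoi(n, source, target, spare, moves):
    """Append the moves needed to shift n disks from `source` to `target`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # move the top n-1 disks out of the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(moves)  # 2**3 - 1 = 7 moves
```

Following a procedure like that is pure mechanical execution, yet performance still collapsed as the puzzles grew.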

This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

source