The paper doesn’t say LLMs can’t reason
Authors gotta get paid. This article is full of pseudo-scientific jargon.
The paper doesn’t say LLMs can’t reason; it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.
spankmonkey@lemmy.world 2 weeks ago
I agree with the author.
The fact that they only work up to a certain point despite increased resources is proof that they are just pattern matching, not reasoning.
auraithx@lemmy.dbzer0.com 2 weeks ago
Performance eventually collapses due to architectural constraints; this mirrors cognitive overload in humans. Reasoning isn’t just about adding compute — it requires mechanisms like abstraction, recursion, and memory. The models’ collapse doesn’t prove “only pattern matching”; it highlights that today’s models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.
technocrit@lemmy.dbzer0.com 2 weeks ago
Performance collapses because luck runs out. Bigger destruction of the planet won’t fix that.
auraithx@lemmy.dbzer0.com 2 weeks ago
Brother, you better hope it does, because even if emissions dropped to zero tonight the planet wouldn’t stop warming, and it wouldn’t stop what’s coming for us.