The architecture of these LRMs may make monkeys fly out of my butt. It hasn’t been proven that the architecture doesn’t allow it.
You are asking to prove a negative. The onus is to show that the architecture can reason. Not to prove that it can’t.
communist@lemmy.frozeninferno.xyz 2 weeks ago
That’s very true. I’m just saying this paper did not eliminate the possibility, so it’s not as significant as it sounds. Had they accomplished that, the bubble would collapse; as it stands, this won’t meaningfully change anything.
Knock_Knock_Lemmy_In@lemmy.world 2 weeks ago
This paper does provide a solid proof by counterexample: reasoning (following a given algorithm) does not occur when it should.
The paper doesn’t need to prove that reasoning never has or never will occur. It only demonstrates that current claims of AI reasoning are overhyped.
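To make the “counterexample” point concrete, here’s a minimal sketch assuming a Tower of Hanoi-style task (the kind of puzzle these evaluations tend to use; the function names are mine for illustration, not the paper’s). Give the model the recursive algorithm, then mechanically check its move transcript. A single invalid transcript refutes “the model executed the algorithm it was given”, with no need to prove a universal negative:

```python
# Hypothetical illustration of the counterexample test: generate the known
# algorithm's move sequence, then validate any candidate transcript against
# the puzzle's rules.

def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Optimal move sequence for n disks (the algorithm a model is given)."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def is_valid_solution(n, moves):
    """Simulate the moves; return True only if they legally solve the puzzle."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False                      # larger disk onto smaller
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))

# One invalid transcript is a counterexample to "the model followed the
# algorithm" -- no claim about all possible future models is needed.
assert is_valid_solution(3, hanoi_moves(3))
assert not is_valid_solution(3, [("A", "C"), ("A", "C")])
```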
communist@lemmy.frozeninferno.xyz 2 weeks ago
It does need to do that to meaningfully change anything, however.
Knock_Knock_Lemmy_In@lemmy.world 2 weeks ago
Other way around. The claimed meaningful change (reasoning) has not occurred.