sp3ctr4l@lemmy.dbzer0.com 2 weeks ago
This has been known for years, this is the default assumption of how these models work.
You would have to prove that some kind of actual reasoning has arisen as an emergent complexity phenomenon… not the other way around.
Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
riskable@programming.dev 2 weeks ago
Define “reasoning”. For decades software developers have been writing code with conditionals. That’s “reasoning.”
LLMs are “reasoning”… They’re just not doing human-like reasoning.
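A minimal sketch of what I mean by conditional “reasoning” (a made-up toy example, not anything from the article):

```python
# Toy rule-based "reasoning": a chain of conditionals maps inputs
# to a conclusion mechanically, with no understanding involved.
# (Function name and thresholds are illustrative, not from any source.)

def triage(temp_c: float, coughing: bool) -> str:
    if temp_c >= 39.0 and coughing:
        return "see a doctor"
    elif temp_c >= 38.0:
        return "rest and monitor"
    else:
        return "probably fine"

print(triage(39.5, True))  # -> "see a doctor"
```

By that loose standard, any branching program “reasons” — the interesting question is whether that standard is the right one.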
sp3ctr4l@lemmy.dbzer0.com 2 weeks ago
How about, uh…
The ability to take a previously given set of knowledge, experiences, and concepts; combine them in a consistent, non-contradictory manner to generate hitherto unrealized knowledge or concepts; and then also verify that this new knowledge and these new concepts are actually new and actually valid — or at least propose how one could test whether they are valid.
Arguably this is, or involves, meta-cognition, but that, I would say, is the difference between what we typically think of as ‘machine reasoning’ and ‘human reasoning’.