> > simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions.
>
> Yes, this shit is very basic. Not at all “intelligent.”
But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements; we’re trying to evaluate logical reasoning capabilities. A sociopath would be just as capable of solving logic puzzles as a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have any relevance to the topic of AI or human reasoning and problem-solving.
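To make that concrete, the classic recursive solution fits in a few lines. Here’s a minimal Python sketch (the peg names and the 8-disk example are mine, not from the study):

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # move n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255, i.e. 2**8 - 1 moves
```

No emotions required; it just follows the recurrence.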
As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, combined with a disinterest in others’ feelings, incentivize them to be deceitful or harmful in some scenarios. But LLMs are largely designed to be servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack many of the other subsystems an entity needs in order to function in a way that could be considered autonomous: having free will, desires of its own choosing, etc.
> > simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions.
>
> Yes, this shit is very basic. Not at all “intelligent.”
But reasoning about it is intelligent, and the point of this study is to determine the extent to which these models are reasoning or not - which, again, has nothing to do with emotions. And the real question here remains the one I raised initially: whether pattern following should automatically be disqualified as intelligence, as the person summarizing this study (and notably not the study itself) claims.
MCasq_qsaCJ_234@lemmy.zip 10 hours ago
If an AI is trained to do this, it will be very good at it - for example, GPT-2 was trained to multiply numbers of up to 20 digits:
nitter.net/yuntiandeng/…/1836114419480166585#m
Here they run the same test on GPT-4o, o1-mini, and o3-mini:
nitter.net/yuntiandeng/…/1836114401213989366#m
nitter.net/yuntiandeng/…/1889704768135905332#m
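For context on what such a test involves: generating and exactly grading these problems takes only a few lines of ordinary code, since Python integers are arbitrary-precision. A minimal sketch (the helper names are my own, not from the linked threads):

```python
import random

def make_problem(digits=20):
    """Generate two random n-digit operands for a multiplication prompt."""
    a = random.randrange(10**(digits - 1), 10**digits)
    b = random.randrange(10**(digits - 1), 10**digits)
    return a, b

def grade(a, b, model_answer: str) -> bool:
    """Exact check: a * b computed natively serves as the ground truth."""
    return model_answer.strip() == str(a * b)

a, b = make_problem()
print(f"{a} * {b} = ?")  # the prompt you would send to the model
```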