Comment on 95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
sqgl@sh.itjust.works 17 hours ago
This comment, summing up the author’s own admission, shows AI can’t reason:
this new result was just a matter of search and permutation and not discovery of new mathematics.
REDACTED@infosec.pub 16 hours ago
I never said it discovered new mathematics; I implied it can reason. This is a clear example of reasoning to solve a problem.
xektop@lemmy.zip 15 hours ago
You need to dig deeper into how that “reasoning” works; you’ve been misled if you think it does what you say it does.
REDACTED@infosec.pub 15 hours ago
Can you elaborate? How is this not reasoning? Define reasoning for me.
NoMoreCocaine@lemmy.world 14 hours ago
While that contains the word “reasoning”, that does not make it such. If this is about the new “reasoning” capabilities of the latest LLMs: it was, if I recall correctly, found out that they’re not actually reasoning, just doing fancy footwork to appear as if they were reasoning, just like they do fancy dice rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means it’s not actually “reasoning”; it’s just applying another pattern.
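To make that concrete, here is a rough sketch of the kind of perturbation test being described. Everything here is illustrative: ask_model is a placeholder for whatever LLM call you have available (not a real API), and the template is a made-up word problem. The idea is to generate many variants where only the names and numbers change and check whether accuracy holds up; if the model were actually applying the underlying logic, surface substitutions shouldn’t matter.

```python
# Sketch of a name/number perturbation test for a single word-problem template.
# ask_model(question) -> str is assumed to exist; it is a hypothetical placeholder.
import random

TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have now?"

def make_variant(rng):
    """Build one variant of the problem with randomized surface details."""
    name = rng.choice(["Alice", "Bob", "Priya", "Chen"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

def accuracy(ask_model, n=100, seed=0):
    """Fraction of variants where the model's reply contains the correct answer.

    Uses a crude substring check; a real evaluation would parse the answer properly.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        question, answer = make_variant(rng)
        reply = ask_model(question)
        correct += str(answer) in reply
    return correct / n
```

The reported finding is that accuracy drops under exactly this kind of substitution, which is the basis for the claim that the models are matching patterns rather than reasoning.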
With the current technology we’ve gone so far into brute forcing the appearance of intelligence that it is becoming quite a challenge to diagnose what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forward, at least with our current computer technology. I suspect we’ll need a breakthrough of some kind.
But aside from the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so, when they tried the connectionist approach with hardware that could not parallel process and with datasets made by hand rather than from stolen content. So we’re just using the same approach we had before we turned to “handcrafted” AI with LISP machines in the 80s, which failed. I doubt this earlier and (very) inefficient approach can ultimately solve the problem. If this keeps going we’ll get pretty convincing results, but I seriously doubt we’ll get proper reasoning out of the current approach.