Comment on "Leading AI models fail new test of artificial general intelligence"
nectar45@lemmy.zip 5 days ago
The better an AI is at logic, the less creative it often becomes; the more creative an AI gets, the worse it gets at accurately recalling knowledge; and the better an AI gets at recalling knowledge, the more it flounders at thinking critically and logically instead of just lazily reciting its knowledge perfectly back at you.
Until AI researchers find a way to solve this rock-paper-scissors of constant self-sabotage, AI can't advance to the next phase.
spankmonkey@lemmy.world 5 days ago
This is because AI is not aware of context due to not being intelligent.
What is called creativity is really just randomization within the constraints of the design. That randomization reduces accuracy. If the 'creativity' is reduced, the output becomes more accurate because the model is no longer adding random variation.
Using words like creativity, self-sabotage, hallucination, etc. makes it seem like AI is far more advanced than it actually is.
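What's being described here is essentially the sampling temperature used when decoding from a language model. A minimal sketch of the idea in Python (toy logits and illustrative names, not any particular library's API):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token from raw model scores (logits).

    temperature ~ 0  -> nearly deterministic (argmax): accurate but rigid
    temperature high -> flatter distribution: more 'creative', less accurate
    """
    rng = rng or np.random.default_rng()
    if temperature <= 1e-6:
        return int(np.argmax(logits))          # greedy: no randomness at all
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())      # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: scores over a 4-token vocabulary
logits = [4.0, 2.0, 1.0, 0.5]
print(sample_next_token(logits, temperature=0.0))  # always token 0
print(sample_next_token(logits, temperature=1.5))  # sometimes picks others
```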
nectar45@lemmy.zip 5 days ago
I know I am anthropomorphizing it too much, but the fact that the current design can't even increase this super basic creativity without messing itself up in the process is a massive problem in the design. The AI can't seem to understand when to be "creative" and when not to, or when to attempt to solve a problem by recalling data and when not to, showing it is far less aware than a person is at a very basic level.
spankmonkey@lemmy.world 5 days ago
Yes, the tradeoff between constrained randomization and accurately vomiting back the information it was fed is going to be difficult as long as it is designed to be interacted with as if it were a human who can know the difference.
It could be handled by having clearly defined ways of conveying whether the user wants factual or randomized output, but that would shatter the veneer of being intelligent.
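For what it's worth, most LLM APIs already expose something close to this knob as a temperature parameter; the missing part is surfacing it as an honest "factual vs. randomized" switch. A hypothetical sketch of such a switch (call_model is a stand-in, not a real API):

```python
# Hypothetical wrapper: the caller states intent, and we map it to a
# decoding temperature instead of pretending the model knows the difference.
MODE_TO_TEMPERATURE = {
    "factual": 0.0,   # greedy decoding: recite, don't improvise
    "balanced": 0.7,
    "creative": 1.3,  # flatter distribution: more variation, less accuracy
}

def call_model(prompt: str, temperature: float) -> str:
    # Stand-in for a real completion API; real code would call an LLM here.
    return f"[completion of {prompt!r} at temperature {temperature}]"

def generate(prompt: str, mode: str = "factual") -> str:
    return call_model(prompt, temperature=MODE_TO_TEMPERATURE[mode])

print(generate("When did WW2 end?", mode="factual"))
print(generate("Write a limerick about RAM.", mode="creative"))
```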
nectar45@lemmy.zip 5 days ago
It probably needs a secondary "brain lobe" that is responsible for figuring out what the user wants and adjusting the nodes accordingly... and said lobe needs to have long-term memory. But then the problem with THAT is it would make the AI a lot slower, and it could glitch hard.
AI research is hard.
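Roughly, that "secondary lobe" is a router in front of the model: classify the request, pick a decoding strategy, and keep a record of past requests. A toy sketch under those assumptions (the keyword heuristic and all names here are illustrative, not a real system):

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    # Crude stand-in for "long-term memory": remember past intents.
    history: list = field(default_factory=list)

FACT_WORDS = {"when", "who", "define", "how many", "what year"}

def classify_intent(prompt: str) -> str:
    """Toy 'second lobe': guess whether the user wants recall or invention."""
    lowered = prompt.lower()
    return "factual" if any(w in lowered for w in FACT_WORDS) else "creative"

def handle(prompt: str, session: Session) -> str:
    intent = classify_intent(prompt)
    session.history.append(intent)           # the memory the comment asks for
    temperature = 0.0 if intent == "factual" else 1.2
    return f"[{intent} answer at temperature {temperature}]"

s = Session()
print(handle("When was the transistor invented?", s))  # -> factual, T=0.0
print(handle("Write me a poem about rust.", s))        # -> creative, T=1.2
print(s.history)                                       # ['factual', 'creative']
```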
Eranziel@lemmy.world 5 days ago
Yes, you're anthropomorphizing far too much. An LLM can't understand or recall (in the common sense of those words, i.e. have a memory), and it is not aware.
Those are all things that intelligent, thinking beings do. LLMs are none of that. They are a giant black box of math that predicts text. They don't even understand what a word is, or the meaning of anything they vomit out. All they know is the statistically most likely text to come next, with a little randomization to add "creativity".
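That "most likely text to come next" claim can be made concrete with a toy bigram model. Real LLMs are neural networks over subword tokens, but the predict-then-sample loop has the same shape:

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which: these statistics ARE the whole "model".
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word: str, greedy: bool = True) -> str:
    counts = follows[word]
    if not counts:                                # dead end: restart anywhere
        return random.choice(corpus)
    if greedy:
        return counts.most_common(1)[0][0]        # most likely continuation
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # adds "creativity"

word = "the"
out = [word]
for _ in range(5):
    word = next_word(word, greedy=False)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the mat"
```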