Comment on Bill Gates feels Generative AI has plateaued, says GPT-5 will not be any better
0ops@lemm.ee 11 months ago
My hypothesis is that the “extra juice” is going to be some kind of body: more senses than text input, and more ways to manipulate itself and its environment than text output. Right now, LLMs can kind of understand things in terms of text descriptions, but they will never be able to understand them the way a human can until they have all of the senses (and arguably the physical capabilities) that a human does. Thought experiment: can you describe your dog without sensory details, directly or indirectly? Behavior had to be observed somehow. Time is a sense too.
LinuxSBC@lemm.ee 11 months ago
First, they do have senses. For example, many LLMs can “see” images. Second, they’re actually pretty good at describing things. What they’re really bad at is analysis and logic, which is not related to senses at all.
0ops@lemm.ee 11 months ago
I’m not so convinced that logic is completely unrelated to the senses. How did you learn to count, add, and subtract mentally? You used your fingers. I don’t know about you, but even though I don’t count on my fingers anymore, I still tend to “visualize” math operations. Would I be capable of that if I had been born blind? Maybe I’d figure out how to do the same thing in a different dimension of awareness, but I have no doubt that being able to conceptualize visually helps my own logic. As for more complicated math, I can’t do that mentally either; I need a calculator and/or scratch paper. Maybe analogues to those could be implemented in the model? Maybe someone should just train a model on Khan Academy videos and see if it picks this stuff up emergently? I’m not saying that the ability to visualize is the only roadblock, though. I’m sure improvements could be made to the models themselves, but I bet visualization will be key to human-like reasoning.
LinuxSBC@lemm.ee 11 months ago
That’s a good point.