AGI is not coming! - Yannic Kilcher
Submitted 13 hours ago by cm0002@lemmy.world to technology@lemmy.zip
https://www.youtube.com/watch?v=hkAH7-u7t5k
Gradually_Adjusting@lemmy.world 13 hours ago
AGI can’t come from these LLMs because they are non-sensing, stationary, and fundamentally not thinking at all.
AGI might be coming down the pipe, but not from these LLM vendors. I hope a player like Numenta, or any other nonprofit, open-source initiative manages to create AGI so that it can be a positive force in the world, rather than a corporate upward wealth transfer like most tech.
rollin@piefed.social 12 hours ago
I don't follow. Why would a machine need to be able to move or have its own sensors in order to be AGI? And can you define what you mean by "thinking"?
Gradually_Adjusting@lemmy.world 12 hours ago
The argument is best made by Jeff Hawkins in his Thousand Brains book. I’ll try to be convincing and brief at the same time, but you will have to be satisfied with shooting the messenger if I fail in either respect. The basic thrust of Hawkins’ argument is that you can only build a true AGI once you have a theoretical framework that explains the activity of the brain with reference to its higher cognitive functions, and that such a framework must stem from doing the hard work of sorting out how the neocortex actually goes about its business.
We know that the neocortex is the seat of our higher cognitive functions, which makes it the main area of interest for the development of AGI. A major part of Hawkins’ theory is that the neocortex is organized into many small repeating units called cortical columns, each of which models and makes predictions about the world from sensory data; what chiefly differs between creatures of different intelligence levels is the number of these columns. He holds that the columns vote amongst each other in real time about what is being perceived, constantly piping up, shushing each other, and revising their models as new data arrives, almost like a rowdy room full of parliamentarians trying to reach a consensus view. It is this ongoing internal hierarchy of models and perceptions that makes up our intelligence, as it were.
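If it helps to make the voting idea concrete, here's a toy sketch of my own (not Numenta's actual algorithm, and the objects and noise level are invented): each "column" gets a noisy local perception and casts a vote, and the pooled majority is far more reliable than any single column.

```python
import random
from collections import Counter

# Toy illustration of column voting (my own sketch, not Numenta's algorithm).
# Each "column" perceives the object noisily; pooling many noisy votes
# yields a consensus far more reliable than any one column alone.

OBJECTS = ["mug", "stapler", "apple"]

def column_guess(true_object: str, noise: float = 0.4) -> str:
    """A single column's noisy perception: right most of the time, wrong sometimes."""
    if random.random() < noise:
        return random.choice(OBJECTS)
    return true_object

def consensus(true_object: str, n_columns: int = 150) -> str:
    """Pool the votes of many columns and return the majority view."""
    votes = Counter(column_guess(true_object) for _ in range(n_columns))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    print(consensus("mug"))  # many noisy voters converge on "mug"
```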
The reason I ventured to argue that sensorimotor integration is necessary for an AI to be an AGI is because I got that idea from him as well: in order to gather meaningful sensory data, you have to be able to move about your environment to make sense of your inputs. A single static piece of sensory data makes no particular impression, and you can test this for yourself by having a friend press an unknown object against your skin without moving it, then trying to guess what it is from that one data point. Then have them move the object and see how quickly you gather enough information to make a solid prediction; if you were wrong, your brain will hastily rewire its models to incorporate that finding. An AGI would similarly fail to make any useful contributions unless it has the ability to move about its environment (asterisk - that includes a virtual environment) in order to continually learn and make predictions. That is the sort of thing we cannot possibly expect from any conventional LLM, at least as far as I’ve heard so far.
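Again purely as a toy illustration of my own (the objects and their features are made up): one static reading is ambiguous between several objects, but a sequence of readings gathered by moving the sensor quickly narrows the candidates down to one.

```python
# Toy illustration (my own, with hypothetical objects and features): one
# static touch is consistent with many objects, but moving the sensor and
# accumulating local features rapidly disambiguates them.

OBJECT_FEATURES = {
    "mug":     {"smooth", "curved", "handle"},
    "apple":   {"smooth", "curved", "stem"},
    "stapler": {"smooth", "flat", "hinge"},
}

def candidates_after(observations: list[str]) -> set[str]:
    """Return every object consistent with all features felt so far."""
    return {
        name for name, features in OBJECT_FEATURES.items()
        if all(obs in features for obs in observations)
    }

if __name__ == "__main__":
    # One static touch: "smooth" is consistent with everything.
    print(candidates_after(["smooth"]))                       # all three objects
    # Moving the sensor adds features and narrows the field.
    print(candidates_after(["smooth", "curved"]))             # {'mug', 'apple'}
    print(candidates_after(["smooth", "curved", "handle"]))   # {'mug'}
```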
I’d better stop there and see if you care to tolerate more of this sort of blather. I hope I’ve given you something to sink your teeth into, at any rate.