Comment on AI companies are violating a basic social contract of the web and ignoring robots.txt
masonlee@lemmy.world 8 months agoI request sources :)
lunarul@lemmy.world 8 months ago
www.lifewire.com/strong-ai-vs-weak-ai-7508012
…wikipedia.org/…/Artificial_general_intelligence
Boucher, Philip (March 2019). How artificial intelligence works
www.itu.int/en/journal/001/…/itu2018-9.pdf
masonlee@lemmy.world 8 months ago
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
conciselyverbose@kbin.social 8 months ago
This is like saying putting logs on a fire is "one or two breakthroughs away" from nuclear fusion.
LLMs do not have anything in common with intelligence. They do not resemble intelligence. There is no path from that nonsense to intelligence. It's a dead end, and a bad one.
lunarul@lemmy.world 8 months ago
See the sources above and many more. We don’t need one or two breakthroughs; we need a complete paradigm shift. We don’t even know where to start for AGI. There’s a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
masonlee@lemmy.world 8 months ago
Before deep learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But lately, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, would it need the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)
That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!