For now, AGI talk seems to be mostly hype to attract investors.
LLMs look like a dead end for any genuine logical reasoning: forbes.com/…/intelligence-illusion-what-apples-ai… In practice, that means you get a sloppy illusion of thought that loses all useful coherence as soon as the task exceeds the complexity of a lazy copy-and-paste job: fortune.com/…/mit-report-95-percent-generative-ai…
There is currently no technological innovation in sight to fix this. Instead, AI progress seems to be stalling: futurism.com/…/experts-concerned-ai-progress-wall
Iconoclast@feddit.uk 10 hours ago
Even if LLMs had never been invented, I'd be equally worried about AGI. I've been talking about it since 2016 or so, and LLMs aren't the motivation for that worry: nobody had even heard of them back then.
The timescale is also irrelevant here. I wouldn't be any less worried even if we're 500 years away from it. How close to Earth does the asteroid need to get before it's acceptable to start worrying about it?