Dramatic advances in artificial intelligence over the past decade (for narrow-purpose AI) and the last several years (for general-purpose AI) have transformed AI from a niche academic field to the core business strategy of many of the world’s largest companies, with hundreds of billions of dollars in annual investment in the techniques and technologies for advancing AI’s capabilities.
We now come to a critical juncture. As the capabilities of new AI systems begin to match and exceed those of humans across many cognitive domains, humanity must decide: how far do we go, and in what direction?
AI, like every technology, started with the goal of improving things for its creator. But our current trajectory, and implicit choice, is an unchecked race toward ever-more powerful systems, driven by economic incentives of a few huge technology companies seeking to automate large swathes of current economic activity and human labor. If this race continues much longer, there is an inevitable winner: AI itself – a faster, smarter, cheaper alternative to people in our economy, our thinking, our decisions, and eventually in control of our civilization.
But we can make another choice: via our governments, we can take control of the AI development process to impose clear limits, lines we won’t cross, and things we simply won’t do – as we have for nuclear technologies, weapons of mass destruction, space weapons, environmentally destructive processes, the bioengineering of humans, and eugenics. Most importantly, we can ensure that AI remains a tool to empower humans, rather than a new species that replaces and eventually supplants us.
This essay argues that we should keep the future human by closing the “gates” to smarter-than-human, autonomous, general-purpose AI – sometimes called “AGI” – and especially to the highly-superhuman version sometimes called “superintelligence.” Instead, we should focus on powerful, trustworthy AI tools that can empower individuals and transformatively improve human societies’ abilities to do what they do best. The structure of this argument follows in brief.
Can you replace politicians? I feel like that would actually be an improvement. Hell, it'd probably be an improvement if the current systems replaced politicians.
To be honest, though, I've never seen any evidence that AGI is inevitable; it's perpetually six months away, except in six months it'll still be six months away.
lectricleopard@lemmy.world 1 week ago
Where are these AGI systems? All I hear about is LLMs fooling execs, which is basically them just falling for a fast-talking computer.
cecilkorik@lemmy.ca 1 week ago
Fooling people is evidently all you need to do to become President of the United States and Commander in Chief of the world's largest military, with personal control over a massive stockpile of nuclear weapons. Fast-talking computers could be dangerous when they're infinitely faster, probably smarter, and slightly less neurotic than the current president. "Hey, come to think of it, has anyone ever even seen the 2028 president-elect on anything other than a screen?"
lectricleopard@lemmy.world 1 week ago
You're missing my point. An LLM can't be "smarter" or "smart" at all. It isn't sentient or conscious. It doesn't even have a stable internal model of the world. What people call "hallucinations" are simply low-probability words sampled from the edges of its otherwise convincing predictive distribution. Personification of LLMs is all marketing. They're really just well-presented statistical models.
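For what it's worth, the mechanism being described here can be sketched in a few lines. This is a toy illustration only, with a made-up four-word vocabulary and invented logits, not any real model: a next-token sampler mostly picks the high-probability word, but occasionally draws a confident-sounding token from the tail of the distribution — which is one way to see where "hallucinations" come from.

```python
import math
import random

# Toy vocabulary and made-up logits standing in for a model's
# next-token scores after the prompt "The capital of France is".
vocab = ["Paris", "London", "Berlin", "Narnia"]
logits = [4.0, 2.0, 1.5, 0.5]

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Sampling usually yields "Paris", but the tail tokens still
# carry nonzero probability, so occasionally "Narnia" appears,
# delivered with exactly the same fluency as the right answer.
random.seed(0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(1000)]
```

The point of the sketch is that nothing in the sampler distinguishes a "true" token from a "false" one; both are just draws from the same distribution.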