I warn about AI. I don’t care about AGI (yet) because we are far from it.
I’m worried about (in no particular order):
- Software companies amassing technical debt because AI-generated code gets used without proper review
- Massive security problems in critical infrastructure, for the exact same reason
- Cost savings being used to make the rich richer while the people who used to do the work are just fired
- Companies forcing AI into every single product even if it doesn’t make sense, just to make their shareholders happy
- Rapidly increasing prices of RAM, SSDs, HDDs, graphics cards and consequently pretty much all electronic devices
- The environmental impact because companies would rather build new power plants than optimize AI for efficiency
- A lack of education about the limitations of current implementations. People tend to feed every question they have into ChatGPT and trust the results even when they’re completely incorrect
- The inherent privacy nightmare that comes from funneling that much data into a centralized service
Nothing about this is small or cute.
I would be totally fine with something that can run locally on my laptop without cooking it and doubling my energy bill. Also an economy where productivity gains benefit the workers, not the CEO. If I can do the same work in half the time, let me have the rest of the day off at full pay instead of companies firing half their staff.
Iconoclast@feddit.uk 23 hours ago
Compared to AGI it is. We don’t know how far away we are from creating it. We can only speculate.
dfyx@lemmy.helios42.de 23 hours ago
The same way the Hiroshima and Nagasaki nuclear bombs are small and cute compared to a modern hydrogen bomb…
If we don’t solve the AI problems we already have, there is no point speculating about AGI because our lives will be unbearable long before it arrives.
ell1e@leminal.space 14 hours ago
AGI talk seems for now to be merely hype to get investors.
LLMs seem likely to be a dead end for anything requiring logical thought: forbes.com/…/intelligence-illusion-what-apples-ai… This means that at the end of the day you just get a sloppy illusion with no useful coherence as soon as the task exceeds the complexity of a literal copy-and-paste job: fortune.com/…/mit-report-95-percent-generative-ai…
There is currently no technological innovation to fix this. Instead, AI progress seems to be stalling: futurism.com/…/experts-concerned-ai-progress-wall
Iconoclast@feddit.uk 7 hours ago
Even if LLMs had never been invented, I’d be equally worried about AGI. I’ve been talking about it since 2016 or so; LLMs aren’t the motivation for that worry, since nobody had even heard of them back then.
The timescale is also irrelevant here. I’m no less worried even if we’re 500 years away from it. How close to Earth does the asteroid need to get before it’s acceptable to start worrying about it?