Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
XLE@piefed.social 15 hours ago
Anthropic’s “ethical” concerns were performative. They only fearmonger about fictional threats that make their product sound powerful (read: worth throwing money at).
They try to scare people with fictional stories of AGI, a thing that isn’t happening, while ignoring widespread CSAM and sexual harassment generation, a thing that is happening.
Iconoclast@feddit.uk 15 hours ago
Are we not moving toward AGI? Because from where I stand, there are only three scenarios: AI research is going backwards, no progress is being made whatsoever, or we’re continuing to improve our systems incrementally and therefore inevitably moving toward AGI. Unless, of course, you think we’re never going to reach it, which I view as quite an insane claim in itself.
If we’re not moving toward it, then I’d love to hear your explanation for why we’re moving backwards or not making any progress at all.
Whether we’re 5 or 500 years away from AGI is completely irrelevant to the people who worry about it. It’s not the speed of the progress - it’s the trajectory of it.
XLE@piefed.social 14 hours ago
We are not “moving towards AGI” in any way with any modern technology, in the same way that we are not “moving towards FTL travel” because a car company added cylinders to an engine.
The real “AI” dangers are people like Eliezer Yudkowsky, a man who scares vulnerable people, sexually abuses them, and has spawned at least one murderous cult.
Iconoclast@feddit.uk 14 hours ago
So you believe AI research is completely frozen or moving backwards. Please explain.
Comparisons to faster-than-light travel are completely disingenuous and made in bad faith - that would break the laws of physics and you know it.
XLE@piefed.social 14 hours ago
According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?
Anthropic is not special. They’re doing the LLM thing like everybody else. Yann LeCun, one of the so-called godfathers of AI, said himself that LLMs are a dead end on this front. But even if he hadn’t chimed in, it’s your job to show that they’ll lead to AGI and how, not my job to show you they won’t.