Comment on Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
Iconoclast@feddit.uk 17 hours ago
“We are not ‘moving towards AGI’ in any way with any modern technology”
So that means you believe AI research is completely frozen still or moving backwards. Please explain.
Comparisons to faster-than-light travel are completely disingenuous and bad faith - that would break the laws of physics and you know it.
XLE@piefed.social 17 hours ago
According to Dario Amodei, this is the year we are getting New Science. And apparently he believes in Dyson Spheres too. How do we feel about that?
Anthropic is not special. They’re doing the LLM thing like everybody else. One of the so-called Godfathers of AI, Yann LeCun himself, said LLMs were a dead end on this front. But even if he hadn’t chimed in, it’s your job to show how they’ll lead to AGI, not my job to show that they won’t.
Iconoclast@feddit.uk 16 hours ago
If you’re just gonna keep ignoring every single point I make and keep rambling about unrelated stuff, then there’s nothing left to discuss here. If you actually had an argument, you would’ve made it by now.
XLE@piefed.social 16 hours ago
Your claim: AI seems to be getting better, therefore AGI will happen
My rebuttal: they aren’t linked
Does that clear matters up?
Iconoclast@feddit.uk 16 hours ago
My argument is that we’ll keep incrementally improving our technology, as we have throughout human history. Assuming that general intelligence is not substrate-dependent - meaning that what our brains are doing can be replicated in silicon - and that we don’t destroy ourselves before we get there, then it’s just a matter of time before we create a system that’s as intelligent as we are: AGI.
I already said that the timescale doesn’t matter here. It could take a hundred years or two thousand. We’re still moving toward it, and it doesn’t matter how slowly you move - as long as you keep moving, you’ll eventually reach your destination.
So, the way I see it, if we never create AGI, it’s either because we destroyed ourselves before we got there, or because there’s something borderline supernatural about the human brain that makes it impossible to replicate in silicon.