little_hermit@lemmus.org 1 year ago
If you’re wondering how AI wipes us out, you’d have to consider humanity’s tendency to adopt any advantage offered in warfare. Nations are in perpetual distrust of each other – an evolutionary characteristic of our tribal brains. The other side is always plotting to dominate you, to take your patch of dirt. Your very survival depends on outpacing them! You dip your foot in the water: add AI to this weapons system, or that self-driving tank. But look, the other side is doing the same thing. You train even larger models, give them more control of your arsenal. But look, the other side is doing even more! You develop ever more sophisticated AI models; your very survival depends on it! And then, one day, your AI model is so sophisticated that it becomes self-aware… and you wonder where it all went wrong.
Salamendacious@lemmy.world 1 year ago
So you’re basically scared of Skynet?
jarfil@lemmy.world 1 year ago
They went a bit too far with the argument… the AI doesn’t need to become self-aware, just exceptionally efficient at eradicating “the enemy”… and just let it loose from all sides all at once.
How many people are there in the world, who aren’t considered an “enemy” by at least someone else out there?
Salamendacious@lemmy.world 1 year ago
So you’re scared of Skynet Lite?
jarfil@lemmy.world 1 year ago
“Scared” is a strong word… more like “curious”, to see how it goes. I’m mostly waiting for the “autonomous rifle dog fails” videos, hoping to not be part of them.
TwilightVulpine@lemmy.world 1 year ago
Only if human military leaders are stupid enough to give AI free and unlimited access to weaponry, rather than just using it as an advisory tool and making the calls themselves.
jarfil@lemmy.world 1 year ago
Part of the reason for “adding AI” to everything, even “dumb AI”, is to reduce reaction times and increase the obedience rate. Meaning, to cut the human out of the loop.
It’s being sold as a “smart” move.
little_hermit@lemmus.org 1 year ago
Don’t be ridiculous, time travel is impossible.
T00l_shed@lemmy.world 1 year ago
Maybe AI will figure it out 😆
FarceOfWill@infosec.pub 1 year ago
I’m scared of Second Variety
Salamendacious@lemmy.world 1 year ago
If an AI were to gain sentience, basically becoming an AGI, then I think it’s probable that it would develop an ethical system independent of its programming and be able to make moral decisions, such as that murder is wrong. Fiction deals with killer robots all the time because fiction is a narrative, and narratives work best with both a protagonist and an antagonist. Very few people in the real world have an antagonist who actively works against them. Don’t let fiction influence your thinking too much; it’s just words written by someone. It isn’t a crystal ball.
TwilightVulpine@lemmy.world 1 year ago
I wouldn’t take AI developing morality as a given. Not only would an AGI be a fundamentally different form of existence that wouldn’t necessarily treat us as peers, even if it takes us as a reference, but human morality is also full of exceptionalism and excuses for terrible actions. It wouldn’t be hard for an AGI to consider itself superior and our lives inconsequential.
But there is little point in speculating about that when the limited AI that we have is already threatening people’s livelihoods right now, even just by being used as a tool.
FarceOfWill@infosec.pub 1 year ago
You realise those robots were made by humans to win a war? That’s the trick: the danger is humans using AI or trusting it, not Skynet or other fantasies.