Comment on spicy one
jsomae@lemmy.ml 1 day ago
Not to mention Skynet. It always bothers me when people leave AI out of lists of x-risks. I guess it’s because a popular sci-fi movie predicted it would happen, so nobody takes it seriously. Or perhaps it’s just because AI is so unpopular now that nobody wants to devote any time to thinking about the ramifications of it becoming smarter.
uriel238@lemmy.blahaj.zone 1 day ago
[Image: XKCD comic]
Courtesy of XKCD. Long before we have to contend with unfriendly AI (committees of AI techs are already working on that problem), we’ll have to contend with someone like Musk or Bezos who is determined to own everything and capable of creating an AI-controlled army of killer robots.
We’re not sure how rogue AI will manifest. We are sure that rogue, power-seeking humans exist all the time, and that positions of power are commonly filled by them. (That’s the primary argument for election by sortition, i.e. selection by lottery.)
jsomae@lemmy.ml 1 day ago
OK, so to be clear, you’re saying that AI x-risk is already partially or even mostly bundled under “we end stratified society and power disparity, or we die”?
uriel238@lemmy.blahaj.zone 18 hours ago
That is a good assessment. Yes.
In fact, the race between capitalist interests to bypass safety and get AI first is entirely about seizing that power so it can be used to hold everyone else hostage.
jsomae@lemmy.ml 14 hours ago
Yeah, fair enough, I do agree that this is largely driven by capitalism, and if we didn’t have a capitalist society, we would hopefully be going about this more cautiously. Still, I feel it’s a unique enough situation that I would consider it its own x-risk.
asdfranger@lemmynsfw.com 1 day ago
ultrakill.wiki.gg/wiki/Lore