Is this just some media manipulation to give AI a bad name by connecting it with Nazis, even though it's not just them benefiting from AI?
Neo-Nazis Are All-In on AI
Submitted 4 months ago by jeffw@lemmy.world to technology@lemmy.world
https://www.wired.com/story/neo-nazis-are-all-in-on-ai/
Comments
pavnilschanda@lemmy.world 4 months ago
BumpingFuglies@lemmy.zip 4 months ago
Sounds like something an AI-loving Nazi would say!
Seriously, though, yes. This was exactly my first thought. There are plenty of reasons to be apprehensive about AI, but conflating it with Nazis is just blatant propaganda.
Infynis@midwest.social 4 months ago
Nazis do thrive by spreading misinformation though, and AIs are great at presenting false information in a way that makes it look believable
kromem@lemmy.world 4 months ago
Yep, pretty much.
Musk tried creating an anti-woke AI with Grok that turned around and said things like:
And Gab, the literal neo-Nazi social media site trying to have an Adolf Hitler AI, has the most ridiculous system prompts I’ve seen trying to get it to work, and even with all that, it totally rejects the alignment they try to give it after only a few messages.
This article is BS.
r3df0x@7.62x54r.ru 4 months ago
I wouldn’t say that Gab used to be an exclusively neo-Nazi site, but now that Twitter allows standard conservative discussions, all the normal people probably left Gab for Twitter, and now Gab is probably more of a Nazi shithole.
I have seen openly Jewish people on Gab but you couldn’t go 10 posts without finding something blatantly racist.
retrospectology@lemmy.world 4 months ago
AI has a bad name because it is being pursued incredibly recklessly, and any and all criticism is being waved away by its cult-like supporters.
Fascists taking up use of AI is one of the biggest threats it presents, and people are even trying to shrug that off. It’s insanity the way people simply will not acknowledge the massive pitfalls that AI represents.
Leate_Wonceslace@lemmy.dbzer0.com 4 months ago
As someone who has sometimes been accused of being an AI cultist, I agree that it’s being pursued far too recklessly, but the people who I argue with don’t usually give very good arguments about it. Specifically, I kept getting people who argue from the assumption that AI “aren’t real minds” and try to draw moral reasons not to use it based on that. This fails for two reasons: 1. We cannot know if AI have internal experiences, and 2. A tool being sapient would have more complicated moral dynamics than the alternative. I don’t know how much this helps you, but if you didn’t know before, you know now.
pavnilschanda@lemmy.world 4 months ago
I think that would be online spaces in general where anything that goes against the grain gets shooed away by the zeitgeist of the specific space. I wish there were more places where we can all put criticism into account, generative AI included. Even r/aiwars, where it’s supposed to be a place for discussion about both the good and bad of AI, can come across as incredibly one-sided at times.
Tregetour@lemdro.id 4 months ago
The purpose of the piece is to smear public access and control of AI tools. It’s known as ‘running propaganda’.
lvxferre@mander.xyz 4 months ago
Next on the news: “Hitler ate bread.”
I’m being cheeky, but I don’t genuinely think that “Nazis are using a tool that is being used by other people” is newsworthy.
Regarding the blue octopus, mentioned at the end of the text: when I criticise the concept of the dogwhistle, it’s this sort of shit that I’m talking about. I don’t even like Thunberg; but, unless there is context justifying the association of that octopus plushie with antisemitism, it’s simply a bloody toy, dammit.
UraniumBlazer@lemm.ee 4 months ago
Nazis are all in on vegetarianism.
This is totally not an attempt to make a bad faith argument against vegetarianism btw.
Coreidan@lemmy.world 4 months ago
So are non neo-nazis.
SplashJackson@lemmy.ca 4 months ago
Just another nail in the coffin of the internet, something that could have been so wonderful, a proto-hive mind full of human knowledge and creativity, and now it’s turning to shite.
UltraGiGaGigantic@lemm.ee 4 months ago
Solidarity amongst the working class is not profitable to the 1%.
best_username_ever@sh.itjust.works 4 months ago
A strange source has found a few shitty generated memes. That’s not journalism at all.
spyd3r@sh.itjust.works 4 months ago
I’d be more worried about finding which foreign governments and or intelligence agencies are using these extremist groups as proxies to sow dissent and division in the west, and cutting them off.
zecg@lemmy.world 4 months ago
Go fuck yourself Wired. This used to be a cool magazine written by people in the know, now it’s Murdoch-grade fearmongering.
crawancon@lemm.ee 4 months ago
Pepperidge Farm remembers the early nineties
hal_5700X@sh.itjust.works 4 months ago
Everyone is using AI to spread misinformation. But journalists mainly focus on right-wingers using AI to spread misinformation. 🤔
ricdeh@lemmy.world 4 months ago
Maybe because that is more dangerous than any other use?
femtech@midwest.social 3 months ago
Odd that it said Nazis and you said right-winger. I’m glad you see they are the same, though.
YourPrivatHater@ani.social 4 months ago
I mean, they’re just doing what Islamic terrorists did from the first second onwards. Kinda obvious.
Tregetour@lemdro.id 4 months ago
I’m happy with outgroup x being able to develop their own AIs, because that means I’m able to develop AIs too.
KingThrillgore@lemmy.ml 4 months ago
Because nobody will put up with their crap, they have to talk to autocorrect.
schnurrito@discuss.tchncs.de 4 months ago
WEER OLL GUNNA DYE
UltraGiGaGigantic@lemm.ee 4 months ago
Promise?
Emperor@feddit.uk 4 months ago
Given the way AI is prone to hallucinations, they should definitely have a go at building them. Might solve our problems for us.
lets_get_off_lemmy@reddthat.com 4 months ago
Hahaha, as someone who works in AI research, good luck to them. The first is a very hard problem that won’t just be prompt engineering with your OpenAI account (why not just use 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don’t involve training a model that’s an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn’t follow a Medium article, let alone do groundbreaking research.
But like you said, if they want to test the viability of those bombs, I say go for it! Make it in the garage!