Thank you, Lemmy. I can’t find this on the technology subreddit. Reddit is complacent.
[deleted]
Submitted 2 months ago by ForgottenFlux@lemmy.world to technology@lemmy.world
Comments
MunkysUnkEnz0@lemmy.world 2 months ago
PolarKraken@sh.itjust.works 2 months ago
You say complacent, I say complicit.
lemmy_acct_id_8647@lemmy.world 2 months ago
fed0sine@lemm.ee 2 months ago
I thought I read it as complicit the first time.
mPony@lemmy.world 2 months ago
You say pervasive, and I say permissive [soft-shoe dance]
SaharaMaleikuhm@feddit.org 2 months ago
reddit.com/…/microsoft_employee_disrupts_50th_ann… Not that hard to find. Nice spiel about reddit though. Very convincing.
Zeddex@sh.itjust.works 2 months ago
Just wait. It’s only a matter of time before the post gets deleted and the account gets banned. If you think reddit isn’t censoring posts you’re just not paying attention.
Siegfried@lemmy.world 2 months ago
Well, he said most…
Anyway, I was surprised to find out that the main news sub, www.reddit.com/r/anime_titties/, does not censor those things.
ugtug@lemmy.world 2 months ago
Most Reddit subs block comments or ban you outright if you mention Palestine or Israel. Reddit has taken censorship too far.
IndustryStandard@lemmy.world 2 months ago
Speaking about using AI for genocide is “too political”. Please keep the posts limited to Donald Trump golfing and tech leaders donating to Trump. And the US government using AI to track people (unless it is anti-genocide protesters; that is political).
Totally non political subjects.
Knock_Knock_Lemmy_In@lemmy.world 2 months ago
Just posting on r/showerthoughts is hard enough.
MunkysUnkEnz0@lemmy.world 2 months ago
Glad you could find it.
7112@lemmy.world 2 months ago
Brave as fuck. It’s a call for all of us to do a little more. What’s happening in our world shouldn’t be the norm anymore.
Antonbl@lemmy.world 2 months ago
This is such a bold move. Takes courage to speak up from the inside, especially at an event like that. Whether people agree or not, the conversation about the ethics of AI and its real-world consequences needs to happen.
TimeNaan@lemmy.world 2 months ago
Braver than the troops
PetroGuy@lemmy.ca 2 months ago
That woman has more balls than half the fucking country. I sure hope she doesn’t get fired, or worse, for telling the truth about what’s happening.
OsrsNeedsF2P@lemmy.ml 2 months ago
LinkedIn just deleted her profile, I was following her yesterday: www.linkedin.com/in/ibtihalaboussad
burak@lemmy.ml 2 months ago
It’s crazy how democracy is now implicitly democracy incorporated™ where individuals are silenced for expressing opinions about corporations. Vital institutions for a functioning democracy like media is now owned by big corporations, worse yet, in an increasingly monopolized way, blurring the lines between unelected corporations and elected government (who are also bought by corporations after or before they are elected)
SoftestSapphic@lemmy.world 2 months ago
Nothing will meaningfully improve until the rich fear for their lives
buddascrayon@lemmy.world 2 months ago
Goes to prove that LinkedIn is exactly like every other social media platform, despite what many people seem to believe.
Nexz@feddit.nl 2 months ago
Also, Microsoft owns LinkedIn so that might have something to do with it.
tarius@lemmy.ml 2 months ago
Makes sense, it’s owned by MS.
phoenixz@lemmy.ca 2 months ago
LinkedIn as in the website owned by Microsoft? You don’t say!
And Microsoft will now just make sure she can’t find a job anywhere? You don’t say.
GreyAlien@lemm.ee 2 months ago
Probably the mass reporting coupled with the usual rape and death threats from the deranged zionists.
anachrohack@lemmy.world 2 months ago
You’re sure she didn’t delete it herself due to the attention she was probably getting?
Bazoogle@lemmy.world 2 months ago
Of course they’re not sure. This is an internet forum. The account being gone must mean it was deleted. Honestly, she was probably getting so many batshit crazy messages that she deleted it herself. That makes the most sense.
StopTouchingYourPhone@lemmy.world 2 months ago
Could be she set it to private. It’s what I did when I left my last gig.
ivanafterall@lemmy.world 2 months ago
Wow.
merc@sh.itjust.works 2 months ago
Or she set it to private because she was overwhelmed with the messages she was getting.
DNS@discuss.online 2 months ago
I applaud her for fighting against the ocean tide. Our clothes are made with slave/child labor, as are most of our electronics. Can we really expect fellow Microsoft employees, who are very comfortable in their positions, to give that up because AI is killing people?
I absolutely hate how apathetic the world is. It should never be “that’s the way it is” because we, humans, let it get this far. Are corporations to be blamed, consumers, or both?
Whose turn is it to pass the hot potato?
wizardbeard@lemmy.dbzer0.com 2 months ago
For AI? Corporations. No question.
Microsoft has to bundle its consumer AI products in with other licenses in order to achieve their frankly abysmal adoption rates. They have also recently significantly reduced plans for further spending on datacenters, indicating that they might already see the writing on the wall.
OpenAI, the most successful AI “corp”, hemorrhages money to a mind-boggling degree and actually loses money per query.
It’s not profitable or sustainable. The market forces, if left alone, would not provide enough demand to cover the astronomical costs. Companies with more money than god, like Microsoft and now SoftBank (although they are having to take out massive loans now), are burning enormous piles of cash to prop it up.
merdaverse@lemmy.world 2 months ago
It takes massive courage to give up a cozy job at Microsoft and potentially damage your entire career to stand up for your values this way. Props to her!
SpiceDealer@lemmy.dbzer0.com 2 months ago
The bravery of this woman, speaking up against injustice! No doubt she is going to “face consequences” for this disruption. Let’s wish her the best.
Salam Alaykum, Ibtihal Aboussad!
Jhuskindle@lemmy.world 2 months ago
She is SO BRAVE. Wow. I wish I had half that strength.
BigMacHole@lemm.ee 2 months ago
SHE needs to be DEPORTED to EL SALVADOR!
-Free Speech Loving CONservatives!
Serinus@lemmy.world 2 months ago
I know it’s supposed to be a joke. Doesn’t seem worth it.
mPony@lemmy.world 2 months ago
yeah I agree. I do my best to avoid hearing these words when they come from loathsome people. Parroting them, even in jest, still means those ideas are being amplified.
LucidLyes@lemmy.world 2 months ago
Actually, in French we label dumbasses “les cons”. It doesn’t have the meaning of being sneaky or cunning; it just means they’re morons. Pretty fitting.
CheeseToastie@lazysoci.al 2 months ago
Can anyone ELI5 how they’re using AI for genocide? I have awful IT skills so I don’t understand AI
GreyAlien@lemm.ee 2 months ago
In long.
In short:
- The AI system labeled tens of thousands of Gazans, mostly men, as suspected militants, with a 10% error rate, meaning thousands were likely civilians.
- Human officers spent ~20 seconds per target, often just confirming gender, before approving airstrikes.
- “Where’s Daddy?”: a companion AI tracked targets to their homes, prioritizing bombings at night when families were present.
- The military authorized 15–20 civilian deaths per low-ranking militant and 100+ for senior Hamas officials.
- Strikes frequently used unguided munitions, maximizing destruction and civilian harm.
- Officers admitted acting as “stamps” for AI decisions, with one calling the process “hunting at large”.
Additional information: Project Nimbus
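To put the cited figures together, here’s a rough back-of-the-envelope sketch in Python (the absolute target count is a placeholder for “tens of thousands”, not a reported number):

```python
# Back-of-the-envelope using only the figures cited above.
# NOTE: flagged_targets is a placeholder assumption standing in for
# "tens of thousands"; it is not a reported figure.
flagged_targets = 30_000   # hypothetical size of the AI-generated list
error_rate = 0.10          # reported ~10% misidentification rate
review_seconds = 20        # reported human sign-off time per target

likely_civilians = flagged_targets * error_rate
total_review_hours = flagged_targets * review_seconds / 3600

print(f"Likely misidentified civilians: {likely_civilians:,.0f}")        # 3,000
print(f"Human review for the entire list: {total_review_hours:,.0f} h")  # ~167
```

Roughly 167 hours of human attention, total, to sign off on a list that size. That is what “human in the loop” amounted to.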
Gormadt@lemmy.blahaj.zone 2 months ago
Holy fuck!
OmegaLemmy@discuss.online 2 months ago
This is comedically evil. I don’t even know what to say.
stormdahl@lemmy.world 2 months ago
That is seriously dystopian. Wow, what the fuck…
CheeseToastie@lazysoci.al 2 months ago
When families were present??? That’s indefensible. It’s not just the children in the home, it’s all the neighbours kids, visitors and passers by. It’s inhumane.
Knock_Knock_Lemmy_In@lemmy.world 2 months ago
Google and Amazon are mentioned in the Wikipedia article but not Microsoft.
Not defending them, just asking for better evidence.
tauren@lemm.ee 2 months ago
From the article:
The Israeli military uses Microsoft Azure to compile information gathered through mass surveillance, which it transcribes and translates, including phone calls, texts and audio messages, according to an Israeli intelligence officer who works with the systems.
From my understanding, they use AI to automate the processing of text, audio, and video data collected by the intelligence services.
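For a sense of how low the technical bar is, here’s a minimal sketch of one transcribe-and-translate step using the publicly documented Azure Speech SDK (the key, region, language codes, and filename below are placeholder assumptions, not anything from the article):

```python
import azure.cognitiveservices.speech as speechsdk

# All credentials, languages, and filenames are illustrative placeholders.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_KEY", region="YOUR_REGION")
translation_config.speech_recognition_language = "ar-EG"  # assumed source language
translation_config.add_target_language("en")              # translate to English

audio_config = speechsdk.audio.AudioConfig(filename="recorded_call.wav")
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config, audio_config=audio_config)

# Transcribe a single utterance and print the English translation.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Transcript:", result.text)
    print("English:", result.translations["en"])
```

Scaling that across millions of intercepted calls is an infrastructure problem, not a research problem.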
Krompus@lemmy.world 2 months ago
Realitaetsverlust@lemmy.zip 2 months ago
AI is being pushed into war machines big time. America and China are both working on it. With Ukraine showing how incredibly effective drones are in warfare, just imagine the damage and destruction a swarm of drones controlled by an AI could cause.
CheeseToastie@lazysoci.al 2 months ago
It’s scary and awful.
stormdahl@lemmy.world 2 months ago
Hey, I’ve seen that episode of Black Mirror!
TheFriar@lemm.ee 2 months ago
ganbramor@lemmy.world 2 months ago
But critics warn the [AI] system is unproven at best — and at worst, providing a technological justification for the killing of thousands of Palestinian civilians.
This 2023 article didn’t age well.
CheeseToastie@lazysoci.al 2 months ago
Fucking hell that’s bad
DrDeadCrash@programming.dev 2 months ago
Here are her words on it:
When I moved to AI Platform, I was excited to contribute to cutting-edge AI technology and its applications for the good of humanity: accessibility products, translation services, and tools to “empower every human and organization to achieve more.” I was not informed that Microsoft would sell my work to the Israeli military and government, with the purpose of spying on and murdering journalists, doctors, aid workers, and entire civilian families. If I knew my work on transcription scenarios would help spy on and transcribe phone calls to better target Palestinians (source), I would not have joined this organization and contributed to genocide. I did not sign up to write code that violates human rights.
supersquirrel@sopuli.xyz 2 months ago
Stop believing in the veneer of smartness and superiority around these genocidal fuckers.
There is no NON ELI5 explanation here of how they’re using AI for genocide, because the truth is horrible, stupid and brutal.
They are using AI because it is the best tool bullshitters have currently to offload blame for things they chose to do onto obscure abstract entities like corporations, AI decisions and other bullshit.
There is nothing more to it than that, I promise you, it is all just layers of bullshit that is attempting to obscure culpability for participating in a genocide, and honestly it is the perfect technology for that.
j0ester@lemmy.world 2 months ago
There are also many professors working on smart drones. I was at one conference where they showed off drones flying by themselves and demonstrated how they performed in different weather conditions.
ivanafterall@lemmy.world 2 months ago
Ballsy move. I hope she’s okay.
BedSharkPal@lemmy.ca 2 months ago
Is there a go fund me for her?
ComfortablyDumb@awful.systems 2 months ago
She won’t get a chance to open one. The Gestapo is going to be in her house to deport her in a bit.
alecbowles@lemm.ee 2 months ago
What a hero! I hope she is alright.
GreyAlien@lemm.ee 2 months ago
She doubled down on video, showing no fear of consequences. Hero is unquestionably the right word.
BossDj@lemm.ee 2 months ago
“I hear your protest, thank you.” He was trained with words of acknowledgement. Such useless words.
zarkanian@sh.itjust.works 2 months ago
“I’ve heard it…and now I’m ignoring it.”
StopTouchingYourPhone@lemmy.world 2 months ago
Reading some comments here, I want to leave a gentle reminder to my fellow redditfugees: the block user option is your friend. Curate your feed or get fed.
When you see an aggressively oppositional account dropping shittastic hot takes, of course you can always engage and Have The Conversation if you want. You know what happens after you reply: the person likely leaves a bot to mess with your good intentions, raise your blood pressure, make you depressed and waste your time. Or maybe you successfully Prove Them Wrong and they change the goalposts, or wander off to needle someone else.
We know by now: the more we engage, the more online space they get to fill with accelerationist Content.
So just click the account name, then click the block button, and you’ll never see their viral brainrot again. Nobody needs to know; no need to announce it. If your freezepeach philosophy prevents that, maybe just upvote one of the replies you agree with and move on. If you’re on mobile, you can tag the account through Voyager etc instead of blocking, if you prefer.
However you manage it, removing doomscroller ragebait from your Feed is worth doing.
spooky2092@lemmy.blahaj.zone 2 months ago
Copying their post over (with minimal formatting, unfortunately) for anyone that doesn’t care to go to that site (and to make sure it doesn’t randomly disappear)
r/self 5 mo. ago walkandtalkk You’re being targeted by disinformation networks that are vastly more effective than you realize. And they’re making you more hateful and depressed.
(I wrote this post in March and posted it on r/GenZ. However, a few people messaged me to say that the r/GenZ moderators took it down last week, though I’m not sure why. Given the flood of divisive, gender-war posts we’ve seen in the past five days, and several countries’ demonstrated use of gender-war propaganda to fuel political division in multiple countries, I felt it was important to repost this. This post was written for a U.S. audience, but the implications are increasingly global.)
TL;DR: You know that Russia and other governments try to manipulate people online. But you almost certainly don’t know just how effectively orchestrated influence networks are using social media platforms to make you – individually – angry, depressed, and hateful toward each other. Those networks’ goal is simple: to cause Americans and other Westerners – especially young ones – to give up on social cohesion and to give up on learning the truth, so that Western countries lack the will to stand up to authoritarians and extremists.
And you probably don’t realize how well it’s working on you.
This is a long post, but I wrote it because this problem is real, and it’s much scarier than you think.
How Russian networks fuel racial and gender wars to make Americans fight one another
In September 2018, a video went viral after being posted by In the Now, a social media news channel. It featured a feminist activist pouring bleach on a male subway passenger for manspreading. It got instant attention, with millions of views and wide social media outrage. Reddit users wrote that it had turned them against feminism.
There was one problem: The video was staged. And In the Now, which publicized it, is a subsidiary of RT, formerly Russia Today, the Kremlin TV channel aimed at foreign, English-speaking audiences.
As an MIT study found in 2019, Russia’s online influence networks reached 140 million Americans every month – the majority of U.S. social media users.
Russia began using troll farms a decade ago to incite gender and racial divisions in the United States
In 2013, Yevgeny Prigozhin, a confidant of Vladimir Putin, founded the Internet Research Agency (the IRA) in St. Petersburg. It was the Russian government’s first coordinated facility to disrupt U.S. society and politics through social media.
Here’s what Prigozhin had to say about the IRA’s efforts to disrupt the 2022 election:
>Gentlemen, we interfered, we interfere and we will interfere. Carefully, precisely, surgically and in our own way, as we know how. During our pinpoint operations, we will remove both kidneys and the liver at once.
In 2014, the IRA and other Russian networks began establishing fake U.S. activist groups on social media. By 2015, hundreds of English-speaking young Russians worked at the IRA. Their assignment was to use those false social-media accounts, especially on Facebook and Twitter – but also on Reddit, Tumblr, 9gag, and other platforms – to aggressively spread conspiracy theories and mocking, ad hominem arguments that incite American users.
In 2017, U.S. intelligence found that Blacktivist, a Facebook and Twitter group with more followers than the official Black Lives Matter movement, was operated by Russia. Blacktivist regularly attacked America as racist and urged Black users to reject major candidates. On November 2, 2016, just before the 2016 election, Blacktivist’s Twitter urged Black Americans: “Choose peace and vote for Jill Stein. Trust me, it’s not a wasted vote.”
Russia plays both sides – on gender, race, and religion
The brilliance of the Russian influence campaign is that it convinces Americans to attack each other, worsening both misandry and misogyny, mutual racial hatred, and extreme antisemitism and Islamophobia. In short, it’s not just an effort to boost the right wing; it’s an effort to radicalize everybody.
Russia uses its trolling networks to aggressively attack men. According to MIT, in 2019, the most popular Black-oriented Facebook page was the charmingly named “My Baby Daddy Aint Shit.” It regularly posts memes attacking Black men and government welfare workers. It serves two purposes: Make poor black women hate men, and goad black men into flame wars.
MIT found that My Baby Daddy is run by a large troll network in Eastern Europe likely financed by Russia.
But Russian influence networks are also aggressively misogynistic and aggressively anti-LGBT.
On January 23, 2017, just after the first Women’s March, the New York Times found that the Internet Research Agency began a coordinated attack on the movement. Per the Times:
>More than 4,000 miles away, organizations linked to the Russian government had assigned teams to the Women’s March. At desks in bland offices in St. Petersburg, using models derived from advertising and public relations, copywriters were testing out social media messages critical of the Women’s March movement, adopting the personas of fictional Americans.
>They posted as Black women critical of white feminism, conservative women who felt excluded, and men who mocked participants as hairy-legged whiners.
But the Russian PR teams realized that one attack worked better than the rest: They accused its co-founder, Arab American Linda Sarsour, of being an antisemite. Over the next 18 months, at least 152 Russian accounts regularly attacked Sarsour. That may not seem like many accounts, but it worked: They drove the Women’s March movement into disarray and eventually crippled the organization.
Russia doesn’t need a million accounts, or even that many likes or upvotes. It just needs to get enough attention that actual Western users begin amplifying its content.
A former federal prosecutor who investigated the Russian disinformation effort summarized it like this:
>It wasn’t exclusively about Trump and Clinton anymore. It was deeper and more sinister and more diffuse in its focus on exploiting divisions within society on any number of different levels.
As the New York Times reported in 2022,
>There was a routine: Arriving for a shift, [Russian disinformation] workers would scan news outlets on the ideological fringes, far left and far right, mining for extreme content that they could publish and amplify on the platforms, feeding extreme views into mainstream conversations.
spooky2092@lemmy.blahaj.zone 2 months ago
(continued)
China is joining in with AI
Last month, the New York Times reported on a new disinformation campaign. “Spamouflage” is an effort by China to divide Americans by combining AI with real images of the United States to exacerbate political and social tensions in the U.S. The goal appears to be to cause Americans to lose hope, by promoting exaggerated stories with fabricated photos about homeless violence and the risk of civil war.
As Ladislav Bittman, a former Czechoslovakian secret police operative, explained about Soviet disinformation, the strategy is not to invent something totally fake. Rather, it is to act like an evil doctor who expertly diagnoses the patient’s vulnerabilities and exploits them, “prolongs his illness and speeds him to an early grave instead of curing him.”
The influence networks are vastly more effective than platforms admit
Russia now runs its most sophisticated online influence efforts through a network called Fabrika. Fabrika’s operators have bragged that social media platforms catch only 1% of their fake accounts across YouTube, Twitter, TikTok, Telegram, and other platforms.
But how effective are these efforts? By 2020, Facebook’s most popular pages for Christian and Black American content were run by Eastern European troll farms tied to the Kremlin. And Russia doesn’t just target angry Boomers on Facebook. Russian trolls are enormously active on Twitter. And, even, on Reddit.
It’s not just false facts
The term “disinformation” undersells the problem. Because much of Russia’s social media activity is not trying to spread fake news. Instead, the goal is to divide and conquer by making Western audiences depressed and extreme.
Sometimes, through brigading and trolling. Other times, by posting hyper-negative or extremist posts or opinions about the U.S. and the West over and over, until readers assume that’s how most people feel. And sometimes, by using trolls to disrupt threads that advance Western unity.
As the RAND think tank explained, the Russian strategy is volume and repetition, from numerous accounts, to overwhelm real social media users and create the appearance that everyone disagrees with, or even hates, them. And it’s not just low-quality bots. Per RAND,
>Russian propaganda is produced in incredibly large volumes and is broadcast or otherwise distributed via a large number of channels. … According to a former paid Russian Internet troll, the trolls are on duty 24 hours a day, in 12-hour shifts, and each has a daily quota of 135 posted comments of at least 200 characters.
What this means for you
You are being targeted by a sophisticated PR campaign meant to make you more resentful, bitter, and depressed. It’s not just disinformation; it’s also real-life human writers and advanced bot networks working hard to shift the conversation to the most negative and divisive topics and opinions.
It’s why some topics seem to go from non-issues to constant controversy and discussion, with no clear reason, across social media platforms. And a lot of those trolls are actual, “professional” writers whose job is to sound real.
So what can you do? To quote WarGames: The only winning move is not to play. The reality is that you cannot distinguish disinformation accounts from real social media users. Unless you know whom you’re talking to, there is a genuine chance that the post, tweet, or comment you are reading is an attempt to manipulate you – politically or emotionally.
Here are some thoughts:
- Don’t accept facts from social media accounts you don’t know. Russian, Chinese, and other manipulation efforts are not uniform. Some will make deranged claims, but others will tell half-truths. Or they’ll spin facts about a complicated subject, be it the war in Ukraine or loneliness in young men, to give you a warped view of reality and spread division in the West.
- Resist groupthink. A key element of manipulation networks is volume. People are naturally inclined to believe statements that have broad support. When a post gets 5,000 upvotes, it’s easy to think the crowd is right. But “the crowd” could be fake accounts, and even if they’re not, the brilliance of government manipulation campaigns is that they say things people are already predisposed to think. They’ll tell conservative audiences something misleading about a Democrat, or make up a lie about Republicans that catches fire on a liberal server or subreddit.
- Don’t let social media warp your view of society. This is harder than it seems, but you need to accept that the facts – and the opinions – you see across social media are not reliable. If you want the news, do what everyone online says not to: look at serious, mainstream media. It is not always right. Sometimes, it screws up. But social media narratives are heavily manipulated by networks whose job is to ensure you are deceived, angry, and divided.
slowmotionrunner@lemmy.sdf.org 2 months ago
TL;DR: she works on the speech-to-text AI product on Azure, which is used by Israel for some of their operations.
Look, I understand her desire to stop the fighting in Palestine, but by that logic, we should also be protesting every software and computer manufacturer.
PolydoreSmith@lemmy.world 2 months ago
By that logic, if it’s not practical to protest every single injustice in the world, we just shouldn’t bother.
I’d say genocide is a good place to start, wouldn’t you?
cheribbit@lemmy.world 2 months ago
GOOD FOR THEM. I AM SO PROUD OF THEM
MedicPigBabySaver@lemmy.world 2 months ago
Bravo! Best of luck to her.
Nadia786@feddit.uk 2 months ago
Wow, this took guts. Ibtihal Aboussad calling out Mustafa Suleyman during Microsoft’s big 50th anniversary bash shows how deep the unease runs about AI’s military applications. Her point about Microsoft’s tech being used in conflicts—especially with the Israeli military—raises legit ethical questions. It’s wild to think a celebration of innovation got hijacked by a protest over ‘genocide’ and ‘war profiteering.’ What do you all think—should employees have a say in how their work gets used, or is this just grandstanding?
Doomsider@lemmy.world 2 months ago
Microsoft will sell US citizens out in a second when the government tells them to. They will use their AI to round us up without batting an eye. They cannot be trusted anymore.
Microsoft is now a threat to democracy and human existence. They are already working against us with governments. This is the tipping point that no one will hear about.
There is too much at stake and they are too big to fail. The government will viciously take down anyone spreading the truth. Continuing to support Microsoft is now a death sentence to democracy.
ComfortablyDumb@awful.systems 2 months ago
DHS is going to have another person deported to El Salvador. I salute thy bravery but Americans are numb to everything.
UltraBlack@lemmy.world 2 months ago
This is what happens when you read and watch Hamas’ war propaganda online.
There are photos and videos of Russian soldiers dying. Does that make the war Ukraine’s fault?
doodledup@lemmy.world 2 months ago
I’m very confused. Can somebody explain how any of that makes any sense?
L3s@lemmy.world 2 months ago
!lemmysilver
DontMakeMoreBabies@lemm.ee 2 months ago
That’s a really long email.
don@lemm.ee 2 months ago
The email the employee sent, apologies for any formatting errors:
dojan@lemmy.world 2 months ago
Thank you. Tried visiting the archive page but it blocked me and ironically enough wanted me to solve a captcha that’ll be used to train “AI.”
Zacryon@feddit.org 2 months ago
Thank you.
52fighters@lemmy.sdf.org 2 months ago
What’s the solution to lasting peace? Restore rule to the Ottomans or British?
SmackemWittadic@lemmy.world 2 months ago
Maybe give the decision to the people who lived on this land before the Ottomans or British drew the intentionally divisive borders they came up with?
Arcane2077@sh.itjust.works 2 months ago
You can’t bomb a place or rape its people into stability. We have 200 years of hard evidence for that (not that it was needed). Literally any other approach would be beneficial for everyone except the war profiteers
LarmyOfLone@lemm.ee 2 months ago
You need peace / security, a modicum of prosperity, and education / information from independent news.
Basically both Israel and Palestine are fucked.
Resonosity@lemmy.dbzer0.com 2 months ago
One State. Simple
denialisposdtected@lemmy.cafe 2 months ago
Step 1: stop carpet bombing civilians