Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content
Submitted 1 month ago by Pro@programming.dev to technology@lemmy.world
https://text.npr.org/nx-s1-5407870
Comments
henfredemars@infosec.pub 1 month ago
Great move for Meta. It’ll let them claim they’re doing something to curb horrid content on the platform without actually doing anything.
DeathsEmbrace@lemm.ee 1 month ago
The marketing behind AI must feel like a runner’s high. “Something has AI”
TransplantedSconie@lemm.ee 1 month ago
Meta:
Here, AI. Watch all the horrible things humans are capable of (and more) for us. Make sure nothing gets through.
AI: becomes SKYNET
fullsquare@awful.systems 1 month ago
moderation on facebook? i’m sure it can be found right next to bigfoot
(other than automated immediate nipple removal)
pelespirit@sh.itjust.works 1 month ago
This might be the one time I’m okay with this. It’s too hard on the humans who had to do this. I hope the AI won’t “learn” to be cruel from it, though, and I don’t trust Meta to handle this gracefully.
chrash0@lemmy.world 1 month ago
pretty common misconception about how “AI” works. models aren’t constantly learning. their weights are frozen before deployment. they can infer from context quite a bit, but they won’t meaningfully change without human intervention (for now)
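A minimal sketch of that point, assuming PyTorch; the tiny `nn.Linear` model is a stand-in for any deployed classifier, not anything Meta actually runs:

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 2)       # stand-in for a trained content classifier
model.eval()                    # inference mode: disables dropout, etc.
for p in model.parameters():
    p.requires_grad = False     # weights are frozen; nothing will update them

with torch.no_grad():           # no gradients are even computed while serving
    scores = model(torch.randn(1, 768))

# Without an explicit training loop (loss.backward(), optimizer.step()),
# the weights after a billion requests are bit-identical to deployment day.
```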
masterofn001@lemmy.ca 1 month ago
I mean, you could hire people who would otherwise enjoy the things they moderate. Keep em from doing shit themselves.
But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.
themurphy@lemmy.ml 1 month ago
My guess is you don’t know how bad it is. These people at Meta have real PTSD, and it would absolutely benefit everyone if this could in any way be automated with AI.
Next question, though: do you trust Meta to moderate? Nah, it should be an independent AI they couldn’t tinker with to also remove everything they just don’t like.
Ulrich@feddit.org 1 month ago
Well hey that actually sounds like a job AI could be good at. Just give it a prompt like “tell me there are no privacy issues because we don’t care” and it’ll do just that!
Treczoks@lemmy.world 1 month ago
Only if they also take full legal responsibility for the AI’s actions.
muusemuuse@lemm.ee 1 month ago
They don’t even take responsibility for things now.
utopiah@lemmy.world 1 month ago
The business model IS dodging any kind of responsibility so… yeah, I think they’ll pass.
philpo@feddit.org 1 month ago
In other news: Meta pays another 3 billion euros for not following the DSA and gets banned in Europe.
melsaskca@lemmy.ca 1 month ago
I think AI is positioned to make better decisions than execs. The money saved would be huge!
mitrosus@discuss.tchncs.de 1 month ago
The money saved goes where?
melsaskca@lemmy.ca 1 month ago
It goes to pay off the debt of all of the nations in the world and will then usher in a new age of peace, obviously.
PattyMcB@lemmy.world 1 month ago
A bold strategy, Cotton
MITM0@lemmy.world 1 month ago
That’s gonna end well 😉
homesweethomeMrL@lemmy.world 1 month ago
Oh man, I may have to stop using this fascist sewer hose.
RizzRustbolt@lemmy.world 1 month ago
Following Tumblr’s lead, I see…
supersquirrel@sopuli.xyz 1 month ago
great idea…!
CosmoNova@lemmy.world 1 month ago
Would be a shame if people had to sift through AI-generated gore before the bots like and comment on it. But seriously, good on them.
wwb4itcgas@lemm.ee 1 month ago
I’ve never had a horse in this race, and I never will - but I’m sure this will work out well for those who do. /s
AstralPath@lemmy.ca 1 month ago
Honestly, I’ve always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
HowAbt2day@futurology.today 1 month ago
Not suitable for Lemmy?
blargle@sh.itjust.works 1 month ago
Not sufficiently fascist leaning. It’s coming, Palantir’s just waiting for the go-ahead…
ouch@lemmy.world 1 month ago
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
beejjorgensen@lemmy.sdf.org 1 month ago
😂😂😂😔
tarknassus@lemmy.world 1 month ago
They will probably use the YouTube model - “you’re wrong and that’s it”.
brorodeo@lemmy.ca 1 month ago
Bsky already does that.
head_socj@midwest.social 1 month ago
Agreed. These jobs are overwhelmingly concentrated in developing nations and pay pathetic wages, too.
towerful@programming.dev 1 month ago
Yup.
It’s a traumatic job/task that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards and care for its employees.
If I were implementing this, I would use a safer/stricter model with a human-backed appeal system.
I would then use some metrics to generate an account reputation (verified ID, interaction with friends network, previous posts/moderation/appeals) and use that to pick one of three policies (see the sketch after this comment): auto-approve AI actions with no appeal (low rep); auto-approve AI actions with human appeal (moderate rep); require human approval of AI actions (high rep).
This way, high reputation accounts can still discuss & raise awareness of potentially moderatable topics as quickly as they happen (think breaking news kinda thing). Moderate reputation accounts can argue their case (in case of false positives). Low reputation accounts don’t traumatize the moderators.
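A minimal sketch of that tiered policy in Python; the reputation formula, weights, thresholds, and field names are all hypothetical stand-ins for the signals named in the comment:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Policy(Enum):
    AUTO_APPLY_NO_APPEAL = auto()    # low rep: AI action stands, no appeal
    AUTO_APPLY_WITH_APPEAL = auto()  # moderate rep: AI action stands, human-backed appeal
    HUMAN_REVIEW_REQUIRED = auto()   # high rep: AI only flags; a human must approve

@dataclass
class Account:
    verified_id: bool
    friend_interactions: int  # interactions within the friends network
    upheld_appeals: int       # past appeals decided in the account's favor
    prior_violations: int     # past posts moderated and upheld

def reputation(acct: Account) -> float:
    """Toy score combining the comment's signals; the weights are made up."""
    score = 30.0 if acct.verified_id else 0.0
    score += min(acct.friend_interactions, 1000) * 0.03
    score += acct.upheld_appeals * 5.0
    score -= acct.prior_violations * 10.0
    return score

def policy_for(acct: Account) -> Policy:
    rep = reputation(acct)
    if rep < 20.0:
        return Policy.AUTO_APPLY_NO_APPEAL
    if rep < 60.0:
        return Policy.AUTO_APPLY_WITH_APPEAL
    return Policy.HUMAN_REVIEW_REQUIRED

# A verified, active account with a clean record gets human review first,
# so a breaking-news post isn't silently removed by a false positive.
print(policy_for(Account(True, 800, 2, 0)))  # Policy.HUMAN_REVIEW_REQUIRED
```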