Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content
Submitted 10 months ago by Pro@programming.dev to technology@lemmy.world
https://text.npr.org/nx-s1-5407870
Comments
melsaskca@lemmy.ca 10 months ago
I think AI is positioned to make better decisions than execs. The money saved would be huge!
mitrosus@discuss.tchncs.de 10 months ago
The money saved goes where?
melsaskca@lemmy.ca 10 months ago
It goes to pay off the debt of all of the nations in the world and will then usher in a new age of peace, obviously.
MITM0@lemmy.world 10 months ago
That’s gonna end well😉
philpo@feddit.org 10 months ago
In other news: Meta pays another 3 billion euros for not following the DSA and gets banned in Europe.
Ulrich@feddit.org 10 months ago
Well hey that actually sounds like a job AI could be good at. Just give it a prompt like “tell me there are no privacy issues because we don’t care” and it’ll do just that!
wwb4itcgas@lemm.ee 10 months ago
I’ve never had a horse in this race, and I never will - but I’m sure this will work out well for those who do. /s
PattyMcB@lemmy.world 10 months ago
A bold strategy, Cotton
homesweethomeMrL@lemmy.world 10 months ago
Oh man, I may have to stop using this fascist sewer hose.
fullsquare@awful.systems 10 months ago
moderation on facebook? i’m sure it can be found right next to bigfoot
(other than automated immediate nipple removal)
TransplantedSconie@lemm.ee 10 months ago
Meta:
Here, AI. Watch all the horrible things humans are capable of, and more, for us. Make sure nothing gets through.
AI: becomes SKYNET
RizzRustbolt@lemmy.world 10 months ago
Following Tumblr’s lead, I see…
AstralPath@lemmy.ca 10 months ago
Honestly, I’ve always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.
towerful@programming.dev 10 months ago
Yup.
It’s a traumatic job/task that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards and care for its employees.

If I were implementing this, I would use a safer/stricter model with a human-backed appeal system. I would then use some metrics to generate an account reputation (verified ID, interaction with friends network, previous posts/moderation/appeals), and use that to either:

- auto-approve AI actions with no appeals (low rep);
- auto-approve AI actions with human appeal (moderate rep); or
- require human approval of AI actions (high rep).

This way, high reputation accounts can still discuss & raise awareness of potentially moderatable topics as quickly as they happen (think breaking-news kind of thing), moderate reputation accounts can argue their case (in case of false positives), and low reputation accounts don’t traumatize the moderators. A rough sketch of this routing is below.
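To make the idea concrete, here’s a minimal sketch of that routing logic. Every threshold, weight, and field name here is a hypothetical illustration, not a real system:

```python
# Hypothetical sketch of reputation-based routing of AI moderation actions.
# All scores, thresholds, and signals are made-up illustrations.
from dataclasses import dataclass

@dataclass
class Account:
    verified_id: bool
    friend_interactions: int   # rough proxy for a friends-network signal
    upheld_appeals: int        # past appeals decided in the user's favor
    past_violations: int

def reputation(acct: Account) -> int:
    """Combine a few signals into a single reputation score."""
    score = 0
    if acct.verified_id:
        score += 30
    score += min(acct.friend_interactions, 40)  # cap the social signal
    score += 10 * acct.upheld_appeals
    score -= 20 * acct.past_violations
    return score

def route_ai_action(acct: Account) -> str:
    """Decide how an AI moderation action is handled for this account."""
    rep = reputation(acct)
    if rep < 20:
        return "auto-approve, no appeal"          # low rep
    elif rep < 60:
        return "auto-approve, human appeal open"  # moderate rep
    else:
        return "hold for human approval"          # high rep

print(route_ai_action(Account(True, 50, 2, 0)))   # -> hold for human approval
```

The point is just the three-way split: reputation only decides who reviews the AI’s call and whether an appeal exists, not what the call is.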
brorodeo@lemmy.ca 10 months ago
Bsky already does that.
head_socj@midwest.social 10 months ago
Agreed. These jobs are overwhelmingly concentrated in developing nations and pay pathetic wages, too.
ouch@lemmy.world 10 months ago
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
tarknassus@lemmy.world 10 months ago
They will probably use the YouTube model - “you’re wrong and that’s it”.
beejjorgensen@lemmy.sdf.org 10 months ago
Or a process to challenge them?
😂😂😂😔
HowAbt2day@futurology.today 10 months ago
Not suitable for Lemmy?
blargle@sh.itjust.works 10 months ago
Not sufficiently fascist leaning. It’s coming, Palantir’s just waiting for the go-ahead…
pelespirit@sh.itjust.works 10 months ago
This might be the one time I’m okay with this. It’s too hard on the humans that do this job. I hope the AI won’t “learn” to be cruel from this, though, and I don’t trust Meta to handle it gracefully.
chrash0@lemmy.world 10 months ago
pretty common misconception about how “AI” works. models aren’t constantly learning. their weights are frozen before deployment. they can infer from context quite a bit, but they won’t meaningfully change without human intervention (for now)
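a minimal PyTorch-flavored sketch of the point, with a stand-in toy model (nothing to do with Meta’s actual stack):

```python
# toy stand-in for a deployed moderation classifier -- illustrative only
import torch
import torch.nn as nn

model = nn.Linear(768, 2)  # pretend this is the deployed model
model.eval()               # inference mode: no dropout/batch-norm updates

for p in model.parameters():
    p.requires_grad = False  # weights frozen: no gradients, no learning

with torch.no_grad():        # no gradient tracking at inference time
    scores = model(torch.randn(1, 768))

# however many inputs pass through, the parameters never change;
# "learning" would require a separate, human-initiated training run
```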
masterofn001@lemmy.ca 10 months ago
I mean, you could hire people who would otherwise enjoy the things they moderate. Keep em from doing shit themselves.
But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.
themurphy@lemmy.ml 10 months ago
My guess is you don’t know how bad it is. These people at Meta have real PTSD, and it would absolutely benefit everyone if this could in any way be automated with AI.
Next question, though: do you trust Meta to moderate? Nah, it should be an independent AI that they couldn’t tinker with to also remove everything they just don’t like.
CosmoNova@lemmy.world 10 months ago
Would be a shame if people had to sift through AI-generated gore before the bots like and comment on it. But seriously, good on them.
henfredemars@infosec.pub 10 months ago
Great move for Meta. It’ll let them claim they’re doing something to curb horrid content on the platform without actually doing anything.
DeathsEmbrace@lemm.ee 10 months ago
The marketing behind AI must feel like a runner’s high. “Something has AI”
supersquirrel@sopuli.xyz 10 months ago
great idea…!
Treczoks@lemmy.world 10 months ago
Only if they also take full legal responsibility for the AI’s actions.
utopiah@lemmy.world 10 months ago
The business model IS dodging any kind of responsibility so… yeah, I think they’ll pass.
muusemuuse@lemm.ee 10 months ago
They don’t even take responsibility for things now.