US immigration enforcement used an AI-powered tool to scan social media posts “derogatory” to the US | “The government should not be using algorithms to scrutinize our social media posts”
I for one welcome… blah blah
Yeah this is scary and honestly why I’m super careful what I put online. Well, mostly.
There are professional social media checkers who will find your hidden/locked social media (and Reddit etc).
They get hired by recruitment agencies or companies who are hiring.
And on the surface it could be to check no one is a secret nazi/chauvinist etc
But I bet there’s secondary data about political leanings or how “appropriate” your friends are.
Or if you’re willing to be a part of the old boys’ club (coke and strippers are fine for execs, but you can’t have a nephew who is in a labor union etc).
ChonkyOwlbear@lemmy.world 1 year ago
At the same time, whenever there is a mass shooting where the killer posted their intent online, people always say “why weren’t the authorities paying attention”.
kromem@lemmy.world 1 year ago
The problem is false positive and negative rates.
We’re on track for some 600-700 mass shooters this year.
The US has 300 million social media users.
So in a given year, roughly 0.00023% of social media users will turn out to be mass shooters.
So even if we had an algorithm that was 99.99% accurate at identifying a potential mass shooter from social media, only about 2% of the people it flagged would actually be mass shooters - a 0.01% false positive rate across 300 million users means roughly 30,000 innocent people flagged for every ~650 real shooters.
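The base-rate arithmetic above can be checked with a few lines of Python. The inputs (300M users, 600-700 shooters, 99.99% accuracy) are the thread's own assumed figures, not real data:

```python
# Positive predictive value of a hypothetical 99.99%-accurate classifier
# applied to the thread's assumed population. "Accuracy" is read here as
# both sensitivity and specificity being 99.99%.

users = 300_000_000            # assumed US social media users
shooters = 650                 # midpoint of the 600-700 estimate
sensitivity = 0.9999           # assumed true-positive rate
false_positive_rate = 0.0001   # assumed: 1 - specificity

true_positives = shooters * sensitivity
false_positives = (users - shooters) * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"total flagged: {true_positives + false_positives:,.0f}")
print(f"chance a flagged user is actually a shooter: {ppv:.2%}")
```

Running this gives roughly 30,650 flagged users and a ~2% chance that any given flagged user is a real shooter - the classic base-rate problem with screening for very rare events.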
So what’s the cost of false positives? Do people flagged by such a system get harassed by law enforcement? If they are sovereign citizen type gun nuts or paranoid schizophrenics, does the additional law enforcement attention potentially instigate shootings or standoffs that wouldn’t have otherwise occurred at a higher rate than the successful prevention of mass shootings?
And what’s the false negative rate? If the algorithm correctly identifies only a small number of mass shooters at a high rate of false positives, while the majority of shooters slip through the cracks as false negatives, then overreliance on the algorithm could also stall progress on alternative solutions (such as advancing legislation banning firearm possession for people with mental health issues).
AI analysis of social media combined with other data sources becomes a more appropriate tool in a situation like “we have three suspects for an active shooting based on multiple other factors - did any of the three have a recent stressor in their life, such as a job loss?” In that case an 80% correct model could be quite helpful.
dgriffith@aussie.zone 1 year ago
I kind of feel that trawling social media looking for the words of potential mass shooters isn’t going to be the thing that solves - or even slows down - the mass shooting problem that the USA has.
Corkyskog@sh.itjust.works 1 year ago
I think there is a huge difference between scanning publicly available text posted to social media in general and an immigration-focused program. A lot of these shooters post very public, manifesto-like comments; in some cases friends and family have even called the police, who took no action. It feels like the police actively ignore this stuff just so they can shrug and protect 2A.