- cross-posted to:
- hackernews@lemmy.smeargle.fans
- hackernews@derp.foo
US immigration enforcement used an AI-powered tool to scan for social media posts “derogatory” to the US | “The government should not be using algorithms to scrutinize our social media posts”
At the same time, whenever there is a mass shooting where the killer posted their intent online, people always say “why weren’t the authorities paying attention”.
The problem is false positive and negative rates.
We’re on track for some 600-700 mass shooters this year.
The US has 300 million social media users.
So in a given year, 0.00023% of social media users will turn out to be mass shooters.
So even if we had an algorithm that was 99.99% accurate, it would flag roughly 30,000 innocent users alongside the ~650 real shooters, meaning only about 2% of flagged people would actually be a threat. At a more realistic 99.9% accuracy, that drops to around 0.2%.
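To make the base-rate problem concrete, here's a small sketch of the arithmetic. The numbers (~650 shooters, 300 million users) are the ones from the comment above; the accuracy figures are illustrative:

```python
# Base-rate sketch: why even a very "accurate" classifier drowns in
# false positives when the thing it's looking for is vanishingly rare.
def flag_precision(shooters, users, sensitivity, specificity):
    """Fraction of flagged users who are actual shooters (precision)."""
    true_pos = shooters * sensitivity
    false_pos = (users - shooters) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

shooters = 650
users = 300_000_000

# 99.99% accurate both ways: ~650 true flags vs ~30,000 false ones.
p1 = flag_precision(shooters, users, 0.9999, 0.9999)
# 99.9% accurate: false flags grow tenfold to ~300,000.
p2 = flag_precision(shooters, users, 0.999, 0.999)

print(f"{p1:.1%}")  # → 2.1%
print(f"{p2:.1%}")  # → 0.2%
```

The precision collapses not because the model is bad but because the base rate (about 2 in a million) is so low that false positives vastly outnumber true ones.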
So what’s the cost of false positives? Do people flagged by such a system get harassed by law enforcement? If they are sovereign citizen type gun nuts or paranoid schizophrenics, does the additional law enforcement attention potentially instigate shootings or standoffs that wouldn’t have otherwise occurred at a higher rate than the successful prevention of mass shootings?
And what’s the false negative rate? If the algorithm correctly identifies only a small number of mass shooters at a high cost in false positives, while the majority of shooters slip through the cracks as false negatives, then overreliance on the algorithm could also stall progress toward alternative solutions (such as advancing legislation banning firearm possession for people with mental health issues).
AI analysis of social media combined with other data sources becomes a more appropriate tool in a situation like “we have three suspects based on multiple other factors for who is an active shooter - did any of the three have a recent stressor in their life such as a job loss?” In that case an 80% correct model could be quite helpful.
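The reason the three-suspects case works is that the prior pool is tiny, so even a mediocre signal shifts the odds a lot. A hedged sketch of that update, with made-up stressor rates (the 80%/20% figures are illustrative assumptions, not from the comment):

```python
# Bayesian update over a small suspect pool: with only three candidates,
# an imperfect signal meaningfully narrows things down.
def posterior(priors, likelihoods):
    """Normalize prior * likelihood over the candidate pool."""
    weights = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(weights)
    return [w / total for w in weights]

priors = [1 / 3, 1 / 3, 1 / 3]  # three equally plausible suspects

# Suppose the model reports a recent stressor only for suspect 0, and
# (by assumption) it reports one for the true shooter 80% of the time
# and for a non-shooter 20% of the time.
likelihoods = [0.8, 0.2, 0.2]

print(posterior(priors, likelihoods))  # suspect 0 jumps from ~33% to ~67%
```

The same 80% model that's useless against 300 million people doubles the odds on the right suspect when the pool is three.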
I kind of feel that trawling social media looking for the words of potential mass shooters isn’t going to be the thing that solves - or even slows down - the mass shooting problem that the USA has.
I think there is a huge difference between scanning publicly available posts in general and the immigration-focused scanning in the article. A lot of these shooters post very public, manifesto-style comments; friends and family have even called the police in some cases, and the police took no action. It feels like the police actively ignore this stuff just so they can shrug and protect the 2A.
A number of these could have easily been stopped.
The real question is how many people post shit like that but then don’t go on to hurt anyone.
I’m going to be a little glib here: just fix this part and you won’t need to scan social media posts.
Also, once this is in place you’ll find that the majority of perpetrators, the ones who plan things out, won’t post super incriminating things beforehand, and their generally disturbed posts will be lost in the sea of general discontent flagged by an algorithm trying to sift the wheat from the chaff.