Could AI Help Prevent Mass Shootings?

Could machine learning/AI techniques help prevent mass shootings or other kinds of terrorist attacks? That's the question. I do not profess to know the answer, but it's a question that we as a society must seriously consider.

A notable and relatively recent attribute of many mass attacks is that the criminal perpetrators don't only want to kill; they want as large an audience as possible for their murderous activities. They frequently plan their attacks openly on the Internet, even announcing the initiation of their killing sprees online and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.

It's particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. There can be several reasons for this lack of reporting. Some users may be unsure whether or not the posts are serious, and don't want to report someone over a faked attack scenario. Others may want to report but not know where such a situation should be reported. And there may be still other users who are actually urging the perpetrator onward to the maximum possible violence.

"Freedom of speech" and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they actually occur may be viewed as a kind of "big data" problem.

We can relatively easily list some of the factors that would need to be considered.

What level of resources would be required to keep an "automated" watch on at least the public postings and sites most likely to harbor the kinds of discussions and "attack manifestos" of concern? Could tools be developed to help separate false positives such as faked, forged, or other "fantasy" attack postings from the genuine ones? How would these sites be tracked over time as other sites become involved in these operations, and how would such systems be protected against "gaming" attempts intended to divert these tools away from genuine attack planning?
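As a purely illustrative sketch of what such a first-pass filtering tool might look like, a conventional text classifier could assign each public posting a threat-likelihood score. This assumes a hypothetical vetted corpus of labeled historical postings and standard open-source tooling such as scikit-learn, neither of which this piece specifies:

```python
# Purely illustrative sketch of an automated first-pass filter.
# The corpus, labels, and example texts below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed training data: texts of past public postings, labeled 1 if later
# confirmed as genuine attack planning, 0 otherwise (placeholders only).
posts = [
    "example of a posting later confirmed as genuine attack planning",
    "example of an ordinary, unrelated public posting",
]
labels = [1, 0]

# Simple bag-of-words features feeding a probabilistic classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def score_post(text: str) -> float:
    """Return an estimated probability that a posting is a genuine threat."""
    return float(model.predict_proba([text])[0][1])
```

Even a sketch this small makes the real difficulties obvious: the quality of the labeled corpus, the rate of false positives, and adversarial "gaming" would dominate the outcome.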

Obviously, as in many AI-related areas, automated systems would not by themselves be adequate to trigger full-scale alarms. These systems would primarily act as big filters, passing their perceived alerts along to human teams, with those teams making final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
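A minimal sketch of how that division of labor might be organized, again with entirely hypothetical names and an assumed score_post() filter like the one sketched above:

```python
# Minimal triage sketch: the automated system only filters and forwards;
# human teams make every final determination. All names are hypothetical.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # assumed operating point, tuned to limit false alarms

@dataclass
class Alert:
    post_id: str
    text: str
    score: float

def triage(post_id: str, text: str, review_queue: list) -> None:
    score = score_post(text)  # hypothetical classifier from the earlier sketch
    if score >= REVIEW_THRESHOLD:
        # Above threshold: queue for a human analyst team. Referrals to law
        # enforcement would be a human decision, never an automatic one.
        review_queue.append(Alert(post_id, text, score))
    # Below threshold: the automated system takes no action.
```

The essential point of such an arrangement is that the review queue feeds people, not an automatic enforcement mechanism.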

It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.

The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information that is already available to stop these attacks before they actually occur.

To the extent that AI/machine learning technologies, in concert with human analysis and decision-making, may provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.

–Lauren–
