A Proposal for Dealing with Terrorist Videos on the Internet

(Original posting date: 21 December 2015)

As part of the ongoing attempts by politicians around the world to falsely demonize the Internet as a fundamental cause of (or at least a willing partner in) the spread of radical terrorist ideologies, arguments have tended to focus along two parallel tracks.

First is the notorious “We have to do something about evil encryption!” track. This is the dangerously loony “backdoors into encryption for law enforcement and intelligence agencies” argument, which would result in the bad guys having unbreakable crypto, while honest citizens would have their financial and other data made vastly more vulnerable than ever to attacks by black hat hackers. That this argument is made by governments that have repeatedly proven themselves incapable of protecting citizens’ data in their own databases makes this line of “reasoning” all the more laughable. More on this at:

Why Governments Lie About Encryption Backdoors

The other track in play relates to an area where there is much more room for reasoned discussion — the presence on the Net of vast numbers of terrorist-related videos, particularly the ones that directly promote violent attacks and other criminal acts.

Make no mistake about it, there are no “magic wand” solutions to be found for this problem, but perhaps we can move the ball in a positive direction with some serious effort.

Both policy and technical issues must be in focus.

In the policy realm, all legitimate Web firms already have Terms of Service (ToS) of some sort, most of which (in one way or another) already prohibit videos that directly attempt to incite violent attacks or display actual acts such as beheadings (and, for example, violence to people and animals in non-terrorism contexts). How to more effectively enforce these terms I’ll get to in a moment.

When we move beyond such directly violent videos, the analysis becomes more difficult, because we may be looking at videos that discuss a range of philosophical aspects of radicalism (international, domestic, or both, and sometimes related to hate groups that are not explicitly religious). Often these videos do not make the kinds of direct, explicit calls to violence that we see in the category of videos discussed just above.

Politicians tend to promote the broadest possible censorship laws they can get away with, so censorship tends to be a slippery slope: it starts off narrowly and rapidly expands beyond the originally targeted types of speech.

We must also keep in mind that censorship per se is solely a government power — they’re the ones with the prison cells and shackles to seriously enforce their edicts. The Terms of Service rules promulgated by Web services are independent editorial judgments regarding what they do or don’t wish to host on their facilities.

My view is that it would be a lost cause, and potentially a dangerous infringement on basic speech and civil rights, to attempt the eradication from the Net of videos in the second category I noted — the ones basically promoting a point of view without explicitly promoting or displaying violent acts. It would be all too easy for such attempts to morph into broader, inappropriate controls on speech. And frankly, it’s very important that we be able to see these videos so that we can analyze and prepare for the philosophies being so promoted.

The correct way to fight this class of videos is with our own information, of course. We should be actively explaining why (for example) ISIL/ISIS/IS/Islamic State/Daesh philosophies are the horrific lies of a monstrous death cult.

Yes, we should be doing this effectively and successfully. And we could, if we put sufficient resources and talent behind such information efforts. Unfortunately, Western governments in particular have shown themselves to be utterly inept in this department to date.

Have you seen any of the current ISIL recruitment videos? They’re colorful, fast-paced, energetic, and incredibly professional. Absolutely state of the art 21st century propaganda aimed at young people.

By contrast, Western videos that attempt to push back against these groups seem more on the level of the boring health education slide shows we were shown in class back when I was in elementary school.

Small wonder that we’re losing this information war. This is something we can fix right now, if we truly want to.

As for that other category of videos (the directly violent and violence-inciting ones that most of us would agree have no place in the public sphere, whether they involve terrorist assassinations or perverts crushing kittens), the technical issues involved are anything but trivial.

The foundational issue is that immense amounts of video are being uploaded to services like YouTube (and now Facebook and others) at incredible rates that make any kind of human “previewing” of materials before publication entirely impractical, even if there were agreement (which there certainly is not) that such previewing was desirable or appropriate.
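To get a sense of that scale, here is a rough back-of-envelope sketch in Python, assuming the figure of roughly 400 hours of video uploaded per minute that YouTube was publicly citing around this time (an assumption for illustration, not a measured number):

```python
# Back-of-envelope estimate of why human "previewing" can't scale.
# ASSUMPTION: roughly 400 hours of video uploaded per minute, the figure
# YouTube was publicly citing around 2015.

UPLOAD_HOURS_PER_MINUTE = 400
MINUTES_PER_DAY = 60 * 24

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY

# A reviewer watching in real time covers at most 8 hours of video per shift.
REVIEWER_HOURS_PER_SHIFT = 8
reviewers_needed = hours_uploaded_per_day / REVIEWER_HOURS_PER_SHIFT

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"~{reviewers_needed:,.0f} full shifts of real-time viewing per day")
# 576,000 hours uploaded per day; ~72,000 reviewer shifts per day,
# every single day, just to watch each video once before publication.
```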

Services like Google’s YouTube run a variety of increasingly sophisticated automated systems to scan for various content potentially violating their ToS, but these systems are not magical in nature, and a great deal of material slips through and can stay online for long periods.

A main reason for this is that uploaders attempting to subvert the system (e.g., by uploading movies and TV shows to which they have no rights, but that they hope to monetize anyway) employ a vast range of techniques to try to prevent their videos from being detected by YouTube’s systems. Some of these methods leave the results looking far worse than an old VHS tape, but the point is that a continuing game of whack-a-mole is inevitable, even with continuing improvements in these systems, especially considering that false positives must be avoided as well.

These facts tend to render nonsensical recent claims by some (mostly nontechnical) observers that it would be “simple” for services like YouTube to automatically block “terrorist” videos, in the manner that various major services currently detect child porn images. One major difference is that those still images are detected via data “fingerprinting” techniques that work well when comparing known still images against a database of previously identified material, but such techniques are far less useful outside that realm, especially for videos of varied origins that are routinely manipulated by uploaders specifically to avoid detection. Two completely different worlds.
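To make that distinction concrete, here is a toy Python sketch of difference hashing (“dHash”), the general family of perceptual fingerprinting techniques used to match known still images against a database. This is purely my own illustration, not any service’s actual system, and the hash database and distance threshold below are hypothetical:

```python
# Toy difference-hash ("dHash") fingerprint, illustrating the general family
# of techniques used to match KNOWN still images against a database.
# Illustration only, not any service's actual system.

def dhash(pixels):
    """pixels: an 8x9 grid (8 rows of 9 grayscale values).
    Compares each pixel to its right neighbor, yielding a 64-bit fingerprint."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming_distance(a, b):
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of previously identified images.
KNOWN_HASHES = {0x123456789ABCDEF0}

def matches_known_image(candidate_hash, max_distance=10):
    # Small edits (re-encoding, mild noise) flip only a few bits, so a
    # lookup within a small Hamming distance still succeeds.
    return any(hamming_distance(candidate_hash, k) <= max_distance
               for k in KNOWN_HASHES)

# Why video is so much harder: an uploader who crops, mirrors, re-times,
# recolors, or overlays text shifts most of the bits in every frame, so
# fingerprints of previously unseen edits simply don't match the database.
```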

So are there practical ways to at least help to limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts?

I believe there are.

First — and this would seem rather elementary — video viewers need to know that they even have a way to report an abusive video. And that mechanism shouldn’t be hidden!

For example, on YouTube currently, there is no obvious “abuse reporting” flag. You need to know to look under the nebulous “More” link, and also realize that the choice under there labeled “Report” includes abuse situations.

User Interface Psychology 101 tells us that if viewers don’t see an abuse reporting choice clearly present when viewing a video, it won’t even occur to many of them that reporting an abusive video is possible, so they’re unlikely to go digging around under “More” or anything else to find such a reporting system.

A side effect of my recommendation to make an obvious and clear abuse reporting link visible on the main YouTube play page (and similarly placed for other video services) would be the likelihood of a notable increase in the number of abuse reports, both accurate and not. (I suspect that the volume of reports may have been a key reason that abuse links have been increasingly “hidden” on these services’ interfaces over time).

This is not an inconsequential problem. Significant increases in abuse reports could swamp human teams working to evaluate them and to make the often complicated “gray area” determinations about whether or not a given reported video should stay online. Again, we’re talking about a massive scale of videos.

So there’s also a part two to my proposal.

I suggest that consideration be given to using volunteer or paid, “crowdsourced” populations of Internet users, on a scale large enough to average out variations in cultural attitudes across locales, to act as an initial “filter” for specific classes of abuse reports regarding publicly available videos.

There are all kinds of complicated and rather fascinating details in designing a system like this so that it works properly and fairly while avoiding misuse. But the bottom line would be to help reduce to manageable levels the number of abuse reports that typically reach the service provider teams, especially if significantly more reports were being made, and these teams would still be the only individuals who could actually choose to take specific reported videos offline.
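Purely for illustration, here is a minimal Python sketch of how such a first-pass crowd filter might be structured. The reviewer counts, threshold, and names are my own assumptions, not a worked-out design:

```python
# Minimal sketch of a crowdsourced first-pass filter for abuse reports.
# ASSUMPTIONS: reviewer counts, threshold, and structure are all mine,
# purely for illustration; only the provider's own team removes videos.

import random
from collections import defaultdict

REVIEWS_PER_REPORT = 15      # enough votes to average out cultural variation
ESCALATION_THRESHOLD = 0.6   # fraction of "violates" votes needed to escalate

class TriageQueue:
    def __init__(self, reviewer_pool):
        # reviewer_pool: list of (reviewer_id, locale) pairs spanning many locales
        self.reviewer_pool = reviewer_pool
        self.votes = defaultdict(list)   # report_id -> list of bool votes

    def assign_reviewers(self, report_id):
        # Random sampling across the whole pool keeps any single region
        # from dominating the verdict on a given report.
        return random.sample(self.reviewer_pool, REVIEWS_PER_REPORT)

    def record_vote(self, report_id, violates_tos):
        self.votes[report_id].append(violates_tos)

    def should_escalate(self, report_id):
        votes = self.votes[report_id]
        if len(votes) < REVIEWS_PER_REPORT:
            return False   # wait for a full panel before deciding
        # Escalate to the provider's human team only on strong consensus;
        # that team alone decides whether the video comes down.
        return sum(votes) / len(votes) >= ESCALATION_THRESHOLD
```

Note that the crowd here never takes anything offline; it only shrinks the queue that reaches the professional reviewers who make the final “gray area” calls.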

Finding sufficient volunteers for such a system (albeit ones with strong stomachs, considering what they’ll be viewing) would probably not prove particularly difficult. There are lots of folks out there who want to do their part toward helping with these issues. Nor must these roles necessarily be filled only by volunteers; this is important work, and finding a way to compensate participants for their efforts could prove worthwhile for everyone concerned.

This is only a thumbnail sketch of the concept, of course. But these are big problems that are going to require significant solutions. I fervently hope we can work on these issues ourselves before politicians and government bureaucrats impose their own “solutions” that will almost certainly do far more harm than good, likely with untold collateral damage as well.

I believe that we can make serious inroads in these areas if we choose to do so.

One thing’s for sure though. If we don’t work to solve these problems ourselves, we’ll be giving governments yet another excuse for the deployment of ever more expansive censorship agendas that will ultimately muzzle us all.

Let’s try to keep that nightmare from happening.

All the best to you and yours for the holidays!

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
