June 30, 2015

Terrorism, the Internet, and Google

For those of us involved in the early days of the Internet's creation and growth, it would have seemed inconceivable at the time that decades later a post on this topic would need to be written. I think it's fair to say that none of us -- certainly not yours truly -- ever imagined that the fruits of our labors would one day become a crucial tool for terrorists.

That day has nonetheless arrived, and it thrusts us directly into what arguably is the single most critical issue facing the Internet and Web today -- what to do about the commandeering of social media by the likes of ISIL (aka ISIS, or IS, or Daesh) and other terrorist groups.

As we've discussed in the past, governments around the world are already using the highly visible Internet presence of these criminal terrorist organizations as excuses to call for broad Internet censorship powers, and for "backdoors" into encryption systems that would be devastating for both privacy and security worldwide.

Yet it's the horrific terrorist "recruitment" videos that have quite understandably received the bulk of public attention, and they create a complex dilemma for advocates of free speech such as myself.

We know that free speech is not without limits -- "falsely shouting fire in a crowded theater" being the canonical example.

How and where should we draw the lines on the Web?

Let's begin with a fundamental fact that is all too often ignored or misrepresented. When a firm like Google -- or any other organization outside of government -- decides it does not want to host or encourage any given type of material, this is not censorship.

Just as book publishers are not obligated to distribute every manuscript offered to them, and TV networks need not buy every series pilot that comes their way, nongovernmental organizations and firms are free to determine their own editorial standards and Terms of Service.

They need not participate in the dissemination of sexually oriented videos, kitten abuse compilations ... or beheading videos produced by medieval, religious fanatic monsters.

Firms are free to determine for themselves the limits of the content and services they will provide.

Governments -- on the other hand -- can censor. That is, they determine what private parties, firms, and other organizations are (at least in theory) permitted to produce, disseminate, or hear and view. And governments can back up these censorship orders with both criminal and civil penalties. They can throw you, in shackles, into a dark cell for violating their orders. Last time I checked, Google and other Internet firms didn't have such capabilities.

So when Google's chief legal officer David Drummond and policy director Victoria Grand recently spoke of the need to fight back against ISIL and other terrorist groups' propaganda and recruitment use of YouTube in particular, and urged other firms to take similar social media stances, I was very proud of their positions and those of Google's broader policy team.

Even as a vocal free speech advocate, I cannot ethically condone the use of powerful platforms like YouTube as genocide-promoting social media channels by technologically skilled savages.

This is not to suggest that drawing the lines in such cases is anything but vastly complicated.

I have some significant insight into this thanks to my recent consulting work for Google, and I can state unequivocally that the degree of emotionally draining, Solomonic soul-searching that goes into decisions regarding abusive content removals at Google is absolutely awe-inspiring. The motivated and dedicated individuals and teams involved deserve our unending respect.

Even seemingly obvious cases -- like those involving ISIL -- turn out to be decidedly difficult when you dig into the details.

Some governments would love to try to cleanse the entire Net of all references to these terror groups via broad censorship orders.

That would be doomed to failure, of course, and attempts to utterly banish information about the brutality of these beasts would do nothing to ensure that the world clearly understands the depth of horror with which we're dealing.

Yet there is vanishingly little true probative value -- and vast salacious propagandistic recruitment power -- in the display of actual beheadings conducted by these groups, and Google is correct to ban such videos, as it has.

A particularly disquieting corollary is the manner in which some of my colleagues seem unwilling or unable to appreciate the complexities and nuances inherent in these situations.

Many of them have expressed anger at Google for drawing these content lines, arguing that YouTube users should be permitted to post whatever they want whenever they want, no matter the content -- even if the videos serve purposely and directly as vile terrorist recruiting instruments.

Such arguments essentially treat all content and all speech as equal -- an appealing academic concept perhaps, but a devastatingly dangerous construct in the real world of today, given the power and reach of modern social media.

To be crystal clear about this, I'll emphasize again that decisions about content availability and removal in these contexts are complex, difficult, and not to be approached cavalierly.

But I'm convinced that Google is doing this right, and the Web at large would do well to look toward Google as an example of best ethical practices in managing this nightmarish situation in the best interests of the global community.

--Lauren--
