A Proposal for Dealing with Terrorist Videos on the Internet

(Original posting date: 21 December 2015)

As part of the ongoing attempts by politicians around the world to falsely demonize the Internet as a fundamental cause of (or at least a willing partner in) the spread of radical terrorist ideologies, arguments have tended to focus along two parallel tracks.

First is the notorious “We have to do something about evil encryption!” track. This is the dangerously loony “backdoors into encryption for law enforcement and intelligence agencies” argument, which would result in the bad guys having unbreakable crypto, while honest citizens would have their financial and other data made vastly more vulnerable than ever to attacks by black hat hackers. That this argument is made by governments that have repeatedly proven themselves incapable of protecting citizens’ data in government databases makes this line of “reasoning” all the more laughable. More on this at:

Why Governments Lie About Encryption Backdoors

The other track in play relates to an area where there is much more room for reasoned discussion — the presence on the Net of vast numbers of terrorist-related videos, particularly the ones that directly promote violent attacks and other criminal acts.

Make no mistake about it, there are no “magic wand” solutions to be found for this problem, but perhaps we can move the ball in a positive direction with some serious effort.

Both policy and technical issues must be in focus.

In the policy realm, all legitimate Web firms already have Terms of Service (ToS) of some sort, most of which (in one way or another) already prohibit videos that directly attempt to incite violent attacks or display actual acts such as beheadings (and, for example, violence to people and animals in non-terrorism contexts). How to more effectively enforce these terms I’ll get to in a moment.

When we move beyond such directly violent videos, the analysis becomes more difficult, because we may be looking at videos that discuss a range of philosophical aspects of radicalism (both international and/or domestic in nature, and sometimes related to hate groups that are not explicitly religious). Often these videos do not make the kinds of direct, explicit calls to violence that we see in that other category of videos discussed just above.

Politicians tend to promote the broadest possible censorship laws that they can get away with, and so censorship tends to be a slippery slope that starts off narrowly and rapidly expands beyond the originally targeted types of speech.

We must also keep in mind that censorship per se is solely a government power — they’re the ones with the prison cells and shackles to seriously enforce their edicts. The Terms of Service rules promulgated by Web services are independent editorial judgments regarding what they do or don’t wish to host on their facilities.

My view is that it would be a lost cause, and potentially a dangerous infringement on basic speech and civil rights, to attempt the eradication from the Net of videos in the second category I noted — the ones basically promoting a point of view without explicitly promoting or displaying violent acts. It would be all too easy for such attempts to morph into broader, inappropriate controls on speech. And frankly, it’s very important that we be able to see these videos so that we can analyze and prepare for the philosophies being so promoted.

The correct way to fight this class of videos is with our own information, of course. We should be actively explaining why (for example) ISIL/ISIS/IS/Islamic State/Daesh philosophies are the horrific lies of a monstrous death cult.

Yes, we should be doing this effectively and successfully. And we could, if we put sufficient resources and talent behind such information efforts. Unfortunately, Western governments in particular have shown themselves to be utterly inept in this department to date.

Have you seen any of the current ISIL recruitment videos? They’re colorful, fast-paced, energetic, and incredibly professional. Absolutely state of the art 21st century propaganda aimed at young people.

By contrast, Western videos that attempt to push back against these groups seem more on the level of the boring health education slide shows we were shown in class back when I was in elementary school.

Small wonder that we’re losing this information war. This is something we can fix right now, if we truly want to.

As for that other category of videos — the directly violent and violence-inciting ones that most of us would agree have no place in the public sphere (whether they involve terrorist assassinations or perverts crushing kittens) — the technical issues involved are anything but trivial.

The foundational issue is that immense amounts of video are being uploaded to services like YouTube (and now Facebook and others) at incredible rates that make any kind of human “previewing” of materials before publication entirely impractical, even if there were agreement (which there certainly is not) that such previewing was desirable or appropriate.

Services like Google’s YouTube run a variety of increasingly sophisticated automated systems to scan for various content potentially violating their ToS, but these systems are not magical in nature, and a great deal of material slips through and can stay online for long periods.

A main reason for this is that uploaders attempting to subvert the system — e.g., by uploading movies and TV shows to which they have no rights, but that they hope to monetize anyway — employ a vast range of techniques to try to prevent their videos from being detected by YouTube’s systems. Some of these methods leave the results looking orders of magnitude worse than an old VHS tape, but the point is that a continuing game of whack-a-mole is inevitable, even with continuing improvements in these systems, especially considering that false positives must be avoided as well.

These facts tend to render nonsensical recent claims by some (mostly nontechnical) observers that it would be “simple” for services like YouTube to automatically block “terrorist” videos, in the manner that various major services currently detect child porn images. One major difference is that those still images are detected via data “fingerprinting” techniques that are quite effective when comparing known still images against a database of previously identified material, but that are relatively useless outside the realm of still images, especially for videos of varied origins that are routinely manipulated by uploaders specifically to avoid detection. Two completely different worlds.
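
To make the distinction concrete, here’s a deliberately simplified sketch (hypothetical Python, not any service’s actual implementation) of how known-image fingerprint matching works. Real systems use robust perceptual hashes rather than a cryptographic hash, but the core limitation is the same: detection amounts to a lookup against a database of previously identified items, and even a trivial re-encode of a video produces a completely different fingerprint:

    import hashlib

    def fingerprint(data: bytes) -> str:
        # Stand-in for a content fingerprint; production systems use
        # perceptual hashes tuned for still images, but the matching
        # principle (lookup against known items) is the same.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical database of fingerprints of previously identified images.
    known_bad = {fingerprint(b"bytes-of-a-known-abusive-image")}

    def flag_upload(data: bytes) -> bool:
        return fingerprint(data) in known_bad

    print(flag_upload(b"bytes-of-a-known-abusive-image"))      # True: exact match
    print(flag_upload(b"bytes-of-the-same-video-re-encoded"))  # False: any crop,
                                                               # re-encode, or speed
                                                               # change defeats it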

So are there practical ways to at least help to limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts?

I believe there are.

First — and this would seem rather elementary — video viewers need to know that they even have a way to report an abusive video. And that mechanism shouldn’t be hidden!

For example, on YouTube currently, there is no obvious “abuse reporting” flag. You need to know to look under the nebulous “More” link, and also realize that the choice under there labeled “Report” includes abuse situations.

User Interface Psychology 101 tells us that if viewers don’t see an abuse reporting choice clearly present when viewing the video, it won’t occur to many of them that reporting an abusive video is even possible, so they’re unlikely to go digging around under “More” or anything else to find such a reporting system.

A side effect of my recommendation to make an obvious and clear abuse reporting link visible on the main YouTube play page (and similarly placed for other video services) would be the likelihood of a notable increase in the number of abuse reports, both accurate and not. (I suspect that the volume of reports may have been a key reason that abuse links have been increasingly “hidden” on these services’ interfaces over time).

This is not an inconsequential problem. Significant increases in abuse reports could swamp human teams working to evaluate them and to make the often complicated “gray area” determinations about whether or not a given reported video should stay online. Again, we’re talking about a massive scale of videos.

So there’s also a part two to my proposal.

I suggest that consideration be given to using volunteer or paid “crowdsourced” populations of Internet users — on a large scale designed to average out variations in cultural attitudes for any given localization — to act as an initial “filter” for specific classes of abuse reports regarding publicly available videos.

There are all kinds of complicated and rather fascinating details involved in designing a system like this so that it works properly and fairly, and resists misuse. But the bottom line would be to help reduce to manageable levels the abuse reports that would typically reach the service provider teams, especially if significantly more reports were being made — and these teams would still be the only individuals who could actually choose to take specific reported videos offline.
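
As a very rough sketch of what such a first-pass filter might look like (the names and thresholds below are hypothetical, purely for illustration): each reported video goes to several independent crowd reviewers, and only reports that enough reviewers agree on ever reach the provider’s staff, who alone can actually remove anything:

    from collections import defaultdict

    REVIEWS_NEEDED = 5       # hypothetical: independent reviews per report
    ESCALATE_THRESHOLD = 3   # hypothetical: "abusive" votes to reach staff

    _votes: dict[str, list[bool]] = defaultdict(list)

    def record_review(video_id: str, looks_abusive: bool) -> str:
        # Collect one crowd reviewer's judgment on a reported video.
        _votes[video_id].append(looks_abusive)
        if len(_votes[video_id]) < REVIEWS_NEEDED:
            return "pending"            # wait for more independent reviews
        if sum(_votes[video_id]) >= ESCALATE_THRESHOLD:
            return "escalate_to_staff"  # provider staff make the final call
        return "dismiss"                # likely spurious or mistaken report

    for vote in (True, True, False, True, False):
        status = record_review("video123", vote)
    print(status)  # "escalate_to_staff": 3 of 5 reviewers flagged it

Drawing the reviewer pool from many regions would help average out the cultural variations mentioned above, while keeping the gray-area takedown decisions exclusively in the hands of the provider’s own teams.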

Finding sufficient volunteers for such a system — albeit ones with strong stomachs, considering what they’ll be viewing — would probably not prove to be particularly difficult. There are lots of folks out there who want to do their part toward helping with these issues. Nor is it necessarily the case that only volunteers must fill these roles. This is important work, and finding some way to compensate these reviewers for their efforts could prove worthwhile for everyone concerned.

This is only a thumbnail sketch of the concept, of course. But these are big problems that are going to require significant solutions. I fervently hope we can work on these issues ourselves before politicians and government bureaucrats impose their own “solutions” that will almost certainly do far more harm than good, likely with untold collateral damage as well.

I believe that we can make serious inroads in these areas if we choose to do so.

One thing’s for sure though. If we don’t work to solve these problems ourselves, we’ll be giving governments yet another excuse for the deployment of ever more expansive censorship agendas that will ultimately muzzle us all.

Let’s try to keep that nightmare from happening.

All the best to you and yours for the holidays!

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Why Governments Lie About Encryption Backdoors

(Original posting date: 13 December 2015)

Despite a lack of firm evidence to suggest that the terrorist attackers in Paris, in San Bernardino, or at the Planned Parenthood center in Colorado used strong (or perhaps any) encryption to plan their killing sprees, government authorities around the planet — true to the long-standing predictions by myself and others that terrorist attacks would be exploited in this manner — are once again attempting to leverage these horrific events into arguments for requiring “backdoor” government access to the encryption systems that increasingly protect ordinary people everywhere.

This comes despite the virtual unanimity among reputable computer scientists and other encryption experts that providing governments with such “master keys” would fundamentally weaken the encryption systems that protect our financial data and ever more aspects of our personal lives, exposing us all to exploits both via mistakes and purposeful abuse, potentially by governments themselves and by outside attackers on our data.

It’s difficult — one might say laughable — to take many of these government arguments seriously even in the first place, given the gross incompetence demonstrated by the U.S. government in breaches that exposed millions of citizens’ personal information and vast quantities of NSA secrets — and with similar events occurring around the world at the hands of other governments.

But there are smart people in government too, who fully understand the technical realities of modern strong encryption systems and how backdoors would catastrophically weaken them.

So why do they continue to argue for these backdoor mechanisms, now more loudly than ever?

The answer appears to be that they’re lying to us.

Or if lying seems like too strong a word, we could alternatively say they’re being “incredibly disingenuous” in their arguments.

You don’t need to be a computer scientist to follow the logic of how we reach this unfortunate and frankly disheartening determination regarding governments’ invocation of terrorism as an excuse for demanding crypto backdoors for authorities’ use.

We start with a fundamental fact.

The techniques of strong, uncrackable crypto are well known. The encryption genies have long since left their bottles. They will not return to them, no matter how much governments may plead, cajole, or threaten.

In fact, the first theoretically unbreakable crypto mechanisms reach back at least as far as the 19th century.

But these systems were only as good as the skill and discipline of their operators, and errors in key management and routine usage could create exploitable and crackable weaknesses — as they did in the case of the German-used “Enigma” system during World War II, for example.
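
The canonical example is the one-time pad, an idea commonly dated to the 1880s: XOR each message byte with a byte from a truly random key that is at least as long as the message and never reused. A minimal illustrative sketch:

    import secrets

    def otp(data: bytes, key: bytes) -> bytes:
        # XOR is its own inverse, so the same function encrypts and decrypts.
        assert len(key) >= len(data), "key must cover the entire message"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))  # the hard part: key management
    ciphertext = otp(message, key)
    assert otp(ciphertext, key) == message

    # Unbreakable in theory, but only if the key is truly random, kept
    # secret, and NEVER reused; lapses in exactly this kind of operational
    # discipline are what helped crack systems like Enigma.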

The rise of modern computer and communications technologies — desktops, smartphones, and all the rest — has allowed for the “automation” of new, powerful encryption systems in ways that make them quite secure even in the hands of amateurs. And as black hat hacking exploits have subverted the personal data of millions of persons, major Web and other firms have reacted by deploying ever more powerful crypto foundations to help protect these environments that we all depend upon.

Let’s be very, very clear about this. The terrorist groups that governments consistently claim are the most dangerous to us — al-Qaeda, ISIL (aka ISIS, IS, Islamic State, or Daesh), the less talked about but at least equally dangerous domestic white supremacist groups, and others — all have access to strong encryption systems. These apps are not under the control of the Web firms that backdoor proponents attempt to frame as somehow being “enemies” of law enforcement — due to these firms’ enormously justifiable reluctance to fundamentally weaken their systems with backdoors that would expose us all to data hacking attacks.

What’s more — and you can take this to the bank — ISIL, et al. are extraordinarily unlikely to comply with requests from governments to “Please put backdoors into your homegrown strong crypto apps for us? Pretty please with sugar on it?”

Governments know this of course.

So why do they keep insisting publicly that crypto backdoors are critical to protect us from such groups, when they know that isn’t true?

Because they’re lying — er, being disingenuous with us.

They know that the smart, major terrorist groups will never use systems with government-mandated backdoors for their important communications; they’ll continue to use strong systems developed in and/or distributed by countries without such government mandates, or their own strong self-designed apps.

So it seems clear that the real reason for the government push for encryption backdoors is not to catch the most dangerous terrorists that they’re constantly talking about, but rather to sweep up “low-hanging fruit” of various sorts.

Inept would-be low-level terrorists. Drug dealers. Prostitution rings. Free speech advocates and other political dissidents. You know the types.

That is, just about everybody EXCEPT the most dangerous terrorist groups, which wouldn’t go near backdoored encryption systems with a ten-foot pole, yet are the very groups governments loudly claim backdoor systems are required to fight.

Now, there’s certainly a discussion possible over whether or not massively weakening crypto with backdoors is a reasonable tradeoff to try to catch some of the various much lower-level categories of offenders. But given the enormous damage done to so many people by attacks on their personal information through weak or improperly implemented encryption systems, including by governments themselves, that seems like an immensely difficult argument to rationally make.

So our logical analysis leads us inevitably to a pair of apparently indisputable facts.

Encryption systems weakened by mandated backdoors would not be effective in fighting the terrorists that governments invoke as their reason for wanting those backdoors in the first place.

And encryption weakened by mandated backdoors would put all of us — the ordinary folks around the planet who increasingly depend upon encrypted data and communications systems to protect the most intimate aspects of our personal lives — at an enormous risk of exposure from data breaches and associated online and even resulting physical attacks, including via exploitation from foreign governments and terrorist groups themselves.

Encryption backdoors are a gleeful win-win for terrorists and a horrific lose-lose for you, me, our families, our friends, and for other law-abiding persons everywhere. Backdoors would result in the worst of the bad guys having strong protections for their data, and the rest of us being hung out to dry.

It’s time to permanently close and lock the door on encryption backdoors, and throw away the key.

No pun intended, of course.

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Google, Hillary, and the Search Conspiracy Kooks

I’ll let you in on a little secret. I have better ways to spend my Saturdays than writing blog posts about nutso conspiracy theories. Seriously, I really do. But the conspiracy fanatics are again on a wacky rampage, this time with the ludicrous claim that Google is purposely manipulating search results to favor Hillary Clinton over racist, misogynist con-man Donald Trump.

Whether you support Hillary, Trump, or the Man in the Moon, the sheer illogic of these new conspiracy claims makes a typical Federico Fellini film look staid and sane by comparison.

The fundamental problem with the vast majority of conspiracy theories is that they require the assumed perpetrators to be inept idiots. Because clearly, we’d almost never know about or even suspect conspiracies managed by the smart folks.

Case in point, the current Google/Hillary conspiracy crud.

The conspiracy nuts would have us believe that Google is purposely (and obviously!) manipulating search “autocomplete” results to de-emphasize negative completions regarding Hillary Clinton.

This makes about as much sense as running a foot race on a motorcycle. It would be immediately clear that something was amiss — and what kind of lamebrain conspiracy would that be?

Google has every reason to keep their search results useful and honest, both for purely ethical reasons and since their users can switch to other firms with a single click of the mouse.

But for the sake of the argument, if I were Google and I wanted to manipulate search results in a dastardly, evil way (cue the Darth Vader theme), I’d be trying to hide negative Hillary search results in the main Google search index, not in autocomplete.

And yet if you do a regular Google Search for any negative topics regarding Hillary Clinton — even the nuttiest ones that the haters spew on endlessly about — you’ll get enough pages of results back to keep you in hardcore conspiracy heaven for a lifetime.

So what’s the problem with Google Search autocomplete?

Nothing. Autocomplete is working exactly as it should.

In fact, if I type in “hillary e” I immediately get a list that features the silly “email indictment” stories. If I enter “hillary cr” I get back “crazy” – “crying” – “crooked” – with results pointing at vast numbers of negative, right-wing trash sites.

So why, when you simply enter “hillary”, don’t all those negative completions appear?

Well, for the same reason that “trump ra” returns autocomplete results like “racism” and “racist” but “trump” alone does not.

If we go back a few years, there were widely publicized complaints and even lawsuits arguing that Google Search autocomplete overemphasized “negative” or somehow “undesirable” information about some searched individuals and other topics — even though those autocomplete results were valid on an algorithmic basis.

And over time, we can see that autocomplete has evolved to return more “generic” completions until the user’s query becomes a bit more specific.

Whether or not one personally agrees with this mode of operation, the important point is that it doesn’t favor anyone — it behaves the same way for everyone. Hillary. Trump. Even Justin Bieber.
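
As a toy model of that uniform behavior (emphatically not Google’s actual code; the lists and terms below are invented solely for illustration), imagine completions for a bare name being limited to generic ones, with more specific prefixes surfacing everything, under a single rule applied to every name:

    SENSITIVE = {"indictment", "crooked", "racism", "racist"}  # invented list

    CANDIDATES = [
        "hillary clinton", "hillary email indictment", "hillary crooked",
        "trump news", "trump racism", "trump racist",
    ]

    NAMES = {"hillary", "trump"}

    def autocomplete(prefix: str) -> list[str]:
        matches = [c for c in CANDIDATES if c.startswith(prefix)]
        if prefix.strip() in NAMES:
            # Bare-name query: only "generic" completions survive.
            matches = [m for m in matches if not set(m.split()) & SENSITIVE]
        return matches

    print(autocomplete("hillary"))    # ['hillary clinton']
    print(autocomplete("hillary e"))  # ['hillary email indictment']
    print(autocomplete("trump ra"))   # ['trump racism', 'trump racist']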

There’s no Google search political favoritism. No conspiracy. Nothing to see here other than honest search results. Move along …

I realize that this is disappointing to Trump fans and to conspiracy aficionados in general.

But hey, there are always other crazy conspiracy theories to keep you busy. The moon landings. The Illuminati. Yeah, and reptilian lizard people. Hell, even Francis Bacon vs. Shakespeare!

Have at it, gang!

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Why Free Speech Is Even More Important Than Privacy

Supporters of the EU’s horrific “Right To Be Forgotten” (RTBF) generally make the implicit (and sometimes explicit) argument that privacy must take precedence over free speech.

Since I’ve been a privacy advocate for many years (I created my ongoing PRIVACY Forum in 1992), you might expect that I’d have at least some sympathy for that position.

Such an assumption would be incorrect. At least in the context of censorship in general — and of RTBF in particular — I disagree strongly with such assertions.

It’s not because privacy is unimportant. In fact, I feel that free speech is more important than privacy precisely because privacy itself is so important!

It’s all a matter of what you know, what you don’t know, and what you don’t know that you don’t know.

Basically, there are two categories of censorship.

The first consists largely of materials that you know exist, but that you are forbidden by (usually government) edict from accessing. Such items may in practice be difficult to obtain, or simple to obtain, but in either case may carry significant legal penalties if you actually obtain them (or in some cases, even try to obtain them). An obvious example of this category is sexually-explicit materials of various sorts around the world.

Ironically, while this category could encompass everything from classic erotic literature to the most depraved pornography involving children, overall it is the less insidious form of censorship, since at least you know that it exists.

The even more evil type of censorship — the sort that is fundamental to the “Right To Be Forgotten” concept and an essential element of George Orwell’s “Nineteen Eighty-Four” — is the effort to hide actual information in a manner that would prevent you from even knowing that it exists in the first place.

Whether it’s a war with “Eastasia” or a personal past that someone would prefer that you not know about, the goal is for you not to realize, to not even suspect, that some negative information is out there that you might consider to be relevant and important.

Combine this with the escalating RTBF demands of France and other countries for global censorship powers over Google’s and other firms’ search results, and it becomes clear why privacy itself can be decimated under RTBF and similar forms of censorship.

Because if individual governments — some of which already impose draconian information controls domestically — gain global censorship powers, we can’t possibly assume that we even know what’s really going on with respect to negative impacts on our privacy!

In other words, RTBF and similar forms of censorship can act to hide from us the very existence of entities, facts, and efforts that could be directly damaging to our privacy in myriad ways. And if we don’t know that these even exist, how can we possibly make informed evaluations of our privacy and the privacy of our loved ones?

To make matters worse, much of this applies not only to privacy issues, but to an array of crucial security issues as well.

Attempting to maintain privacy and security in a regime of global censorship designed to hide facts from the public — irrespective of the occasionally laudable motives for such actions in some specific cases — is like trying to build a skyscraper on a foundation of quicksand.

You don’t need to be an architect, a computer scientist — or a privacy expert — to recognize the insanity of such an approach.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

France’s Guillotining of Global Free Speech Continues

The war between France and Google — with France demanding that Google act as a global censor, and Google appealing France’s edicts — shows no signs of abating, and the casualty list could easily end up including most of this planet’s residents.

As soon as the horrific “Right To Be Forgotten” (RTBF) concept was initially announced by the EU, many observers (including myself) suspected that the “end game” would always be global censorship, despite efforts by Google and others to reach agreements that could limit EU censorship to the EU itself.

This is the heart of the matter. France — and shortly, we can be sure, a parade of free-speech-loathing countries like Russia, China, and many others — is demanding that Google remove search results for third-party materials on a global basis from all Google indexes around the world.

What this means is that even though I’m sitting right here in Los Angeles, if I dare to write a completely accurate and USA-legal post that the French government finds objectionable, France is demanding the right to force Google (and ultimately, other search engines and indexes) to remove key references to my posting from Google and other search results. For everyone. Everywhere. Around the world. Because of … France.

It’s nonsensical on its face but incredibly dangerous. It’s a dream of every dictator and legions of bureaucrats down through history, brought to a shiny 21st century technological reality.

You don’t have to be a computer scientist to realize that if every country in the world had veto power over global search results, the lightspeed race to the lowest common denominator of sickly search results pablum would make Einstein’s head spin.

Proponents of these censorship regimes play the usual sorts of duplicitous word games of censorship czars throughout history. They claim it’s for the good of all, and that it’s not “really” censorship since “only” search results are involved.

Well, here’s something you can take to the bank. Let’s leave aside for the moment the absolute truth that — given the enormous scale of the Web — hiding search results is effectively the same as hiding most source content itself, as far as most people are concerned. But even if we ignore this fact, the truth of the matter is that it won’t be long before these same governments are also demanding the direct censorship of source material websites as well as search results.

However small the “forbidden information” leakage past the censorship of search results themselves, government censors will never be satisfied. They never are. In the history of civilization, they’ve never been satisfied.

A grand irony of course is that the very rise of Internet technology has been the potential enabler of centrally-mandated censorship to a degree never imagined even twenty years ago. For those of us who’ve spent our professional lives working to build these systems to foster the open spread of information, seeing our technologies turned into the tools of tyrants is disheartening to say the least.

It is however encouraging that firms like Google are continuing to fight the good fight against governments’ censorship regimes. Frankly, it will take firms on the scale of Google — along with support by masses of ordinary folks like us — to have any chance at all of keeping France and other governments around the world from turning the Internet into their own personal information control fiefdoms.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.