For those of us involved in the early days of the Internet's creation and growth, it would at the time have seemed inconceivable that decades later the topic of this post would need to be written. I think it's fair to say that none of us -- certainly not yours truly -- ever imagined that the fruits of our labors would one day become a crucial tool for terrorists. That day has nonetheless arrived, and it thrusts us directly into what is arguably the single most critical issue facing the Internet and Web today -- what to do about the commandeering of social media by the likes of ISIL (aka ISIS, or IS, or Daesh) and other terrorist groups.

As we've discussed in the past, governments around the world are already using the highly visible Internet presence of these criminal terrorist organizations as excuses to call for broad Internet censorship powers, and for "backdoors" into encryption systems that would be devastating for both privacy and security worldwide. Yet it's the horrific terrorist "recruitment" videos that have quite understandably received the bulk of public attention, and they create a complex dilemma for advocates of free speech such as myself. We know that free speech is not without limits -- the "yelling fire in a crowded theater" case being the canonical example. How and where should we draw the lines on the Web?

Let's begin with a fundamental fact that is all too often ignored or misrepresented. When a firm like Google -- or any other organization outside of government -- decides it does not want to host or encourage any given type of material, this is not censorship. Just as book publishers are not obligated to distribute every manuscript offered to them, and TV networks need not buy every series pilot that comes their way, nongovernmental organizations and firms are free to determine their own editorial standards and Terms of Service. They need not participate in the dissemination of sexually-oriented videos, kitten abuse compilations ... or beheading videos produced by medieval, religious fanatic monsters. Firms are free to determine for themselves the limits of what their platforms and services will carry.

Governments -- on the other hand -- can censor. That is, they determine what private parties, firms, and other organizations are (at least in theory) permitted to produce, disseminate, or hear and view. And governments can back up these censorship orders with both criminal and civil penalties. They can throw you, in shackles, into a dark cell for violating their orders. Last time I checked, Google and other Internet firms didn't have such capabilities.

So when Google's chief legal officer David Drummond and policy director Victoria Grand recently spoke of the need to fight back against ISIL and other terrorist groups' propaganda and recruiting use of YouTube in particular, and urged other firms to take similar social media stances, I was very proud of their positions and those of Google's broader policy team. Even as a vocal free speech advocate, I cannot ethically condone the use of powerful platforms like YouTube as genocide-promoting social media channels by technologically skilled savages. This is not to suggest that drawing the lines in such cases is anything but vastly complicated.
I have some significant insight into this thanks to my recent consulting to Google, and I can state unequivocally that the amount of emotionally draining, Solomonic soul-searching that goes into decisions regarding abusive content removals at Google is absolutely awe-inspiring. The motivated and dedicated individuals and teams involved deserve our unending respect. Even seemingly obvious cases -- like those involving ISIL -- turn out to be decidedly difficult when you dig into the details.

Some governments would love to try to cleanse the entire Net of all references to these terror groups via broad censorship orders. That would be doomed to failure, of course, and in fact attempts to utterly banish information about the brutality of these beasts would do nothing to ensure that the world clearly understands the depth of horror with which we're dealing. Yet there is vanishingly little true probative value -- and vast salacious propagandistic recruitment power -- in the display of actual beheadings conducted by these groups, and Google is correct to ban these as they have.

A particularly disquieting corollary to this situation is the manner in which some of my colleagues seem unwilling or unable to appreciate the complexities and nuances inherent in these situations. Many of them have expressed anger at Google for drawing these content lines, arguing that YouTube users should be permitted to post whatever they want, whenever they want, no matter the content -- even if the videos serve purposely and directly as vile terrorist recruiting instruments. Such arguments essentially attempt to treat all content and all speech as equal -- an appealing academic concept perhaps, but a devastatingly dangerous construct in the real world of today, given the power and reach of modern social media.

To be crystal clear about this, I'll emphasize again that decisions about content availability and removal in these contexts are complex, difficult, and not to be approached cavalierly. But I'm convinced that Google is doing this right, and the Web at large would do well to look toward Google as an example of best ethical practices in managing this nightmarish situation in the best interests of the global community.

--Lauren--
One of the oft-repeated Big Lies -- still bandied about by Google haters today -- is the false claim that Google enthusiastically turns over user data to government agencies. This fallacy perhaps reached its zenith a few years ago, when PowerPoint slides from Edward Snowden's stolen NSA documents cache were touted by various commercial parties (to whom he had entrusted the data) in a misleading, out-of-context manner designed for maximum clickbait potential. The slides were publicized by these parties with glaring headlines suggesting that Google permitted NSA to freely rummage around through Google data centers, grabbing goodies like a kid set loose in a candy store.

Google immediately and forcefully denied these claims, and for anyone familiar with the internal structure and dialogues inside Google, these allegations were ludicrous on their face. (Full disclosure: While I have consulted to Google in the relatively recent past, I am not currently doing so.) Even an attempt to enable such access for NSA or any other outside party would by necessity have involved so many engineers and other Google employees that keeping such an effort secret would have been impossible. And once it became known, there would have been very public, mass resignations of Googlers -- for such an intrusion would strike directly at the heart of Google philosophy, and the mere suggestion of such a travesty would be utter anathema to Google engineers, policy directors, lawyers, and pretty much everyone else at the firm.

Obviously, Google must obey valid laws, but that doesn't mean they're a pushover -- exactly the opposite. While some companies have long had a "nod and wink" relationship with law enforcement and other parts of government -- willingly turning over user data at mere requests, without even attempting to require warrants or subpoenas -- it's widely known that Google has long pushed back, sometimes through multiple layers of courts and legal processes, against data requests from government that are not accompanied by valid court orders, or that Google views as overly broad, intrusive, or otherwise inappropriate.

Over the last few days the public has gained an unusually detailed insight into how hard Google will fight to protect its users against government overreach, even when only a single user's data is involved. The case reaches back to the beginning of 2011, when the U.S. Department of Justice tried to force Google to turn over more than a year's worth of metadata for a user affiliated with WikiLeaks. While these demands did not include the content of emails, they did include records of this party's email correspondents, and the IP addresses he had used to log in to his Gmail account. Notably, DOJ didn't even seek a search warrant. They wanted Google to turn over the data based on the lesser "reasonable grounds" standard, rather than the "probable cause" standard of a search warrant. And most ominously, DOJ wanted a gag order to prevent Google from informing this party that any of this was going on, which would make it impossible for him to mount any kind of legal defense.

I'm no fan of WikiLeaks. While they've done some public good, they also behave as mass data dumpers, making public gigantic troves of usually stolen data without even taking basic steps to protect innocent persons who, through no fault of their own, are put at risk by these raw data dumps. WikiLeaks' irresponsible behavior in this regard cannot be justified.
But that lack of responsibility doesn't affect the analysis of the Gmail case under discussion here. That user deserved the same protection from DOJ overreach as any other user.

The battle between Google and DOJ raged for several months, generating a relatively enormous pile of associated filings from both sides. Ultimately, Google lost the case and their appeal. That was still back in 2011. The gag order continued, and outside knowledge of the case remained buried by government orders until April of 2015 -- this year! -- when DOJ agreed to unseal some of the court records, though haphazardly (and in some cases rather hilariously) redacted. These were finally turned over to the targeted Gmail user in mid-May -- triggering his public amazement at the depth and likely expense of Google fighting so tenaciously on his behalf.

Why did DOJ play such hardball in this case, particularly involving the gag order? There's evidence in the (now public) documents that the government wanted to avoid negative publicity of the sort they assert occurred with an earlier case involving Twitter, and DOJ was willing to pull out all the stops to prevent Google from even notifying the user of the government's actions.

You don't need to take my word on any of this. If you have some time on your hands, the over 300 pages of related filings are now available for your direct inspection. So the next time someone tries to make the false claim that Google doesn't fight for its users, you can print out that pile of pages and plop it down right in front of them. Or save the trees and just send them the URL. Either way, the truth is in the reading.

Be seeing you.

--Lauren--
A week ago, in my latest discussion of the nightmarish EU "Right To Be Forgotten" (RTBF), titled Just Say "NON!" - France Demands Right of Global Google Censorship, I once again emphasized the "camel's nose under the tent" aspect of RTBF, and how we should fully expect that Russia, China, and other repressive regimes would make similar demands and attempt to have them implemented as global censorship by Google.

Well, that didn't take very long at all. Indeed, Russia (and that means Czar Putin) is on the cusp of enacting a vast RTBF law of its own, one that makes the awful EU version look like a picnic by comparison. Under the proposed "Soviet" version of RTBF, complainants wouldn't even have to specify the links of concern, just vague topic areas. And unlike in Europe, even public figures could demand that Google and other search engines' results be whitewashed to remove unflattering or revealing references. Meanwhile, word comes from France that they might want Google to individually track French users wherever they go in the world, so that they can be specifically subjected to EU RTBF censorship anywhere and everywhere. "Liberty, Equality, Fraternity?" Hogwash.

If this wasn't obvious before, it should be obvious to everyone with half a brain by now -- the power to censor Google and other search engines, placed into the hands of governments -- any governments, regardless of political orientation -- is the free-speech-destroying equivalent of handing nuclear weapons to terrorists. Throughout recorded history, governments have wanted to control information, and the technology at hand provides them with a relatively easy means of finally fulfilling that frightful fetish. No matter how much Google and others might try to negotiate or compromise in good faith with governments on this topic, the latter will ultimately demand ever more censorship and ever more direct control over search results. There will simply be no end to it.

It is absolutely crucial to the future of free speech and broader civil liberties around the world that Google stand firm against the encroachment of ever more damaging and berserk RTBF laws and demands. This is true even if it requires significant changes in some underlying business models. Because ultimately, if RTBF is permitted to continue on its current course of escalating censorship, those business models are going to be damaged in far deeper ways -- damaging to Google itself, and incredibly destructive to the global community that depends on Google to do the right thing in difficult situations. Google must obey the law, but it also has considerable latitude in how and where it chooses to operate -- latitude that can be very relevant to the RTBF and other censorship issues now at hand.

We're depending on Google to set an example against the pages of useless, lowest-common-denominator search results that will be the inevitable outcome if RTBF laws continue on their present course -- like a tidal wave leaving nothing but rubble in its wake. We're standing at perhaps the most critical information freedom crossroads since the invention and spread of the printing press. There is no room for error.

Do this right, Google. I know you can.

--Lauren--
This is a difficult discussion for me. It borders on embarrassing, because I'm forced to admit that I was unable to foresee some of the ramifications of encryption-related policies I've been promoting for many years. I could make the excuse that I did not anticipate the onrush of largely hyperbolic paranoia that has been triggered post-Snowden, but I should have realized that some similar event would eventually trigger the same issues.

My concern started taking shape about a month and a half ago, when discussions were bouncing around the Net regarding Mozilla's supposed intention to (at some point in the future) refuse to allow Firefox browser connections to sites not running SSL/TLS, or at least to restrict the "features" available to them. Mozilla's actual stance on this is currently not entirely clear, but it appears that their longer-term plans, at least, are moving in this direction.

Two days ago, when I posted When Google Thinks They're Your Mommy, regarding the Chrome browser's refusal to connect to a major corporate site that other browsers would connect to, I triggered a mass of users sending me similar stories about "encryption enforcement" complications -- and not just about Chrome. There were people complaining about Chrome blocking sites, Firefox blocking sites, new versions of browsers preventing users from accessing network-connected home devices that were difficult or impossible to replace and only ran on local networks. On and on. I had let loose an unexpected avalanche, about an issue I incorrectly thought was only now beginning to gradually affect larger numbers of users.

And last night I had a nightmare. I saw two parents desperately trying to access a misconfigured medical-related site for their sick child, being blocked by Chrome "for their own protection" -- and then trying to install another browser in panic after being informed by a Google help page that Chrome wouldn't help them, and that using another browser was their only alternative. This is what comes of reading my own blog posts sometimes -- waking up in a cold sweat from a very dark dream.

What's happened, of course, is that post-Snowden there's been a mantra that we're all being spied upon all the time, and that not only should all Internet connections be encrypted, but users should not be permitted -- no matter what the situation -- to access sites that are not encrypted to the standard that the crypto-gurus feel is adequate, even when the situation is triggered by temporary misconfiguration rather than by purposeful configuration decisions at a site. The argument is that man-in-the-middle attacks are so powerful and so pervasive (the former can certainly be true, the latter is definitely arguable) that even someone viewing kitten videos must use encryption -- if for no other reason than to protect them from some evil entity injecting an exploit into their weak connection.

Obviously, if you're Google and continuously transferring gazookabytes of user data between datacenters, you're a big target and you want those circuits to be as rigorously encrypted as is practicable. The reality, though, is that the overwhelmingly vast majority of user system exploits aren't based on subversion of the connections at all, but rather on endpoint attacks -- usually tied to phishing and other "social engineering" techniques in cases where available multiple-factor authentication systems have not been deployed.
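For readers who want to see what browsers are actually reacting to in these disputes, here's a minimal sketch -- Python standard library only, with example.com as a stand-in hostname -- that connects to a site and reports the TLS protocol version and cipher suite the server actually negotiates, the same parameters a browser weighs when deciding whether to warn or to block:

```python
# Minimal TLS inspection sketch (standard library only).
# "example.com" is a placeholder; substitute any HTTPS host of interest.
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    context = ssl.create_default_context()  # modern defaults, cert verification on
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated protocol:", tls.version())   # e.g. 'TLSv1.2'
            print("Negotiated cipher:  ", tls.cipher())    # (name, protocol, bits)
            cert = tls.getpeercert()
            print("Certificate expires:", cert.get("notAfter"))

inspect_tls("example.com")
```

Note that a server with a badly misconfigured or expired certificate will cause this strict default context to raise an error before anything prints -- which is, in miniature, exactly the behavior under discussion.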
Like I said, I've spent many years promoting the concept of universal, opportunistic Internet encryption. But in some of the attitudes I see being expressed now about "forced" encryption regimes -- even browsers blocking out fully-informed users who would choose to forgo secure connections in critical situations -- there's a sense of what I might call "crypto-fascism" of a kind. And that worries me. That's the stuff of nightmares.

It's one thing for a site to specifically and clearly indicate that it will only accept secure connections of a particular class and quality, proclaiming that it feels such restrictions are absolutely necessary in its context. It's something else entirely for a browser to unilaterally declare a site's security to be unacceptably weak (perhaps by choice, or often by misconfiguration -- both of which we can agree need to be fixed) to the extent that the browser absolutely refuses to allow the user to connect, regardless of how crucial the situation, and irrespective of the fully-informed, expressed will of the user to connect in any case.

Some encryption and Web standards experts might assert that this is simply a situation where a rather technically fascistic attitude is necessary to protect users overall, even if individual users in some circumstances might be horribly injured in the process. I've already had someone quote Spock to me on this one ("The needs of the many outweigh the needs of the few."). Leaving aside for the moment that "the few" at Internet scale could easily still be many millions of warm bodies, I don't buy Spock's supposed logic in the context under discussion here.

Yes, we want to encourage encryption -- strong encryption -- on the Net whenever possible and practicable. Yes, we want to pressure sites to fix misconfigured servers and not purposely use weak crypto. But NO, we must not permit technologists (including me) to deploy Web browsers -- which together represent a primary means of accessing the Internet -- that on a "security policy" basis alone prevent users from accessing legal sites that are not configured to always require strongly encrypted connections, when those users have been informed of the risks and have specifically chosen to proceed.

Anything less is arrogantly treating all users like children incapable of taking responsibility for their own decisions. And that would be a terrible precedent indeed for the future of the Internet.

--Lauren--
Major tech companies are in an interesting position these days. They provide and (one way or another) control most of our communications pipelines, and (quite reasonably) usually wish to encourage maximally effective security and privacy regimes. Certainly Google falls into this category, with world-class privacy and security teams that it has been my privilege to work with in the past. But what happens when a firm decides that no matter what the user wants to do, the company will simply not permit it, because it feels so strongly that it knows better than the user in a given circumstance? That's what happened to me this morning, and it's a matter of growing concern.

I had an important transaction that I needed to conduct quickly on a major corporate website. I access this site several times a week, and I always use the excellent Google Chrome browser. But this time, I couldn't log in. Google refused to permit me to log in to this third-party site, which I needed to access immediately. What was going on? Google was suddenly unhappy about the strength of the SSL/TLS connection being used by this site, and refused to permit access. Presumably there's a configuration issue at that site that really should be fixed, but going down the rathole of trying to explain that to their customer support agents would likely be a twisted exercise taking hours, with no guarantee that a change would be quickly forthcoming in any case. Yep, that site needs repair, but I needed to access it regardless.

Unlike most security certificate warnings from Chrome (and other web browsers), this one had no apparent means of user bypass. This does not appear to be a bug in Chrome, because the associated "Learn more" page essentially said, "We won't help you. Go try another browser if you want. Good luck, guy!" If there was any way to change the browser configuration or otherwise bypass this apparently absolute block, I couldn't quickly find it, and I know my way around Chrome pretty damned well.

Because I keep multiple browsers on hand and current, and have my login credentials always available (not tied to a single browser), I was able to move to another browser and complete the important transaction. However, if this had happened on a smartphone with only one browser, or on a desktop system that only ran that one browser, I would have been up the creek without significant work to try to get another browser going -- assuming I was in a position to do so, and that other browsers didn't ultimately move toward this same policy.

We can certainly agree that weak (or even entirely absent) SSL/TLS connections are to be avoided. In combination with an active man-in-the-middle attack or other spying, login or other important credentials and data could be vulnerable. Of course, the reality for most of us is that the risks to our important data (financial and otherwise) come from a wide variety of online and offline sources, with SSL/TLS connection compromise pretty much down near the bottom of the probability list in most cases. But for the sake of the argument, let's assume that a given connection is using weak or even completely broken crypto, and that there is an evil figure monitoring that particular connection at that particular time. Even then, there will be situations where getting through to a particular site can be crucial -- more important even than compromised login credentials that can be later updated, more important even than compromised financial data.
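In software terms, what I'm describing is a default-deny posture with an explicit, informed override. Here's a minimal sketch of that model in Python (standard library only) -- emphatically not how any real browser behaves, and the URL and prompt wording are purely illustrative:

```python
# Sketch of "strict by default, overridable only with informed consent".
# NOT a model of any real browser's behavior; for illustration only.
import ssl
import urllib.request
import urllib.error

def fetch(url: str) -> bytes:
    try:
        # Default path: full certificate and hostname verification.
        return urllib.request.urlopen(url, timeout=10).read()
    except urllib.error.URLError as err:
        if not isinstance(err.reason, ssl.SSLError):
            raise  # not a TLS problem; offer no override
        print(f"WARNING: no verified secure connection could be made: {err.reason}")
        print("Proceeding may expose this session to interception.")
        if input("Type PROCEED to connect anyway: ") != "PROCEED":
            raise  # user declined; stay blocked
        # Only after explicit acknowledgment: a deliberately weakened context.
        insecure = ssl.create_default_context()
        insecure.check_hostname = False
        insecure.verify_mode = ssl.CERT_NONE
        return urllib.request.urlopen(url, timeout=10, context=insecure).read()

# page = fetch("https://example.com/")
```

The point of the sketch is the shape of the flow: verification failures block by default, the warning is loud and specific, and the override requires a deliberate act by the user rather than a reflexive click.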
Nowadays there are situations where immediate access to a site for information or transactions can be absolutely life critical, overriding individual security concerns. And that is a decision for the individual user. It's not a decision for Google or any other firm to make for a user. Google is not my Mommy.

By all means, sternly and clearly warn users of the risks involved in proceeding. Show photos of vampires about to strike, angry-looking kittens, and animations of Godzilla blocking my path. Feel free to force the user to jump through multiple acknowledgment hoops (clear ones, not in fine print or otherwise hidden or obscure) before letting them complete the connection -- and sternly emphasize how much you recommend against this course of action! But in the final analysis, get the blazes out of the way and let a consenting adult make their own fully-informed choice about the sites they need to access, without Google (or other firms) treating them like a child or an imbecile to be locked in their bedroom without supper.

Open means open, and it is not the appropriate role of Google or any other enterprise to impose its view of security to the extent of blocking a user from accessing a legal site when that user feels they absolutely must do so. If you've informed users of the risks, and they've acknowledged those risks and chosen to proceed despite your sage advice, then that's their decision and responsibility, not yours.

And that's the truth.

--Lauren--
I've been waiting for this, much the way one waits for a violent case of food poisoning. France is now officially demanding that Google expand the hideous EU "Right To Be Forgotten" (RTBF) to Google.com worldwide, instead of just applying it to the appropriate localized (e.g., France) version of Google. And here's my official response as a concerned individual: To hell with this.

That's nowhere near as strong a comment as I'd really like to make, but this is a general readership blog and I choose to avoid the really appropriate invectives here. But man, I could justifiably pile on enough epithets to melt your screen before your eyes.

A key reason why I've been warning all along about the disastrous nature of RTBF is precisely this "camel's nose under the tent" situation. Giving in to localized censorship demands from the EU and/or member countries was bound to have this result. What's worse, if France or other EU countries get away with this attempt to impose their own censorship standards onto the entire planet, we can be sure that government leaders around the world will quickly follow suit, demanding that Google globally remove search results that are politically "inconvenient" -- or religiously "blasphemous" -- or, well, you get the idea. It's a virtually bottomless cesspool of evil censorship opportunities.

It's bad enough when our ever more censorship- and surveillance-loving Western leaders have this kind of power. But how about Vladimir Putin, or China's rulers, or Iran's Supreme Leader as GLOBAL censors? It wouldn't be long before every search on any controversial topic might as well be replaced with a "404 Not Found" page -- a rush to lowest-common-denominator mediocrity, purged of any and all information that government leaders, politicians, or bureaucrats would prefer people not be able to find and see.

I've written and said so much about RTBF over the years that it feels like an endless case of "Groundhog Day" at this point -- e.g., early on in The "Right to Be Forgotten": A Threat We Dare Not Forget (2/2012), and most recently in a one-hour live RTBF hangout video discussion (about a month ago). And I'm certainly not alone in these concerns. Yet we continue to be sucked down this rathole, now with governments using overblown security concerns as an excuse to try to justify even broader search engine censorship across a vast range of topics.

So far, Google has resisted the concept of RTBF being applied globally. I not only applaud their stance on this, but I strongly urge them to stand utterly firm on this issue. RTBF even in localized forms is bad, but if countries had the ability to impose their individual censorship regimes onto the entire globe's population, we'd be -- with absolutely no exaggeration -- talking about an existential threat not just to "free speech" but to fundamental communications and information rights as well.

This cannot be tolerated. Non! Nein! Nahin! Nyet! Just say NO!

--Lauren--
Sometimes the world changes around us so rapidly that there's a sense of chaos, until understanding and what passes for equilibrium have been reestablished. Such is the case with videos of law enforcement's interactions with the public, especially videos captured by members of the public and then made publicly available on YouTube and other Internet platforms. One thing is abundantly obvious -- many police departments and individual police officers still do not understand that the rules of the "game" under which they operate have been irrevocably altered.

Police videos generally fall into two broad categories -- videos shot by police themselves, and videos shot by members of the public. The former include dashcams and, increasingly, body cameras. The latter is potentially the domain of pretty much everyone with a smartphone these days.

Generally speaking, police officers have been warming up to video that they themselves photograph -- especially when they can control when the cameras are running -- which opens up a rather significant can of worms indeed. And while police departments would love to hold onto body camera footage (often taken inside people's homes at times of their greatest distress) for future investigative purposes, they must also face the complexities and expense of redacting videos for public records requests, to quite appropriately protect innocent parties from public abuse and exploitation. Overall though, police are typically pretty eager to trot out the videos when they appear to support officers' accounts of events -- just today a video was released that appears to show a police shooting of a terrorism-related suspect aligning far more closely with officer statements than with some completely contradictory "witness" statements being promulgated by cable news.

The situation tends to be very different with videos of police confrontations photographed by the public, often rapidly posted to YouTube. We seem to be bombarded with an almost daily menu of "cops behaving badly" videos that cover the entire range from somewhat comical, through utterly bizarre, to downright horrifying. To be sure, these videos are self-selected by their uploaders, and not much attention is paid to the majority of cops who are well behaved -- and so aren't the subject of many videos at all. Naturally enough, it's the bad eggs and nasty confrontations that are going to get the attention when it comes to videos. But much of what we see in these negative cases is indeed utterly chilling. Cops shooting down unarmed persons, groups of officers beating already compliant suspects to a pulp -- all manner of situations that previously would have had no visual or audio record for review.

A new example, currently receiving a lot of attention this last weekend, is of a Texas cop running around like a crazed dog, pointing his gun at half-naked teenagers from a pool party, and even tackling an obviously unarmed girl in a bikini to the ground. He appeared to be utterly out of control, and his fellow officers seemed to just stand there watching in amusement. The racial aspects of the situation -- the cop was white, most of his targets black -- are difficult to ignore. While apparently nobody was seriously hurt in this case, it's all the more upsetting for what could easily have happened if that gun had fired or that girl's neck had been broken.

None of these videos, whether photographed by police or the public, ever tell the whole story.
Many police departments allow officers to control when their cameras are running. Videos photographed by the public typically don't show the events leading up to a confrontation that attracted attention. Individual cameras only show one point of view. And so on. Such videos are clearly not definitive.

It's also clear that many cops still haven't gotten the message, made explicit by numerous courts, that the public has the right to capture such videos. We're still seeing officers snatching phones and cameras from people's hands, sometimes smashing the devices, sometimes smashing the face of the photographer, arresting them, pepper-spraying them, and the like, even when no interference with the police action was occurring. (In an attempt to keep the public's cameras away, some police departments are trying to establish likely illegal "exclusionary zones" around their officers, to discourage public photography of those officers in action.)

We're also seeing attempts to remove many of these videos from public view -- though tracing who was responsible for such actions can be very difficult. For example, I shared a copy of the "pool party cop" YouTube video link yesterday on Google+. By this morning the video was gone, marked as spam, most likely through a malicious false takedown submission. Of course, this was a failed attempt at information control -- many other copies of that video are now widely available on YouTube and other venues.

Perhaps what might seem oddest of all is how we see cops behave so badly even when they're aware that they're being photographed. This makes sense, though, when you consider that officers are used to having their accounts of events rarely questioned and almost always accepted. It just hasn't fully penetrated yet that there are now recording eyes on the scene, and that an officer's own rendition of events is but one variable in the equation -- and often by no means the most reliable one.

Ubiquitous video cameras, smartphones, and YouTube have together fundamentally changed key aspects of law enforcement operations, and the ways in which both courts and the public will view them going forward. There is no turning back. There is no possible return to the previous era of officer reports being accepted with nary a question or concern. Police departments will ultimately learn to live with this. And for the public at large, this is very good news indeed.

--Lauren--
Finally! There's something that apparently virtually all governments around the world can actually agree upon. Unfortunately, it's on par conceptually with handing out hydrogen bombs as lottery prizes.

If the drumbeat isn't actually coordinated, it might as well be. Around the world, in testimony before national legislatures and in countless interviews with media, government officials and their surrogates are proclaiming the immediate need to "do something" about encryption that law enforcement and other government agencies can't read on demand. Here in the U.S., it's a nearly constant harangue over on FOX News (nightmarishly, where most Americans apparently get their "news" these days). On CNN, it's almost as pervasive (though anti-crypto tirades on CNN must share space with primetime reruns of a globetrotting celebrity chef and crime "reality" shows). It's much the same if you survey media around the world. The names and officials vary, but the message is the same -- it's not just terrorism that's the enemy, it's encryption itself.

That argument is a direct corollary to governments' decidedly mixed feelings about social media on the Internet. On one hand, they're ecstatic over the ability to monitor the public postings of criminal organizations like ISIL (or ISIS, or Islamic State, or Daesh -- just different labels for the same fanatical lunatics) that sprang forth from the disastrously misguided policies of Bush 1 and Bush 2 era right-wing neocons -- who not only set the stage for the resurrection of long-suppressed religious rivalries, but ultimately provided them with billions of dollars worth of U.S. weaponry as well. Great job there, guys.

Since it's also the typical role of governments to conflate and confuse issues whenever possible for political advantage, when we dig deeper into their views on social media and encryption we really go down the rabbit hole. While governments love their theoretical ability to track pretty much every loony who posts publicly on Twitter or Facebook or Google+, they simultaneously bemoan the fact that it's possible for uncontrolled communications -- especially international communications -- to take place at all in these contexts. In particular, it's the ability of radical nutcases overseas to recruit ignorant (especially so-called "lone wolf") nutcases in other countries that is said to be of special concern, notably when these communications suddenly "go dark" off the public threads and into private, securely encrypted channels. "Go dark" -- by the way -- is now the government code phrase for crypto they can't read on demand. Dark threads, dark sites, dark links. You get the idea.

One would be remiss not to admit that these radical recruiting efforts are of significant concern. But where governments' analysis breaks down massively is in the direction of their proposed solutions, which aren't aimed at addressing the root causes of fanatical religious terrorism, but rather appear almost entirely based on preventing secure communications -- for anybody! -- in the first place. Naturally, they don't phrase this goal in quite those words. Rather, they continue to push (to blankly nodding politicians, journalists, and cable anchors) the tired and utterly discredited concept of "key escrow" cryptography, where governments would hold "backdoor" keys to unlock encrypted communications, supposedly only when absolutely necessary and with due legal process.
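To see why technologists keep rejecting this idea, it helps to look at the structure of escrowed encryption itself. Here's a deliberately toy sketch in Python (using the third-party "cryptography" package; every key and name here is illustrative, not any real escrow protocol) of the single point of failure that key escrow builds in by design:

```python
# Toy model (NOT a real protocol) of key escrow's structural weakness:
# each message key is wrapped for the recipient AND for the escrow agent,
# so one compromised escrow key silently unlocks ALL escrowed traffic.
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()   # held by the intended recipient
escrow_key = Fernet.generate_key()      # held "safely" by the government

def escrowed_encrypt(plaintext: bytes):
    message_key = Fernet.generate_key()                       # fresh per message
    ciphertext = Fernet(message_key).encrypt(plaintext)
    for_recipient = Fernet(recipient_key).encrypt(message_key)
    for_escrow = Fernet(escrow_key).encrypt(message_key)      # the "backdoor"
    return ciphertext, for_recipient, for_escrow

ct, _, wrapped_for_escrow = escrowed_encrypt(b"entirely private message")

# An attacker who obtains only the escrow key recovers everything:
stolen_message_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(stolen_message_key).decrypt(ct))   # b'entirely private message'
```

The toy makes the asymmetry plain: the legitimate user's security now depends not just on their own key hygiene, but on the perpetual, perfect secrecy of an escrow key they never see and cannot revoke.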
Rewind 20 years or so and it's "Groundhog Day" all over again -- back in the early-to-mid '90s, NSA was pushing their "Clipper Chip" hardware concept for key-escrowed encryption, an idea that was mercilessly buried in relatively short order. But like a vampire entombed without the appropriate rituals, the old key escrow concepts have returned to the land of the living, all the uglier and more dangerous after their decades festering in the backrooms of governments.

The hardware Clipper concept dates to a time well before the founding of Twitter or Facebook, and a few years before Google's arrival. Apple existed back then, but centralized social media as we know it today wasn't yet even really a glimmer in anyone's eye. While governments generally seem to realize that stopping all crypto they can't access on demand is not practical, they also realize that the big social media platforms (of which I've named only a few) -- where most users do most of their social communicating -- are the obvious targets for legislative, political, and other pressures. And this is why we see governments subtly (and often not so subtly) demonizing these firms as being uncooperative or somehow uncaring about fighting evil, about fighting crime, about fighting terrorism. How dare they -- authorities repeat as a mantra -- implement encryption systems that governments cannot access at the click of a mouse, or sometimes cannot access at all under any conditions.

Well, welcome to the 21st century, because the encryption genie isn't going back into his bottle, no matter how hard you push. Strong crypto is critical to our communications, to our infrastructures, to our economies, and increasingly to many other aspects of our lives. And strong crypto is simply not possible -- let's say that once more with feeling -- not possible, given key escrow or other government backdoors designed into these systems. There is no practical or even theoretically accepted means of including such mechanisms without fatally weakening the entire associated encryption ecosystem, and opening it up to all manner of unauthorized access via hacking and various subversions of the key escrow process.

But governments just don't seem willing to accept the science and reality of this, and keep pushing the key escrow meme. It's like the old joke about the would-be astronaut who wanted to travel to the sun, and when reminded that he'd burn up, replied that it wasn't a problem, because he'd go at night. Right.

Notably, just as we had governments who ignored realistic advice and unleashed the monsters of religious fanatical terrorism, we now have many of those same governments on the cusp of trying to hobble, undermine, and decimate the strong encryption systems that are so very vital. There's every reason to believe that we'd see a similarly disastrous outcome in the encryption context, especially if social media firms were required to deploy only weak crypto -- putting vast populations of innocent users at risk -- while driving the bad guys even further underground and out of view.

If we don't vigorously fight back against government efforts to weaken encryption, we're all going to be badly burned.

--Lauren--