Politicians are well known for "blowing in the wind" of the perceived public opinions of the moment, and especially when politicos seem to suddenly change their own stated opinions, it's usually time to figuratively get a good grip on your wallet.
This doesn't apply only to financial matters, either.
The spectacle of congressmen who until recently were gung-ho in favor of pervasive NSA surveillance programs suddenly changing their tunes may seem like a good sign, but there is every reason to be deeply suspicious of where this might lead in the longer term.
It's easy to forget that in the aftermath of 9/11, George W. Bush received astoundingly high approval ratings for his pushing through of the PATRIOT and Homeland Security Acts, which enabled expansive NSA warrantless domestic surveillance and greatly expanded the use of National Security Letters and rubber-stamped actions of the FISA court.
Swings of the "reform" pendulum are especially noteworthy in U.S. history. To name three relatively contemporary examples, we saw notable attempts to rein in "secret" activities after the release of "The Pentagon Papers," during the fallout from President Nixon's "Watergate" mess, and in the aftermath of President Reagan's "Iran-Contra" affair.
Few if any of the resulting "reforms" were long lasting. Over time, everything went pretty much back to "business as usual" for the spooks and their allies, despite snapshot polling showing public support for reforms, and political gamesmanship by politicians at the times of these scandals.
There is every reason to anticipate that any reforms this time around, short of major structural alterations, will also fade over time. If there's a significant new attack on U.S. soil, all bets are instantly off, and we'd likely see large majorities demanding that Congress decimate our remaining civil rights in the name of ostensible public safety.
Of particular note today are the politicians who enthusiastically supported Bush-era NSA and other surveillance programs, but who are now attempting to condemn those same programs under Obama. Their dissembling has been raised almost to an art form, as they weave and squirm and try to claim that their pro-PATRIOT votes weren't what they seem, that they somehow misunderstood what they were voting for -- or argue that Obama has run wild.
The reality is much clearer if you look at their old pro-PATRIOT speeches, and videos of their attacking anyone who dared to speak out against massive surveillance expansions domestically and internationally. On the GOP side in particular, there may have been some true changes of heart, but it's obvious that what's mostly going on is the usual GOP game plan: "Get Obama any way you can, don't let the facts or history stop you!"
With all this as preamble, what about the actual "reforms" now being proposed? Are they meaningful? Would they last even under optimistic scenarios?
It's a decidedly mixed bag.
I'm relatively (emphasis on "relatively") optimistic that we may see useful reforms in terms of "procedural transparency."
We need to know more about what programs NSA and other agencies have in force, and what kinds of information those programs are gathering. That is, stop trying to hide the programs themselves (we're not talking about operational data) from the American people.
Reforms in this area would include more transparency in the FISA court, and especially allowing Internet firms to report on the numbers (at least in terms of numeric ranges) of FISA actions and other data demands with which they are served. Firms like Google, Apple, and Microsoft (telecoms like AT&T and Verizon seem far less interested) have been virtually begging the federal government for the right to explain in broad terms what is actually happening, so that they can fight back against hyperbolic, unsubstantiated, false claims. The government's refusal so far to permit such reasonable reporting is doing genuine and completely unfair damage to these firms, like forcing them to try to play baseball on the international stage with their arms handcuffed and their legs shackled.
This is an intolerable situation, created and enforced by the government as a result of callously and hypocritically not trusting the American people to understand national security issues.
Once we move beyond basic transparency to more operational matters, the risks of being suckered by essentially "fake" reforms rise dramatically.
For example, there's much talk now about changing the NSA phone call metadata program so that rather than the government holding the database, it would be maintained by the telcos themselves or perhaps some "independent" third party.
Sounds good at first glance, but given the level of access NSA would likely demand to that data -- no matter where it physically resides -- there's a major chance that this "reform" would in practice be little more than shuffling deck chairs on ... well ... you know.
The upshot of all this is pretty easy to see. Government in general and intelligence agencies in particular have legitimate security and surveillance needs, which historically grow out of control, are pulled back a bit by a swing of the pendulum, but seem always to expand again over the long term.
And these agencies and politicians -- we will stipulate for ostensible good motives -- have also become experts in playing a gigantic version of "three-card Monte" with the public. Like the old "shell game," we think we know what's going on, but lack of information combined with purposeful diversions conspire to separate us from our money almost every time.
None of this is to suggest that we should not pursue every opportunity for meaningful reforms of NSA and affiliated agencies (and the same can be said of similar agencies around the world operated by other countries). And this is especially true in the case of serious, structural reforms that might have at least a chance of lasting past the next major election.
We've been fooled before -- many times before -- both by disingenuous actions and the simple march of time causing abuses to fade in the public's mind.
We may be unable to escape this same fate today. History suggests that this will indeed be the case.
But we can at least try to prove history wrong this time.
Great Britain had two princes to talk about last week. One was the widely celebrated birth of the new Prince George, a joyful occasion indeed.
The other "prince" -- actually a "clown prince" named David Cameron to be more precise -- was playing the fool somewhat under the radar, and we can excuse the British people for not noticing him as much amid the celebration of George's arrival.
To call Prime Minister Cameron a "clown" at all might reasonably be taken by some as an affront to clowns and jesters reaching back through history, because Cameron's style of clowning is far more akin to the nightmarish, sneering "clowns" of "B" horror movies than to the bringers of entertainment under the big top.
Cameron, through a series of inane and grandstanding statements and pronouncements both deeply technically clueless and shamelessly politically motivated, has been channeling Napoleon by placing the clown prince crown on his own head.
Laughing at his antics would be a terrible mistake. For his wet dream of Internet censorship poses an enormous risk not only to the UK, but to other nations around the world who might seek comfort in his idiocy for their own censorship regimes (already, calls have been made in Canada to emulate Cameron's proposed model).
David Cameron's buffoonery in this context appears to have started with what (he apparently and wildly inaccurately assumed) was a simple concept -- force UK Internet users to "opt-in" if they wanted to be able to access "pornography" on the Internet (though, as we'll see, his censorship plans have always been much broader in scope).
The politics of the situation seem clear enough -- a specific British tabloid has been pushing hard to promote Internet censorship, and Cameron knows well enough which side of his bread is buttered.
Under his scheme, existing UK Internet users would be forced to choose a "filtered" (that is, censored), or "unfiltered" (sort of unfiltered, anyway) Internet feed. If they didn't choose, they'd get the censored version. New users would be automatically given the censored version, and would have to explicitly opt-in to the uncensored variety.
Perhaps unwittingly and ignorantly, Cameron implied that the vast Chinese censorship apparatus was his model for Great Britain. He managed this bizarre feat by choosing to praise the (currently opt-in for filtering) content blocking system offered by one British ISP, which just happens to be owned and operated by the true masters of Internet censorship -- China itself (and reportedly, virtually all of that ISP's user data routes through the Chinese system, whether individual subscribers have chosen to activate the content controls or not).
Cameron wants this Chinese-style filtering to instead be the default, and you'd have to explicitly ask (and perhaps, find yourself on some rather interesting government lists!) to be exempted from Cameron's Content Censorship.
That is, to the extent you were allowed to escape -- because Cameron's plans are extraordinarily extensive, as we'll see in a moment.
Almost as soon as he announced his proposal, negative reactions and questions started bubbling up from all quarters.
Proxies, VPNs, and the like could easily evade such a filtering regime. Did Cameron plan to try to block those, too?
How does Cameron define pornography? Soft core? Hard core? Images only? Literature? Would sexual health information be blocked? What of sources that host a wide variety of imagery, only some of which is sexually oriented? Tumblr? Google Images? Flickr? What about written works like "Fifty Shades of Grey" -- or "Lady Chatterley's Lover" or ... ?
What would happen in homes where one adult wanted to privately view such materials and the other didn't even know about it? Would they have to fight over the censorship setting?
Who will actually make and maintain the block lists? The UK government? The Chinese content filtering company? Rupert Murdoch?
To most of these and virtually every other related question, David Cameron's response has amounted to Alfred E. Neuman's classic tagline from "Mad Magazine" -- that is, "What, me worry?" (in other words, "Golly, I dunno!")
But he (Cameron, not Alfred) hasn't been completely devoid of additional information.
It quickly became clear that his plans for Internet censorship went far beyond his "opt-out" model for porn filtering, to include a wide array of other material that would be banned entirely -- no exceptions.
Cameron has proposed total blocking, both at the site and search engine keyword level, for whole ranges of other information, spanning broad categories of material that he deems unacceptable.
He proposes that access to such sites, or attempts to search on related keywords, would be blocked and return pages explaining that the related materials were unavailable, or illegal, or ... whatever. And again, it's hard to believe that the government wouldn't want to keep track of who was making those queries for any reason, which of course could also include researchers, reporters, and others with completely benign motives.
The "slippery slope" aspects of Cameron's censorship cadenza are obvious. What he's demanding is nothing less than total control over Internet site access, and micromanagement of search engine results.
Cameron even ran into trouble with what he no doubt thought was a politically safe expansion of "forbidden" content -- suggesting that "rape imagery" should have the same status as child abuse materials -- illegal to produce or possess.
Virtually nobody argues against efforts to control the abhorrence of child abuse and associated imagery (though attempts to block such images, like all censorship attempts, tend to push them further underground and may make them even more difficult for authorities to monitor). Even so, Cameron likely was surprised by the pushback against his "rape imagery" criminalization proposal -- from both males and females -- with some of the strongest denunciations coming from the latter.
It was quickly noted that studies have shown virtually all such material is simulated, that no correlation with actual rapes has ever been demonstrated, and that (as politically incorrect and inconvenient as this undoubtedly is) "rape fantasies" broadly defined have been shown to be common among normal persons both male and female. Observers expressed concerns about such censorship efforts driving this category dangerously underground as well (with some suggestions that this really represents an attack against the consensual BDSM community more than anything else).
This is a particularly uncomfortable and disquieting subject to be sure, but it's dangerous in the extreme to let our emotions get in the way of logical thinking when it comes to censorship (or anything else, for that matter) -- even though politicians disingenuously depend on our permitting our glands to override our brains.
In the end, we're faced with only two major, reasonable possibilities when it comes to David Cameron's Internet censorship agenda. Either he really hasn't thought most of this through, especially considering his repeated expressions of ignorance regarding details, practicality, or impacts and collateral damage -- or he's simply being a blatant political opportunist, who knows full well that he's proposed an Internet censorship regime that would gladden the heart of pretty much any tyrant, anywhere, and is likely to give encouragement and comfort to repressive governments around the world today.
Cameron and his Internet censorship docket are of course a matter for the British people to deal with as they feel appropriate. If vast and pervasive (though counterproductive and ultimately ineffective) attempts at censorship are what you want, your prime minister appears more than happy to provide them.
If that's not what you want ... well ... you know how to deal with PMs who are full of themselves.
But over here across the pond, my main concern is that David Cameron's nonsense will inspire other political clown princes to try to ply similar brands of oppression against free speech, and that's a scenario best restricted to actual nightmares, not waking reality.
So in this case, please -- don't send in the clowns!
I just hope too many of them aren't already here.
If you have a rooted Android device, I recommend against rushing to install Android 4.3 for now. It appears that 4.3's new protection model may require re-rooting devices in various situations (and require a new, rather kludgy su workaround for now at least), and unless you have some reason to push through 4.3 quickly (which is a relatively minor update in most other respects) I would suggest holding off until best practice procedures have been developed and promulgated. If you don't root your devices, you won't care about this, and you can jump to 4.3 immediately and happily.
With further confirmation of the longstanding rumor that the U.S. government (and, we can safely assume, other governments around the world) has been pressuring major Internet firms to provide their "master" SSL keys for government surveillance purposes, we are rapidly approaching a critical technological crossroads.
It is now abundantly clear -- as many of us have suspected all along -- that governments and surveillance agencies of all stripes -- Western, Eastern, democratic, and authoritarian -- will pour essentially unlimited funds into efforts to monitor Internet communications.
This goes far beyond the targeted wiretaps of yesteryear. It is now a fundamental doctrine of surveillance religion -- bolstered by anti-terrorism hysteria and opportunism -- that it is the purview of government to capture and store virtually all communications, for both real-time and ideally retrospective analysis on demand.
The rather Orwellian mindset of these agencies and their minions is clear -- they don't even consider such vacuuming of data to be eavesdropping until a particular target is in focus for actual, detailed inspection.
And they especially don't like having to go "hat in hand" to Internet services asking for specific data, since many of these services have the annoying (to the spooks) habit of pushing back against overly broad data requests.
So it should come as no surprise that intelligence efforts in this sphere have become ever more focused on compromising the underlying encryption frameworks, permitting potentially comprehensive access to data via Deep Packet Inspection (DPI) and other techniques directly from high traffic interconnecting communications channels themselves.
Whether or not such behavior can be justified from valid national security, public safety, or other grounds -- vs. the damage done to civil rights in the process -- is a policy and political question, not a technical one per se.
But I believe that we as technologists now -- more than ever -- must start coming to grips with an unpleasant truth.
Public-key cryptography as we know it today may be rapidly approaching the end of its useful lifespan.
The red flags have been popping up all over.
We've seen serious compromises of encryption certificates and certificate issuing authorities, increasing concerns about the security of widely used cipher algorithms, and a range of other associated exploits.
But even then, it's all too easy not to see the forest for the trees.
We quickly lapse into arguments about RC4 and AES, Perfect Forward Secrecy, active vs. passive attack models, and a virtual cornucopia of other crypto jargon to gladden our geeky hearts no end.
Yet just as we now know that the essentials of public-key (asymmetric) crypto were secretly developed by the UK's GCHQ several years before the publication of Diffie and Hellman's work, it is prudent to at least assume that intelligence agencies around the globe may still be working several steps ahead of public "state of the art" in crypto tech -- including the means to subvert widely used mechanisms.
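Since so much here turns on what public-key (asymmetric) crypto actually buys us, a minimal sketch of the Diffie-Hellman key agreement idea may help: two parties derive a shared secret over a public channel without ever transmitting that secret. (In its ephemeral form, this is also the basis of the Perfect Forward Secrecy mentioned above.) The parameters below are toy values chosen purely for illustration, not the vetted 2048-bit-plus groups real deployments require:

```python
# Toy Diffie-Hellman key agreement -- illustration only.
# Real systems use standardized groups of 2048 bits or more;
# this 32-bit prime is trivially breakable.

import secrets

p = 0xFFFFFFFB  # a small published prime (demo value)
g = 5           # a generator for the group (demo value)

# Each side picks a secret exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side combines its own secret with the other's public value;
# both arrive at the same shared secret, which never crossed the wire.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)

assert a_shared == b_shared
```

An eavesdropper sees only p, g, and the two public values; recovering the shared secret requires solving the discrete logarithm problem, which is what makes the scheme's security (and any secret advances against it) so consequential.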
This seems especially true given the apparently massive and bloated influx of funding and other resources being provided these agencies for ostensible anti-terror and "cyberwar" projects of enormous (and mostly secret) scopes.
To be sure, there are many balls in the air. For example, we don't really know the extent to which governments may have forced the hands of chip manufacturers to include "special goodies" for surveillance purposes. It's easy to dismiss such ideas as unlikely -- but given recent events, discounting them entirely would seem problematic.
Similarly, we know that when governments really want to target someone, they'll find some way to compromise the associated computers directly -- either through phishing or other malware attacks, or via in-person "black bag" jobs to physically alter systems as they might feel appropriate.
So specific targets -- justified or not -- probably don't have much of a chance.
Still, as technologists concerned about the fundamental security of the Internet against massive, untargeted data collection -- if only to help protect our data from illicit attacks -- I believe it would be fully appropriate for us to consider alternative methodologies for data protection sufficiently outside the existing public-key "box." The goal would be to provide citizens and consumers alike a higher degree of confidence that their legitimate and appropriate communications will be free from unwarranted and unreasonable interceptions by any players, foreign or domestic.
To be clear, this is not to assert that targeted, justified intercepts should not be possible under appropriate and realistic court supervision.
However, massive, unfocused, prospective data collection by agencies around the world is much harder to justify, and vastly more subject to potential abuse.
The individual paths at this crypto crossroad may not be clearly marked. But the route we choose to take may be among the most important decisions not only of our lives, but for generations to come.
Since the latest ramping up of concerns regarding the entire gamut of issues surrounding the surveillance activities of USA law enforcement and intelligence agencies (and by extension, similar activities conducted by the equivalent agencies in other countries), untold millions of words on these topics have been spoken and written, including some thousands of my own.
And -- all over the world -- we can argue pretty much until the cows come home about what was legal, what wasn't legal, how much government leaders knew or should have known, and why members of the public paid so little attention to laws openly passed that specifically authorized such activities.
Here in the USA, public observers of the PATRIOT and Homeland Security Acts should have been well aware of what FBI, NSA, and other agencies were being ordered to do by Congress -- despite current dissembling by some of those laws' authors. We warned of these programs at the time. And odds are, most or all of these activities will ultimately be declared legal by the courts.
It can be argued that one clear benefit of Edward Snowden's leaks was to confirm in some detail (significantly diluted by various related wildly hyperbolic claims and erroneous interpretations) that such programs in one respect or another had actually been implemented and deployed.
Yet this still doesn't get us to the heart of the matter, to the foundational questions we must ask ourselves no matter what our nationality and personal feelings about our own nation's intelligence operations.
To really get there, we need to revisit a concept made famous in the classic film "Animal House" from 1978 -- "double secret probation."
As the character of Dean Wormer explained -- freely paraphrased by yours truly -- double secret probation is essentially the condition of not even knowing that you are under suspicion, and that you are the subject of continuing investigation outside the normal context of traditional law enforcement activities.
Of course, Wormer was talking about a rather freewheeling fraternity, not the American people in general. But the analogy still seems apt, and is at the heart of our dilemma regarding government surveillance.
First, only an utter fool would argue that there is no need for any surveillance in any context, given the real world of terrorism, black market fissile materials, and all manner of other genuine threats.
Secondly, stipulating that actual threats do exist is by no means to agree that every asserted threat being waved as an excuse for surveillance is genuine and not overstated, especially given the vast amounts of power and money deeply entangled with agencies' adversarial claims.
And at the core of our attempts to harmonize these two realities is our friend double secret probation.
For it is frequently not the existence of government surveillance per se that is so problematic; it is the deployment of such surveillance without the public being clearly and definitively aware that it is taking place, rendering us unable to exercise the oversight of government that makes all the difference between democracy and tyranny.
Significantly, many of our government leaders have put themselves into the role of Dean Wormer -- and placed their entire citizenry on "double secret probation" -- not trusting the people to appropriately judge the actual threats or to accede to a level of surveillance activities that can be reasonably justified.
It is precisely this attitude -- again even more so than the actual surveillance much of the time -- that is so unacceptable and insulting to us all.
Once this attitude has taken hold, it tends to spread and permeate legislators and other government officials, who become convinced that only they are capable of making these decisions, that the people cannot be trusted to even know what's in their own best interests.
So we see spectacles like major Internet firms begging for the right to even explain in broad terms what sorts of information demands are actually made of them by governments, and Kafkaesque legal arguments by agencies attempting to prevent the public from gaining any practical sense of the full extent to which telephone and Internet systems are under metadata and/or content-based observation and data collection.
Some hardcore surveillance proponents argue that more transparency in these realms would compromise the efficacy of their information gathering efforts. And they may be correct, to one extent or another -- in various specific situations.
But simply being "correct" in this context is not enough. In fact, all of government -- at least democratically oriented governments -- must ultimately be based on compromise (a fact apparently forgotten by many participants, to be sure), and this means that even surveillance based on seemingly laudable motives must take a back seat to the people who are supposed to be driving this bus -- the public itself.
Appropriate transparency about surveillance doesn't mean revealing deep operational details, but it does require making sure that the public understands what is actually being done.
If you want to collect our telephone calls, Internet, and other transactional metadata and/or content, then make your case -- we the people will make the decisions. We're the ones who pay your salaries. Your positions exist with our concurrence, not the other way around. We are, frankly, at least as intelligent as you are.
And if the result of the transparency we demand is that you cannot achieve quite the level of all-encompassing surveillance of which you dream -- so be it.
In the name of democracy, and here in the USA the Constitution and civil rights, you must come to terms with the fact that imperfection in surveillance is part and parcel with the fundamental precepts of our nation.
So to law enforcement, intelligence agencies, and Congress -- discuss with us the kinds of surveillance you feel are genuinely needed. Treat the American people as your partners, not as your adversaries or village idiots.
If your arguments are valid, we'll back you to the hilt.
But we demand balance not banality, reasonable transparency not legal trickery.
We're all in this together.
And with all due respect, to use the vernacular -- please stop jerking us around.
Addendum (8:24 PM): After a flurry of articles today (including the posting below) critical of Yahoo/Tumblr handling of adult material, Yahoo/Tumblr are now saying that this was all caused by a complicated series of bugs and misunderstandings. Maybe you can figure it out.
Exactly two months ago, when we heard that Yahoo was buying Tumblr for over a billion dollars in cash, I posed a somewhat provocative question.
To wit: What was Yahoo gonna do with all that porn on Tumblr?
Despite the continuing insistence of senior Yahoos and Tumblr's now very wealthy and apparently very pliable young creator that "nothing would change" -- such an outcome never seemed like a possibility worth taking seriously.
After all, the adult-themed sites on Tumblr range from soft-core to stunningly serious sleaze, and one would assume that Yahoo wouldn't want to upset their advertisers (whom young Mr. Tumblr has recently been praising profusely) with such riffraff.
I'm not an aficionado of this stuff myself, but back in May I made a prediction and offered a suggestion.
The prediction: "My guess is that Yahoo will be subtly working to drive out those 'troublesome' aspects of the Tumblr user base over time -- one way or another -- ideally before the first big public blowup in the 'Yahoo era' over Tumblr content."
The suggestion: "But if I were a Tumblr user with content that was, shall we say, considerably divergent from the mainstream, I'd be starting to look around right now for a different place to host my stuff, and some new URLs to forward over to good ol' Uncle Ernie."
It now appears that Yahoo and Mr. Tumblr have validated both of these statements.
Rather than explicitly banning adult materials per se, Yahoo is in the midst of a full court press to bury them all in a "red light district" in the deepest, darkest corner of their data centers, ideally in locations where cooling unit condensation will drip directly onto the servers and render associated data unreadable as soon as possible.
OK, they haven't gone the condensation route yet, but Yahoo wants Tumblr adult content out of sight, out of mind, and out of search engines.
They're taking a number of approaches, none of which squares obviously with the promised "nothing will change."
Perhaps of most concern, Yahoo is using the robots.txt convention to tell external search engines like Google not to index any sites that Yahoo/Tumblr now considers to be "adult content." This is despite the fact that a check box still apparently exists through which adult Tumblr sites can indicate that they do want to be indexed by search engines. Apparently, this is now a no-op -- a lie.
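For readers unfamiliar with the mechanism involved: robots.txt is a purely advisory convention, a plain-text file at a site's root that well-behaved crawlers consult before indexing. A blanket exclusion (a hypothetical example, not Yahoo's actual file) looks like this:

```
User-agent: *
Disallow: /
```

A compliant crawler honoring such a file will skip the site entirely -- regardless of what any per-site "allow indexing" checkbox in Tumblr's settings claims to offer.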
And now Yahoo has dropped the hammer on tags and internal Yahoo/Tumblr search, reportedly cutting these sites off from these as well.
The upshot of all this is that for most practical purposes, if you don't already know the URL of an associated site in the Yahoo/Tumblr dripping water red light district, you're unlikely to find the site at all.
Yeah, "nothing will change ..."
Now, none of this is to suggest that Yahoo doesn't have the right to determine what sorts of material it wants to host, even though their ongoing actions appear to fly in the face of their previous public assurances.
But playing fast and loose by blocking access by outside search engines seems particularly insidious and hypocritical.
If you don't want to host the sites at all, fine. Tell them to leave and send your advertisers some "mission accomplished" cupcakes.
But it's basically evil to say that you'll host the sites and then refuse to let those sites be indexed by outside search systems. They should be permitted to be indexed as normal, and users of those search engines who do not wish to see adult results can avail themselves of the adult content controls offered by those search engines themselves.
Yahoo and Mr. Tumblr appear to want it both ways. That's pretty sleazy in a way that makes even most hardcore porn sites look pretty tame by comparison.
So, yep, like I said before, I'm not a fan, but if you have an adult site on Tumblr, you need to be looking for a new home as soon as possible -- if not yesterday.
In the meantime, it might be interesting to see what Yahoo/Tumblr would do if you started posting summary information and links to your Tumblr sites at other locations that are not blocked from search engine indexing. Perhaps a bunch of Tumblr site owners affected by Yahoo's cleansing could get together and create a meta-site specifically for this purpose -- just a place for search engines to find you and pick up the URLs users need to reach your Tumblr sites directly.
Appropriately label your content of course -- the idea is to be indexed, not to force anyone uninterested in your materials to view them.
Naturally, if something like this started happening on a large scale, Yahoo and Mr. Tumblr probably wouldn't take it sitting down.
I wonder what "nothing will change" they'd try next?
A Massachusetts State Police sergeant, upset with a "Rolling Stone" magazine cover that some misguided observers felt "glamorized" Boston bomber Dzhokhar Tsarnaev, has reportedly himself violated his oath, undermined his agency, potentially put evidence in the bombing investigation at risk, and demonstrated exactly how bad judgment results in photos on the Internet that can haunt innocent parties forever.
The current edition of "Rolling Stone" features a photo of Tsarnaev's face, much as the magazine displayed a photo many years ago of Charles Manson. The point of the image was obvious. This ordinary, good-looking youth, appearing much like anyone else his age, somehow became a monster. Monsters usually don't look like monsters. That was the whole point. Rolling Stone got it exactly right, both in the photo and their accompanying text and article.
But of course, getting it right isn't good enough today. Immediately there was a massive clamor of protest from various agitators -- most of whom appeared to have not even read the article (perhaps incapable of reading words with that many syllables?) -- screaming that the photo "glorified" Tsarnaev. Protests and boycotts were instantly announced -- major store chains rushed to announce they'd refuse to carry the issue, and stupidity reigned in the spotlight yet again.
Then, a Mass. State Police Sgt. -- who had access to vast numbers of potentially evidentiary and other photos (many of them rather gruesome) related to the case, and who had been specifically ordered to treat them as confidential materials -- decided on his own to release them to the media, to provide what he felt was balance to the Rolling Stone cover, based on his own distorted view of that photo.
In doing this, he blew it big time. His future in law enforcement is now in question, as well it should be.
Not only has he potentially destroyed the evidentiary value of those photos -- lawyers are already champing at the bit on this one -- but he has demonstrated exactly how personal agendas result in damaging imagery on the Internet that can never be withdrawn.
We've seen this again and again with law enforcement and other first responders: gruesome photos of accident victims, of innocent parties who happened to be present at crime scenes, and all manner of other images taken by officers and others by virtue of their official presence at a scene, then dumped onto the Internet (either directly or through third parties, as in today's case), where they multiply forever and contribute to dangerously misguided calls for Internet censorship and micromanagement of search engine results.
It's the source of these photos in the first place that is the problem, and while it's easy to say that Tsarnaev isn't a sympathetic example -- and he's not -- the violation of official duties and oaths represented by the unauthorized release of such photos in any context is a matter of great concern that undermines faith in police agencies and emergency responders in general.
Perhaps this officer thought he was some sort of Edward Snowden with a badge, out to personally right the wrong that he fantasized Rolling Stone had committed.
But even apart from the potential damage this officer has done to an important case against a mass murderer, by taking the release of such materials into his own hands, he suggests to everyone that law enforcement cannot be trusted to maintain control over sensitive photos and other information, and the damage he's caused in that respect may be impossible to overestimate.
A perennial question in Computer Science has nothing directly to do with code or algorithms, and everything to do with people. To wit: Why don't more women choose CS as a career path?
As a guy who has spent his entire professional career in CS and related policy arenas, this skewing has been obvious to me pretty much since day one.
It's not restricted to educational institutions and the workplace, it's also on display at trade shows, technical conferences, and even on social networking sites of all stripes.
And despite the efforts of major firms to draw more women into this field with some relatively limited successes, the overall problem still persists.
All sorts of theories have been postulated for why women tend to avoid CS and the related computer technology fields, ranging from "different nurturing patterns" to inept school guidance counselors.
But I suspect there's an even more basic reason, one that women tend to detect quickly and decisively.
The men of computer science and the computer industry are misogynous jerks.
Not all of them of course. Likely not even the majority.
But enough to thoroughly poison the well.
This goes far beyond guys crudely hitting on women at conferences, or the continuing presence of humiliating "booth babes" at trade shows.
The depth to which this attitude pervades has been on especially painful display on the Web over the last couple of days, in connection with a very important operating system technical discussion list.
Since I don't want this to be about individuals, we'll call the person at the focus of this list by the label "Q" -- after the supercilious, intelligent, arrogant, omnipotent character from the "Star Trek" universe. Not evil per se -- in fact capable of great constructive work -- but most folks who come in contact with him are unwilling to risk the wrath of such a powerful entity. Indeed, an interesting character this Q.
Back here in what we assume is the real world, the current controversy was triggered when a female member of that technical discussion list publicly criticized "Q" and what we'll politely call his "boorish" statements on the list -- causing at least one observer to note that it was the first time they'd seen anyone stand up to Q that way in 20 years. This woman -- by the way -- is the formal representative to the list in question from an extremely important and major firm whose technology is at the heart of most personal computers in use today.
The particular examples she cited were by no means the most illustrative available -- aficionados of the list in question realize she was showing admirable diplomatic tact.
But while reactions to her statements in the associated list thread itself can certainly be described as interesting, many of the reactions that have appeared externally in social media can only be described as vomit inducing.
I can't even repeat many of them here, but just a sampling I've seen and/or directly received:
- "Nobody told her she had to work with Linux, get off the list!"
It was getting so bad that I had to shut down comments on two discussion threads last night before going to bed to avoid their turning into rancid cesspools in my absence -- and I wasn't the only one who had to take that action.
One might argue that all this isn't unique to computer science and the broader computer industry, and you'd be correct. This kind of "boys will be boys" sexism pervades our culture and in fact has driven many women into refusing to even identify as female in social media or discussion lists at all.
But the "it's not really important, and everybody's doing it anyway!" excuse is utterly bogus.
While we may not be able to change these attitudes in the culture at large, we can at least take steps to clean up our own house, to try to bring a basic level of civility to our own work in these regards.
But first we need to admit that the status quo is indeed unacceptable, and many in our community's "good ol' boys club" are currently refusing even to go that far.
The technical and policy issues we're dealing with are far too crucial to permit them to be distorted by juvenile, sexist, and loutish behavior that discourages maximum practicable inclusion and participation.
And rather than acting as tacit examples of bullying that help feed even worse abuses, leaders in our technical community should be taking the responsibility to be examples in public -- if not of exemplary behavior -- at least of basic politeness.
If people want to be jerks in their private lives, that's up to them. But keep your bad behavior and sexist crap out of our work.
And that goes for you, me, Q, and everyone else as well.
It was only late last year that freedom-loving Internet users around the world were transfixed with concern regarding a possible United Nations takeover of the Internet -- largely pushed by Russia and other repressive regimes.
A massive effort to fight back against this was triggered, including this strong campaign by Google, which I supported.
The threat from the UN's International Telecommunication Union (ITU) was fought back for the moment -- and we all gave a sigh of relief.
But now, in a clear demonstration that actions do have consequences, often unintended ones, "The New York Times" reports that Russia is again demanding a UN Internet takeover of exactly the sort repressive governments around the world have long been lusting after, and using Edward Snowden's continued presence in Russia as a foundation for this new thrust.
Acting as a catalyst for a crackdown against freedom of speech on the Net was certainly not Snowden's intention -- quite the opposite, it's reasonable to assume.
But even many of Snowden's most dedicated supporters have seemed increasingly uneasy at his continuing presence in Russia, under at least the putative control of Putin -- in a country where you can spend years in a forced labor camp for the crime of blasphemy, and where freedom of speech is still largely an unfulfilled dream.
And while Snowden's supporters and Snowden himself suggest -- with considerable merit -- that the focus should be on global intelligence agencies and not on Snowden, the fact is that the way events have unfolded, Snowden has become the center of attention, and continues to be in the spotlight.
This puts him -- we can assume unwillingly -- in something of the position of an "international pawn" to be played by the various powers with their complex agendas, like icebergs, mostly hidden below the surface.
It may well be the case that Snowden saw no practical alternative other than fleeing to Russia and asking for asylum there. This course of action may yet well serve his needs.
But it would be naive for anyone -- for any of us -- to assume that Russia would not attempt to leverage a situation like this for their own purposes of Internet control. Whether or not they succeed is a wholly different question, and all of us will have a say in that, one way or another.
Yes, planned or not, incidental or not, actions do have consequences, and it would be ironic indeed if Edward Snowden's stated quest to promote the cause of freedom around the world had the unintentional effect of helping to crush Internet freedoms at the hands of his benefactors of the moment.
Direct YouTube Link (~1.25 hours)
Related blog posting: Google's New +1 Sharing Has Some Issues, but It's Not a Privacy Problem
Google made some significant changes to the way sharing works on Google+ today, and all day long I found myself being pulled into conversations discussing the topic, some of whose participants seem upset to the level of semi-ranting ("semi-ranting?" Is that anything like "semi-pregnant?" But I digress ...)
The essence of the changes, which are being announced to users via an initial info pop-up, is that when a G+ user +1s a G+ posting, that activity may now result in that posting being shared out to other users' home G+ streams. I say "may" because there appears to be some sort of (publicly undefined) algorithm in play determining which +1 actions will be "highlighted" (shared) in this manner.
It's important to note that this is definitely not "frictionless sharing" of the sort that has become negatively associated with, for example, Facebook. Merely accessing a post for reading will not share it via this new capability. It takes an explicit action of endorsement -- a +1 -- to trigger a possible new "implicit" share.
And in fact, a form of this has existed for quite some time on G+ for +1 shares on external sites. When +1 buttons are embedded externally, they normally note that they will both +1 and share the page to G+.
So, what Google has essentially done (and this is all based on what I've seen publicly today) is extend the definition of +1 sharing within G+ itself, subject to whatever limiting factors are imposed by that algorithm mentioned above, and additionally by user controls.
And in fact, specific user controls over this have been provided, though arguably in a manner that raises some concerns over potential "information overload."
Users can control if, and how widely, their G+ +1 activity will be eligible for sharing under this new system. By default, this capability is enabled, and set to fairly broad conventional sharing in the context of G+ -- that is, extended circles (people you follow, plus people they follow, subject to visibility restrictions).
The controls allow dialing this down all the way to no new +1 sharing at all, if desired.
However, this is apparently strictly a user outbound sharing control.
From the inbound standpoint -- that is, how many of the new +1 shares from other people you will see -- there is apparently no new specific control. Instead, the normal G+ circles "volume slider" controls apply to the union of conventional sharing and new +1 sharing from any given user -- this union is essentially the new definition of G+ sharing per se in this context.
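To make the interplay of these constraints concrete, here is a minimal sketch of the sharing flow as described above. This is strictly a hypothetical model for illustration -- the function name, the scope values, and the structure are my own assumptions, not Google's actual implementation or API, and the real "highlight" algorithm remains publicly undefined.

```python
# Hypothetical model of the new +1 sharing flow -- illustrative only.
# All names and scope values here are assumptions, not Google's actual code.

EXTENDED_CIRCLES = "extended"   # the default outbound scope, per the text
CIRCLES = "circles"
DISABLED = "disabled"

def eligible_to_share(post_visible_to_viewer, sharer_outbound_scope,
                      viewer_in_extended_circles, viewer_in_circles,
                      highlight_algorithm_picks):
    """Return True if a +1 by the sharer may surface in a viewer's stream.

    Mirrors the constraints described above:
      1. The existing G+ permission model still governs: a post not already
         visible to the viewer is never surfaced (no privacy change).
      2. The sharer's outbound control can disable or narrow +1 sharing.
      3. An undisclosed algorithm decides which +1s are actually highlighted
         (modeled here as a simple boolean input).
    """
    if not post_visible_to_viewer:          # rule 1: permissions unchanged
        return False
    if sharer_outbound_scope == DISABLED:   # rule 2: outbound control
        return False
    if sharer_outbound_scope == CIRCLES and not viewer_in_circles:
        return False
    if (sharer_outbound_scope == EXTENDED_CIRCLES
            and not viewer_in_extended_circles):
        return False
    return highlight_algorithm_picks        # rule 3: opaque selection
```

Note that in this model the viewer's only recourse is the per-circle "volume slider," which is not represented here because it filters all sharing from a user, not +1 shares specifically -- precisely the gap discussed below.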
A valid question raised about this -- particularly since the new +1 sharing defaults to enabled -- is twofold: are users fully cognizant that their +1 activity may now impact other users' streams, and is the lack of a means for recipients to explicitly choose between conventional shares and new +1 shares appearing on their home streams a potential problem?
The answers would seem to be quite situation dependent. One obvious factor is how much +1 activity is occurring within the scope that could impact any given user's stream. For some users, the impact could be quite significant. For others, not much at all.
I also saw a concern expressed today by someone worried about "inappropriate material" appearing in their home stream via the new +1 mechanism. Keeping in mind that we're talking about the inbound stream that a user sees, not what they're sending out to their followers, the issue here appears to relate mostly to a concern that Not Safe For Work shares, triggered by the new +1 system, might appear on a workplace computer stream and cause problems for the associated user with their employer.
This is not an outrageous concern by any means, though I wonder how often it would actually be an issue. Perhaps more to the point, if you're following someone who +1s items like that, they may also be likely to directly share such materials anyway.
But frankly, I don't think anybody really knows yet if users will differentially evaluate their direct sharing vs. "automatically shared via +1" activities, especially once this new modality has become more broadly understood.
All that said, my initial impression of the overall situation is that providing users with an explicit toggle or finer-grained control allowing them to express some form of "I'd rather not see other users' +1 shares in my stream" directive is a concept that is worth considering.
And while there have been multiple invocations of privacy concerns today by some observers regarding the new +1 sharing system, I simply do not see any valid privacy-related issues.
The new +1 system explicitly does not extend beyond the previously existing G+ permission model. G+ postings that were not previously visible to any given user are still not visible to that user and aren't impacted by the +1 changes.
There has been some concern expressed that the new +1 environment may allow other users to discover +1 activity on postings that they might not otherwise have known about unless they "stumbled" into them.
In fact, I would argue that a major aspect of the new +1 system is to encourage such discovery within circles and extended circles, and that this is a very positive aspect. A +1 action on a public posting is by definition a public act -- it has been since the start of G+. Attempting to impose what amounts to "granularity" onto public acts of this sort -- seemingly hoping futilely for a kind of "privacy through obscurity" -- just doesn't make sense anymore, if it ever did (which I personally greatly doubt).
So ... where does this leave us? I suggest restricting to private postings those materials and actions (including +1s) that you don't feel comfortable being actually public.
Be aware when you +1 that you may be sharing as well, and if you're uncomfortable with that responsibility for any reason, use the G+ controls to limit or disable the new +1 sharing capabilities for your account.
If you find you're getting what you feel are too many +1 shares from particular users, you might consider moving them into a separate circle and "turn down the volume" for that circle, in lieu of a specific control currently being available to enable or disable reception specifically of +1 shares.
Perhaps most importantly of all, remember that this is a new feature -- it's not engraved in granite. If you have feedback about this (positive, negative, or both) please do use the G+ feedback forms -- they really do matter.
And as always, I welcome your comments and other thoughts.
It would be just swell to wake up one morning, check the news, and not feel like I was trapped in a 21st century version of the film "Groundhog Day" or some sort of twisted parallel universe where the Mad Hatter rules supreme.
But it appears that we're still doomed to endless replays of "I Got You Babe" and calls of "I want a clean cup!" into the foreseeable future.
To wit, one need only note the new statements of feigned outrage from the EU, complaining about Google's privacy policies, generally similar in focus to those we've seen from the FTC here in the USA at various times.
These are much the same European countries, we note with bemusement, that have either been revealed to be running (or can reasonably be assumed to operate) major communications surveillance operations against their own citizens and international traffic, in manners similar to those that have brought our own NSA into an unwelcome (for the agency itself, anyway) spotlight.
Over the last few days we've heard much about the UK's massive comm spying ops, and just yesterday France's own vast efforts along these same lines were revealed. Sacrebleu!
And if anyone really believes that Germany and pretty much all other western countries with sufficient resources -- plus our old friends like China, Russia, and the rest -- aren't engaging in the same "Spy vs. Spy" routines, there's an old bridge over the East River you might wish to consider purchasing.
The reality is that even while governments profess outrage over spying from other countries, their own surveillance systems are hungrily sucking up everything they can get their hands on.
This isn't new at all.
But in the past, this kind of hypocrisy wasn't generally used to try to damage major corporations like Google that serve consumers' privacy and other interests.
I'm not at all a fan of conspiracy theories. Yet it isn't necessary to believe them to smell a smokescreen of government misdirection aimed at diverting attention from enormous personal privacy abuses by governments, through repeatedly trying to scapegoat Google policies that not only don't do damage to consumers, but actually serve consumers' key interests far better than government does nowadays.
So while governments are vacuuming up phone, mail, and financial records, all personally identifiable, we see public attacks by government on harmless, anonymous ad personalization systems, benign browser cookies, and accidental collection of harmless data from open Wi-Fi systems.
If one were actually of a more conspiratorial bent, one might even ponder if Google's forceful pushback related to overly broad government data demands and other associated actions inspired some parties to promote the hyperbolic and discredited (though still widely repeated in the press) false claims of the NSA "PRISM" program having direct access to Google servers.
But again, we need not invoke conspiracies, nor even a high degree of government coordination, to explain this kind of government hypocrisy.
For hypocrisy is indeed as old as governments themselves -- and diversionary smokescreens are a tried and true technique to be sure.
The difference this time is that the magnitude of public revelations about government surveillance programs around the world -- programs that we always knew existed but were rarely spoken of in public -- allows us to more clearly see the scope of hypocrisy aimed at Google by USA and EU regulators and politicians.
And that clarity may help make this Mad Hatter Groundhog Day universe a bit more understandable at that.
Clean cups, anyone?