August 01, 2015
How Windows 10 Reveals Microsoft's Ethics Armageddon
Over the last few days, I've been discussing various problematic issues involving Microsoft's new Windows 10 operating system, most recently in:
Windows 10's New Feature Steals Your Internet Bandwidth:
Today, though, I'm not getting into technical details, but rather pulling back our camera a bit for a wider view of what Microsoft seems to be doing -- and unfortunately it's a very sad commentary indeed.
I'm not being facetious. There have been and still are many great people at Microsoft. Bill Gates and the company he founded contributed mightily to the development of the personal computer industry and much that subsequently evolved.
It's clear though that MS is at a crossroads, at a point of existential importance to the entire firm.
The market for consumer-level operating systems as items to be purchased has rapidly dried up. Microsoft's foray into hardware has -- we can charitably say -- been less than impressively successful.
So it's not a surprise that MS has explicitly and publicly been remaking itself as an Internet services company -- a logical decision given the cards MS now has available to play.
Yet much as Microsoft was a bit late to realize the Internet's importance many years ago, they're again late to the game, and the pressures they feel are obvious to any perceptive observer.
All of this can help us to understand -- but not to excuse -- the ethical collapse that Windows 10 appears to represent for a once great company.
And yes, this is very much a matter of ethics, in much the same vein as bait-and-switch artists and underhanded used-car salesmen of popular lore.
These various players -- including Microsoft in their handling of Windows 10 -- share a common defining characteristic, a shared ethical flaw.
They avoid being up-front and honest with consumers.
The irony is that these ethical lapses are so easily avoided.
If the bait-and-switch artist were honest about what they actually wanted to sell, they'd be in the ethical green zone.
If the used car salesman were direct about flaws in the vehicle on display, there would be no ethical complaint to be lodged.
The same would apply to Microsoft.
MS seems to be failing at ethics even in some of the more minor areas -- with word that the popular old Solitaire game from Windows 7 and earlier has been replaced on Windows 10 with a version that forces you to sit through video advertisements unless you're willing to pay Microsoft $10 per year to shut them off.
To be sure, we can guess that somewhere up at MS headquarters in Redmond, a meeting took place where something like this was said:
"Hell, we're giving most of these people free versions of a new operating system, we've gotta get something in exchange, and they don't have any right to complain!"
That would be so very, very wrong.
Because while large numbers of users might well consider such trade-offs to be equitable and reasonable, the ethical requirement in the main when dealing with significant issues is simple: You ask permission first.
And asking permission in this context doesn't mean assuming permission, or burying disclosures, or operating on the assumption that simply providing a way to turn something off later is the same as asking permission to turn it on in the first place.
Let's take Microsoft's default commercial use of users' bandwidth to send updates to other MS users in Windows 10, for example.
Imagine that one day you notice your home water pressure seems low. You search around and discover a truck parked outside that is filling its big water tank from your water system, via a hidden hose.
When confronted, the truck owners state that they didn't think they were taking all that much, and if it bothers you they'll stop.
Whether you paid for that water by the gallon or got it all flat rate, I'd wager that most people would react the same way, demanding to know: "Why the blazes didn't you ask permission first?"
To which the likely response would be: "We didn't tell you about it -- we didn't ask -- because we thought you might say no."
This is certainly not to imply that every minor user interface or operations decision must be opt-in only -- but at the very least, issues of significant magnitude must be clearly and openly spelled out in advance, not relegated to "if we're lucky most users won't notice what we did" status.
The latter course is the path to ethics hell, and no amount of free giveaways or slick talk alone can prevent a complete descent into that pit once a firm steps off the ethics precipice.
Can Microsoft still save itself from this fate? Of course, given the will. Much of what they'd need to do immediately could in theory be pushed out to Windows 10 users in a matter of days -- better explanations, asking permission, ethical defaults.
But my gut feeling says that MS is not prepared to make such a major ethical course correction at this time, and that's truly unfortunate.
Hope springs eternal. Perhaps Microsoft will prove my gut feelings on this to be incorrect. Perhaps MS will indeed alter direction and proceed toward the ethical light.
That would be delightful. But don't hold your breath.
I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.
July 31, 2015
Windows 10's New Feature Steals Your Internet Bandwidth
A couple of days ago I discussed a number of privacy and other concerns with Microsoft's new Windows 10, made available as a free upgrade for many existing MS users:
Windows 10: A Potential Privacy Mess, and Worse:
The situation has only been getting worse since then. For example, it's been noted that the Win10 setup sequence is rigged to try to fool users into switching to an MS browser, irrespective of their browser settings before they started the upgrade:
Mozilla isn't happy with Microsoft for changing how users change the default web browser in Windows 10:
Pretty bad. But we have even lower to go, as we've seen that by default, Windows 10 actually steals bandwidth from your ISP connection so that Microsoft can use your computer, and your connection, to send MS updates to their other customers.
Huh? Say what?
Yep. It's a devious little feature called Windows Update Delivery Optimization. It's enabled by default. For Enterprise and Education users, it operates over the local LAN. For ordinary Home type users, Microsoft can send their data update goodies to potentially any PC on the global Internet -- from your PC, over your Internet connection. On your dime.
We could get into the pros and cons of local updates being staged between local machines on a LAN as opposed to the outside Internet.
But as soon as MS decided that it's A-OK for them to use my Internet connection to cut down on their bandwidth costs serving their other customers -- without asking me for my specific permission first -- the situation blows into the red zone immediately.
Microsoft makes the predictable excuses about this high-tech thievery.
There's a way you can turn it off. Yeah, buried down deep in the settings, assuming you even know about it in the first place. MS claims they only use your connection when it's "idle" by their definitions. Thanks a bunch.
Oh yes, and (how generous of them!) Microsoft notes that they won't steal bandwidth this way from "metered" connections.
But here's the catch -- in many common configurations you must manually mark a connection as one that shouldn't be used for MS' update delivery scheme, because Microsoft otherwise has no way to know whether (for example) you're paying by the gigabyte or have a low bandwidth cap.
Above all, the sheer arrogance of Microsoft to enable this bandwidth theft by default is stunning.
I don't care if they want to move 1K or 1gig to their other happy users, I want to damn well be asked permission first!
Obviously, this general category of peer-to-peer data transfer is used on the Net in other contexts, such as torrents for example -- but that's something you do voluntarily, of your own volition. Comcast uses the bandwidth of many Comcast users to turn modems in people's homes into public Wi-Fi access points. This has been highly controversial, but at least Comcast is typically doing it over modems they supplied, and has claimed that they over-provision the connection speeds to take this into account -- and don't apply that public usage against home users' bandwidth caps.
But Microsoft didn't even bother with such rationalizations. They simply said in essence: "Hey, you've got bandwidth, so we're gonna use it however we please unless you tell us differently. Suckers!"
If you're running Windows 10, you may want to terminate this travesty.
The settings you need are buried down in:
START->Settings->Update & Security->Windows Update->Advanced options, under: Choose how updates are delivered.
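For those who'd rather script the change than click through Settings, Delivery Optimization can also reportedly be controlled via a registry policy value. This is a sketch under the assumption that the `DODownloadMode` policy value (as documented for Windows 10's Delivery Optimization Group Policy) is honored by your particular build -- verify against your own system before relying on it:

```
Windows Registry Editor Version 5.00

; 0 = HTTP only: fetch updates from Microsoft directly, no peer-to-peer sourcing
; (for reference: 1 = LAN peers only; 3 = LAN plus Internet peers, the Home default)
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DeliveryOptimization]
"DODownloadMode"=dword:00000000
```

Import with regedit (administrator rights required), then reboot or restart the update service for the policy to take effect.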
It's worth noting at this point that if Google had tried a stupid stunt like this, there would likely already be EU commissioners running through the streets of Brussels hoisting pitchforks and flaming torches, all yelling for Google's blood.
For a while there, it was starting to look like there indeed was a new kind of Microsoft coming into view, one that had evolved beyond the hubris that had so long been Microsoft's single most defining characteristic.
As we can see, any such hopes are now ... Gone with the Win10.
July 29, 2015
Windows 10: A Potential Privacy Mess, and Worse
Blog Update (31 July 2015): Windows 10's New Feature Steals Your Internet Bandwidth
I had originally been considering accepting Microsoft's offer of a free upgrade from Windows 7 to Windows 10. After all, reports have suggested that it's a much more usable system than Windows 8/8.1 -- but of course in keeping with the "every other MS release of Windows is a dog" history, that's a pretty low bar.
However, it appears that MS has significantly botched their deployment of Windows 10. I suppose we shouldn't be surprised, even though hope springs eternal.
Since there are so many issues involved, and MS is very aggressively pushing this upgrade, I'm going to run through key points here quickly, and reference other sites' pages that can give you more information right now.
But here's my executive summary: You may want to think twice, or three times, or many more times, about whether or not you wish to accept the Windows 10 free upgrade on your existing Windows 7 or 8/8.1 system.
Microsoft is thrusting out this update via a little white Windows icon that you will probably see soon (if you haven't already) on your task bar. There are some users in some situations who will not receive this notification, but most of us will. This icon leads to MS' colorful spiel for why you want to install the free Win10 upgrade.
First things first. It's obvious from my email today that this icon and MS pitch alone are confusing many users. They've never seen anything like this appear before and many think it's a virus or that their system has been otherwise compromised.
In fact, this notification is triggered by a Windows Update that MS slipped into their update stream some time ago, which the vast majority of users probably accepted without realizing what it was.
If you decide you do not wish to upgrade to Win10 now, you may want to get rid of that notification. MS doesn't tell you how (surprise!) and the procedure can range from relatively simple to "a real mess" depending on your situation, but a good discussion of the procedures and provisos is at:
Many users -- especially on somewhat under-powered systems -- may find Win10 to be a painfully slow experience compared with Win7, irrespective of MS' claims.
Worse, some functionalities important to many users are missing. If you use Windows Media Center -- that's gone from Win10. DVD playback is currently problematic.
And here's a biggy. If you don't want Microsoft installing updates automatically -- if you're a user who has chosen to take control of this process up to now -- you probably will hate Win10.
Users with Home versions of Win10 will be required to accept automatic updates, including drivers.
In some environments, this is unacceptable from a support and security standpoint, and reports are already coming in regarding driver related issues.
It's fair to say that in the general case, automatic updates are usually a win from a security and reliability standpoint. But Windows is significantly unique. Because Windows runs on such an enormously wide range of hardware and configurations (compared for example to Chrome OS on Chromebooks) the ways for automatic updates to cause problems for Windows users are dramatically numerous as well. Definitely an important issue to consider.
You may have heard concerns about the sharing of Wi-Fi passwords by Win10. This is largely not a problem in practice, given the details of the implementation.
But Win10 still looks like it could be a privacy quagmire.
You can read an analysis of this here:
As is the case with automatic updates, there is nothing inherently wrong with cloud data syncing, and it can bring significant service and reliability enhancements to users (keeping in mind how infrequently most people properly backup their systems).
But if you're going to avail yourself of such cloud data services, you really need to trust the firm you're dealing with, across the scope of possible data-related aspects.
And to be completely honest about this, I personally simply do not trust Microsoft to the degree that would seem necessary to use the default data sharing settings that Microsoft really, really, really wants you to use -- and of course that the vast majority of users will blithely accept. To put it another way, in this context I trust Microsoft about as far as I could throw a heavy old steel-cased 1980s PC.
Being careful with your data isn't just a Microsoft thing. My views of Microsoft and Google are pretty much diametrically opposed -- I have enormous faith in Google and Googlers doing the right thing with respect to protecting the data I share with them, but even in the case of Google -- with whom I share a great deal of data -- I'm selective about what I do share.
That's just common sense no matter whom you're dealing with, whether individuals, corporations, or other organizations.
The upshot of all this is that while we can all agree that "free" is often good, there's a lot to think about before accepting Microsoft's heavily promoted upgrade to Windows 10, and we all need to approach this decision with our eyes very wide open, indeed.
Be seeing you.
July 27, 2015
What Google's New Changes to Google+ and YouTube REALLY Mean
In a pair of blog posts today, Google announced major changes in the operations of their Google+ (G+) and YouTube services:
There are a number of changes noted, but my executive summary would be that Google is ending the enforced connection of Google+ user profiles to other Google services, notably YouTube.
The popular clickbait analysis appearing on many sites today is that this is the death knell of Google+, proof that it cannot compete with Facebook.
This is incorrect.
Taking the longer view -- and my experience with networked social media reaches back to the dawn of the ARPANET and the earliest email lists -- my own analysis is that the changes are great both for YouTube AND for Google+.
In fact, I believe that these changes indicate Google is actually ahead of the curve regarding the future of social networking, and has already learned lessons that other social media sites -- notably Facebook -- have yet to fully understand, likely to their own long-term peril.
The linkage of G+ profiles to other Google services came during a time when a particular theory -- taken to extremes by Facebook -- was in ascendancy.
Simply put, this premise -- part of a general anti-anonymity concept at the time -- was that forcing users to only post under their ostensibly true identities would result in higher quality posts, with less trolling or other posting abuses.
Experience quickly demonstrated that this was far too simplistic an idea, tangled with immense potential for collateral damage.
Abusive posters and commenters by and large seem happy to spew their venom under their actual names -- a twisted badge of honor, perhaps.
At the same time, real-name linkage requirements caused all manner of problems for many innocent parties.
There are all sorts of reasons why posting with your own name can be immensely problematic in various cases -- in some situations even dangerous. This is especially true when discussing controversial issues, medical conditions, and all manner of other issues that reflect one way or another on your personal life.
The underlying reasons for this dilemma relate in large part to life contexts.
For example, when you're posting or commenting about highly controversial political matters, you're operating in a very different context than when you're trying to give advice about a problem someone is having with their child.
Similarly, there are no logical reasons why your discussions regarding technical matters must be intertwined with your posts regarding alternative lifestyles or other personal concerns -- unless you voluntarily choose to connect them in this way.
Contextual issues play a major role in the YouTube/Google+ arena as well.
YouTube-side commenters tend to gather around particular YouTube videos and YouTube uploaders/channels of interest. Google+ users who happen to share specific YouTube videos are much more often speaking to an audience that has no continuing interest in those particular videos, but rather is following the varied postings of an individual or other profile on G+ over time.
As someone with nearly 400K G+ followers, I can tell you from experience that the conflation of YouTube and G+ comments wasn't only confusing to many users, but could trigger some nasty situations as well, when YouTube uploaders viewed a G+ share as an "intrusion" into the comments on their channel. I quickly learned to avoid sharing YouTube videos relating to any controversial topics on G+ (especially with my own preamble text that might be critical of a particular video). Otherwise, I could end up spending hours afterwards cleaning up the mass of troll comments -- and on some occasions even threats -- that spewed in from the YouTube side.
But YouTube users' complaints about this were not entirely without merit, since the commenting contexts were intrinsically entirely different. In a perfect world we might hope that this would be a recipe for expanded points of view and teachable moments, but in reality it tended to trigger trolling and conflicts -- and as I noted above, confusion as well.
Confusion is indeed another key point. I love Google+, but it became increasingly difficult for me to convince existing Google users -- or new potential Google users -- to create G+ profiles. Often they were convinced -- based on inaccurate stories they'd heard -- that activating G+ would cause their Gmail accounts to suddenly be exposed, searchable, and tied to their real names. This was never true, but the perception was widespread, likely helped along by various of Google's adversaries.
I'm not a fan of Facebook for a whole bunch of reasons. One of these is that Facebook so often seems to be a place where users feel obligated to be because their friends and families are there -- rather than somewhere they really want to be.
On the other hand, I love Google+. I'm constantly meeting new people from all walks of life around the planet, and am able to engage in a range of discussions with them across a wide scope of topics. Are there some trolls mixed in there as well? Sure. But overall the scope of intelligent and fascinating G+ users utterly swamps the relatively small number of trolls who are comparatively easily dealt with on G+.
I'm convinced that the changes Google announced today will not only make YouTube users happy, but will be great for the organic growth of G+ and other Google services. These changes reduce confusion and bring clarity to these offerings, and that's good for users and good for Google.
Kudos to the Google teams involved!
July 16, 2015
Meeting Donald Trump
I suddenly realized why I feel like I've met Donald Trump. Turns out that I once knew someone almost exactly like him in every important detail.
I'd forgotten about him until recently. He was very wealthy and constantly bragging about it. He didn't care at all for anyone else's feelings. He constantly made inane assertions without backing them up or explaining them in any way. He had a consistent "my way or the highway" attitude.
He was massively bigoted and seemed not to even realize it. He made up fake facts to fit the moment, changed them with abandon, and endlessly declared that everyone else was stupid.
He was in fact the most obnoxious ten-year-old that my ten-year-old self had ever met.
July 06, 2015
UI Fail: How Our User Interfaces Help to Ruin Lives
A couple of months ago, in Seeking Anecdotes Regarding "Older" Persons' Use of Web Services, I asked for stories and comments regarding experiences that older users have had with modern Web systems, with an emphasis on possible problems and frustrations.
I purposely did not define "older" -- with the result that responses arrived from users (or regarding users) self-identifying as ages ranging from their 30s to well into their 90s (suggesting that "older" is largely a point of view rather than an absolute).
Response rates were much higher than I had anticipated, driven significantly by the gracious endorsement of my survey by Leo Notenboom of ASK LEO!, who went out on a limb and assured his large readership that I was not some loony out to steal their personal information.
Before I began the survey I had some preconceived notions of how the results would appear. Some of these were proven correct, but overall the responses also contained many surprises, often both depressing and tragic in scope.
I had not anticipated the amount of details -- and in particular of highly personal details -- that would arrive in these surveys.
It was immediately obvious that many of these respondents were long frustrated by these issues, and viewed the survey as finally an opportunity to get these concerns off their chests. Much of what they described was heartbreaking.
What was perhaps most surprising was that a deep data dive was not necessary to see the common themes -- they stuck out like a sore thumb from the very first responses onward.
And many of the problems cited are solely our faults, our responsibilities, our shame.
Responses poured in both as first-person reports and as testimonials by family, friends, caregivers, and other persons acting as "tech support" (often remote tech support) for older users.
Any stereotypes about "older" users were quickly quashed.
While some of the users had indeed never had much computer experience, a vast number of responses involved highly skilled, technologically-savvy individuals -- often engineers themselves -- who had helped build the information age but now felt themselves being left behind by Web designers who simply don't seem to care about them at all.
While issues of privacy and security were frequently mentioned in responses, as were matters relating to fundamental service capabilities, issues and problems relating to user interfaces themselves were by far the dominant theme.
Some of these were obvious.
There is enormous, widespread frustration with the trend toward low-contrast interfaces and fonts, gray fonts on gray backgrounds and all the rest. Pretty, but unreadable to many with aging eyes (and keep in mind, visual acuity usually begins to drop by the time we've started our 20s).
Many respondents noted that screen magnifiers can't help in such situations -- they just end up with a big low-contrast blob rather than a small low-contrast blob.
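This complaint about low-contrast text is quantifiable, not just aesthetic grumbling. As a rough illustration -- using the WCAG 2.0 relative-luminance and contrast-ratio formulas, with color values chosen here purely as examples -- a typical gray-on-gray pairing falls far short of the commonly cited 4.5:1 minimum for body text:

```python
def _linear(channel):
    # Convert an 8-bit sRGB channel (0-255) to linear light, per WCAG 2.0.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    # Relative luminance of an (r, g, b) color.
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), range 1 to 21.
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Medium gray text (#999999) on a light gray background (#CCCCCC):
ratio = contrast_ratio((153, 153, 153), (204, 204, 204))
print(round(ratio, 2))  # well below the 4.5:1 WCAG AA minimum for body text
```

Black on white scores the maximum 21:1; the fashionable gray-on-gray combinations that respondents complained about often land below 2:1, which no screen magnifier can fix.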
But then we really get into the deeper nitty-gritty of UI concerns. It's a long and painful list.
Hidden menus. Obscure interface elements (e.g., tiny upside-down arrows). Interface and menu elements that only appear if you've moused over a particular location on the display. Interface elements that are so small or ephemeral that they can be a challenge to click even if you still have the motor skills of youth. The list goes on and on.
And beyond this, there is even more frustration with what's viewed as undocumented and unnecessary changes in interfaces.
For a user with fading memory (another attribute that begins to surface relatively early in life) the sudden change of an icon from a wrench to a gear, or a change in a commonly used icon's position, can trigger such frustration that users who could most benefit from these systems -- especially for basic communications -- become embarrassed and, not wanting to ask for help, give up and withdraw back into deadly isolation.
These were by far the most repeated themes in responses -- concerns regarding the rapid and seemingly arbitrary changes of hard to find, see, and click UI elements and associated menu/command functionalities.
The frustration of caregivers in these contexts was palpable.
They'd teach an older user how to use a key service like Web-based mail to communicate with their loved ones, only to discover that a sudden UI change caused that user to give up in frustration and not want to try again. When the caregiver isn't local the situation is even worse. While remote access software has proven a great boon in such situations, it's often too complex for the user to set up or fix by themselves when something goes wrong, leaving them cut off until the caregiver is back in their physical presence.
I could go on, but you get the idea. With subtle variations in details, I was seeing the same sad stories over and over again as I pored through the survey responses.
We have failed a user population that not only needs our services but for whom our communication services -- including social media -- can make sometimes critical improvements in their lives, especially in helping them not withdraw into isolated oblivion.
We could argue about the motivations, history, and policies that brought us to this current state of affairs, but I'm much more interested in solutions.
So I have a modest suggestion.
I would like to see major Web services commit themselves to the proposition of providing optional and easily enabled "basic interfaces" to their main services, alongside the existing "primary" interfaces.
We're not talking "dumbed-down" interfaces here. We're talking about UIs that feature clear menus, obvious and easy to click icons, and most importantly, that would be supported for important functionalities for significantly longer periods of time than the rapidly evolving primary interfaces themselves.
This is most assuredly not a question of halting innovation, but rather of respecting the differing needs of different users at various stages of their lives.
And frankly, I suspect that "basic" interfaces as described would be widely welcomed by significant numbers of users irrespective of their ages.
I have some detailed thoughts on how such basic interfaces might be structured and deployed vis-a-vis primary interfaces, but I won't bore you with that here.
What's important right now is that we commit ourselves to the proposition that we need to better serve all users, and that our current largely one-size-fits-all user interface methodologies are actively working against this crucial concept.
Thankfully, accomplishing this doesn't require any artificial intelligence breakthroughs or rocket science. It requires only that we agree that these users are important and that we allocate reasonable resources toward these solutions.
As an industry, we seem to be great at coming up with high-end services to better serve the young and elite.
It's time that we put the same efforts into better serving everyone else as well.
June 30, 2015
Terrorism, the Internet, and Google
For those of us involved in the early days of the Internet's creation and growth, it would at the time have seemed inconceivable that decades later the topic of this post would need to be typed. I think it's fair to say that none of us -- certainly not yours truly -- ever imagined that the fruits of our labors would one day become a crucial tool for terrorists.
That day has nonetheless arrived, and it thrusts us directly into what arguably is the single most critical issue facing the Internet and Web today -- what to do about the commandeering of social media by the likes of ISIL (aka ISIS, or IS, or Daesh) and other terrorist groups.
As we've discussed in the past, governments around the world are already using the highly visible Internet presence of these criminal terrorist organizations as excuses to call for broad Internet censorship powers, and for "backdoors" into encryption systems that would be devastating for both privacy and security worldwide.
Yet it's the horrific terrorist "recruitment" videos that have quite understandably received the bulk of public attention, and they create a complex dilemma for advocates of free speech such as myself.
We know that free speech is not without limits -- the "yelling fire in a crowded theater" case being the canonical example.
How and where should we draw the lines on the Web?
Let's begin with a fundamental fact that is all too often ignored or misrepresented. When a firm like Google -- or any other organization outside of government -- decides it does not want to host or encourage any given type of material, this is not censorship.
Just as book publishers are not obligated to distribute every manuscript offered to them, and TV networks need not buy every series pilot that comes their way, nongovernmental organizations and firms are free to determine their own editorial standards and Terms of Service.
They need not participate in the dissemination of sexually-oriented videos, kitten abuse compilations ... or beheading videos produced by medieval, religious fanatic monsters.
Firms are free to determine for themselves the limits of what their content and services will be.
Governments -- on the other hand -- can censor. That is, they determine what private parties, firms, and other organizations are (at least in theory) permitted to produce, disseminate, or hear and view. And governments can back up these censorship orders with both criminal and civil penalties. They can throw you in shackles into a dark cell for violating their orders. Last time I checked, Google and other Internet firms didn't have such capabilities.
So when Google's chief legal officer David Drummond, and policy director Victoria Grand recently spoke of the need to fight back against ISIL and other terrorist groups' propaganda and recruiting use of YouTube in particular, and urged other firms to take similar social media stances, I was very proud of their positions and those of Google's broader policy team.
Even for a vocal free speech advocate such as myself, I cannot ethically condone the use of powerful platforms like YouTube as genocide-promoting social media channels by technologically skilled savages.
This is not to suggest that drawing the lines in such cases is anything but vastly complicated.
I have some significant insight into this thanks to my recent consulting to Google, and I can state unequivocally that the amount of emotionally draining, Solomonic soul-searching judgments that go into decisions regarding abusive content removals at Google is absolutely awe-inspiring. The motivated and dedicated individuals and teams involved deserve our unending respect.
Even seemingly obvious cases -- like those involving ISIL -- turn out to be decidedly difficult when you dig into the details.
Some governments would love to try to cleanse the entire Net of all references to these terror groups via broad censorship orders.
That would be doomed to failure of course, and in fact attempts to utterly banish information about the brutality of these beasts would do nothing to help the world clearly understand the depth of horror with which we're dealing.
Yet there is vanishingly little true probative value -- and there is vast salacious propagandistic recruitment power -- in the display of actual beheadings conducted by these groups, and Google is correct to ban these as they have.
A particularly disquieting corollary to this situation is the manner in which some of my colleagues seem unwilling or unable to appreciate the complexities and nuances inherent in these situations.
Many of them have expressed anger at Google for drawing these content lines, arguing that YouTube users should be permitted to post whatever they want whenever they want, no matter the content -- even if the videos serve purposely and directly as vile terrorist recruiting instruments.
Such arguments essentially attempt to equate all content and all speech as equal -- an appealing academic concept perhaps, but a devastatingly dangerous construct in the real world of today given the power and reach of modern social media.
To be crystal clear about this, I'll emphasize again that decisions about content availability and removal in these contexts are complex, difficult, and not to be approached cavalierly.
But I'm convinced that Google is doing this right, and the Web at large would do well to look toward Google as an example of best ethical practices in managing this nightmarish situation in the best interests of the global community at large.
June 22, 2015
DOJ vs. Google: How Google Fights on Behalf of Its Users
One of the oft-repeated Big Lies -- still bandied about by Google haters today -- is the false claim that Google enthusiastically turns over user data to government agencies. This fallacy perhaps reached its zenith a few years ago, when PowerPoint slides from Edward Snowden's stolen cache of NSA documents were touted by various commercial parties (to whom he had entrusted the data) in a misleading, out-of-context manner designed for maximum clickbait potential. The slides were publicized by these parties with glaring headlines suggesting that Google permitted NSA to freely rummage around through Google data centers, grabbing goodies like a kid set loose in a candy store.
Google immediately and forcefully denied these claims, and for anyone familiar with the internal structure and dialogues inside Google, these allegations were ludicrous on their face. (Full disclosure: While I have consulted to Google in the relatively recent past, I am not currently doing so.)
Even an attempt to enable such access for NSA or any other outside party would have by necessity involved so many engineers and other Google employees as to make impossible any ability to keep such an effort secret. And once known, there would have been very public, mass resignations of Googlers -- for such an intrusion would strike directly at the heart of Google philosophy, and the mere suggestion of such a travesty would be utter anathema to Google engineers, policy directors, lawyers, and pretty much everyone else at the firm.
Obviously, Google must obey valid laws, but that doesn't mean they're a pushover -- exactly the opposite.
While some companies have long had a "nod and wink" relationship with law enforcement and other parts of government -- willingly turning over user data at mere requests, without even attempting to require warrants or subpoenas -- it's widely known that Google has long pushed back, sometimes through multiple layers of courts and legal processes, against data requests from government that are not accompanied by valid court orders or that Google views as being overly broad, intrusive, or otherwise inappropriate.
Over the last few days the public has gained an unusually detailed insight into how hard Google will fight to protect its users against government overreaching, even when this involves only a single user's data.
The case reaches back to the beginning of 2011, when the U.S. Department of Justice tried to force Google to turn over more than a year's worth of metadata for a user affiliated with WikiLeaks. While these demands did not include the content of emails, they did include records of this party's email correspondents, and IP addresses he had used to log in to his Gmail account.
Notably, DOJ didn't even seek a search warrant. They wanted Google to turn over the data based on the lesser "reasonable grounds" standard rather than the "probable cause" standard of a search warrant itself. And most ominously, DOJ wanted a gag order to prevent Google from informing this party that any of this was going on, which would make it impossible for him to muster any kind of legal defense.
I'm no fan of WikiLeaks. While they've done some public good, they also behave as mass data dumpers, making public various gigantic troves of usually stolen data, without even taking basic steps to protect innocent persons who through no fault of their own are put at risk via these raw data dumps. WikiLeaks' irresponsible behavior in this regard cannot be justified.
But that lack of responsibility doesn't affect the analysis of the Gmail case under discussion here. That user deserved the same protection from DOJ overreaching as would any other user.
The battle between Google and DOJ raged for several months, generating a relatively enormous pile of associated filings from both sides. Ultimately, Google lost the case and their appeal.
This was still back in 2011. The gag order continued and outside knowledge of the case was buried by government orders until April of 2015 -- this year! -- when DOJ agreed to unseal some of the court records -- though haphazardly (and in some cases rather hilariously) redacted. These were finally turned over to the targeted Gmail user in mid-May -- triggering his public amazement at the depth and likely expense of Google fighting so ferociously on his behalf.
Why did DOJ play such hardball in this case, particularly involving the gag order? There's evidence in the (now public) documents that the government wanted to avoid negative publicity of the sort they assert occurred with an earlier case involving Twitter, and DOJ was willing to pull out all the stops to prevent Google from even notifying the user of the government's actions.
You don't need to take my word on any of this. If you have some time on your hands, the over 300 pages of related filings are now available for your direct inspection.
So the next time someone tries to make the false claim that Google doesn't fight for its users, you can print out that pile of pages and plop it down right in front of them. Or save the trees and just send them the URL.
Either way, the truth is in the reading.
Be seeing you.
June 19, 2015
Why Google Must Stand Firm: Putin Pushes the Dangerous "Right To Be Forgotten" Further Into Lunatic Land
A week ago, in my latest discussion of the nightmarish EU "Right To Be Forgotten" (RTBF), titled Just Say "NON!" - France Demands Right of Global Google Censorship, I once again emphasized the "camel's nose under the tent" aspect of RTBF, and how we should have every expectation that Russia, China, and other repressive regimes would make similar demands and attempt to have them implemented as global censorship by Google.
Well, that didn't take very long at all.
Indeed, Russia (and that means Czar Putin) is on the cusp of a vast RTBF law of its own, one that makes the awful EU version look like a picnic by comparison.
In the proposed "Soviet" version of RTBF, complainants wouldn't even have to specify links of concern, just vague topic areas. And unlike in Europe, even public figures could demand that Google and other search engine results be whitewashed to remove unflattering or revealing references.
Meanwhile, word comes from France that they might want Google to individually track French users wherever they go in the world so that they can be specifically subjected to EU RTBF censorship anywhere and everywhere. "Liberty, Equality, Fraternity?" Hogwash.
If this wasn't obvious before, it should be obvious to everyone with half a brain by now -- the power to censor Google and other search engines, placed into the hands of governments -- any governments, regardless of political orientation -- is the free speech destroying equivalent of handing nuclear weapons to terrorists.
Throughout recorded history, governments have wanted to control information, and the technology at hand provides them with a relatively easy means of finally fulfilling that frightful fetish.
No matter how much Google and others might try to negotiate or compromise in good faith with governments on this topic, the latter will ultimately demand ever more censorship and ever more direct control over search results. There will simply be no end to it.
It is absolutely crucial to the future of free speech and broader civil liberties around the world that Google stand firm against the encroachment of ever more damaging and berserk RTBF laws and demands. This is true even if it requires significant changes in some underlying business models. Because ultimately, if RTBF is permitted to continue on its current course of escalating censorship, those business models are going to be severely undermined in far more damaging ways -- damaging to Google itself and incredibly destructive to the global community that depends on Google to do the right thing in difficult situations. Google must obey the law, but it also has considerable latitude in how and where it chooses to operate -- latitude that can be very relevant to the RTBF and other censorship issues now at hand.
We're depending on Google to stand as a bulwark against the pages of useless, lowest common denominator search results that will be the inevitable outcome of RTBF laws continuing on their present course -- like a tidal wave leaving nothing but rubble in its wake.
We're standing at the crossroads of perhaps the most critical information freedom juncture in history since the invention and spread of the printing press.
There is no room for error.
Do this right, Google. I know you can.
June 17, 2015
Falling Into the Encryption Trap
This is a difficult discussion for me. It borders on embarrassing, because I'm forced to admit that I was unable to foresee some of the ramifications of encryption-related policies I've been promoting for many years.
I could make the excuse that I did not anticipate the onrush of largely hyperbolic paranoia that has been triggered post-Snowden, but I should have realized that some similar event would eventually be sufficient to trigger comparable issues.
My concern started taking shape about a month and a half ago when discussions were bouncing around the Net regarding Mozilla's supposed intention to (at some point in the future) refuse to allow Firefox browser connections to sites not running SSL/TLS, or at least restricting "features" available to them. Mozilla's actual stance on this is currently not entirely clear, but it appears that their longer-term plans at least are moving in this direction.
Two days ago when I posted When Google Thinks They're Your Mommy, regarding the Chrome browser's refusal to connect to a major corporate site that other browsers would connect to, I triggered a mass of users sending me similar stories about "encryption enforcement" complications -- and not just about Chrome.
There were people complaining about Chrome blocking sites, Firefox blocking sites, new versions of browsers preventing users from accessing network-connected home devices -- that were difficult or impossible to replace and only ran on local networks. On and on. I had let loose an unexpected avalanche, about an issue I incorrectly thought was only now beginning to gradually affect larger numbers of users.
And last night I had a nightmare.
I saw two parents desperately trying to access a misconfigured medical-related site for their sick child, being blocked by Chrome "for their own protection" -- and then trying to install another browser in panic after being informed by a Google help page that Chrome wouldn't help them, and that using another browser was their only alternative.
This is what comes of reading my own blog posts sometimes -- waking up in cold sweat from a very dark dream.
What's happened of course is that post-Snowden there's been a mantra that we're all being spied on all the time. Not only should all Internet connections be encrypted, the argument goes, but users should not be permitted -- no matter what the situation -- to access sites that are not encrypted to the standard that the crypto-gurus feel is adequate, even when the problem is triggered by temporary misconfiguration rather than purposeful configuration decisions at a site.
The argument is that man-in-the-middle attacks are so powerful and so pervasive (the former can certainly be true, the latter is definitely arguable) that even someone viewing kitten videos must use encryption -- if for no other reason than to protect them from some evil entity injecting an exploit into their weak connection.
Obviously if you're Google and continuously transferring gazookabytes of user data between datacenters, you're a big target and you want those circuits to be as rigorously encrypted as is practicable.
The reality though is that the overwhelmingly vast majority of user system exploits aren't based on subversion of the connections at all, but rather on endpoint attacks -- usually tied to phishing and other "social engineering" techniques when available multi-factor authentication systems have not been deployed.
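To make the multi-factor point concrete, here's a minimal sketch of the standard time-based one-time password (TOTP) scheme from RFC 6238 -- the algorithm behind most authenticator apps -- using only the Python standard library. This is an illustration of the published algorithm, not any particular vendor's implementation; the secret and parameters below are the RFC's own test values.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=8):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                         # 30-second time window
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 -> "94287082"
rfc_secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(rfc_secret, for_time=59))  # -> 94287082
```

Because the code changes every 30 seconds and is derived from a shared secret the attacker doesn't have, a phished password alone no longer suffices -- which is exactly why deployed multi-factor authentication blunts the endpoint attacks described above.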
Like I said, I've spent many years promoting the concept of universal, opportunistic Internet encryption.
But in some of the attitudes I see being expressed now about "forced" encryption regimes -- even browsers blocking out fully-informed users who would choose to forgo secure connections in critical situations -- there's a sense of what I might call "crypto-fascism" of a kind.
And that worries me. That's the stuff of nightmares.
It's one thing for a site to specifically and clearly indicate that it will only accept secure connections of a particular class and quality, proclaiming that it feels such restrictions are absolutely necessary in their context.
It's something else entirely though for a browser to unilaterally declare a site's security to be unacceptably weak -- whether by choice or, more often, by misconfiguration, both of which we can agree need to be fixed -- to the extent that the browser absolutely refuses to allow the user to connect, regardless of how crucial the situation and irrespective of the fully-informed, expressed will of the user to connect in any case.
Some encryption and Web standards experts might assert that this is simply a situation where a rather technically fascistic attitude is necessary to protect users overall, even if individual users in some circumstances might be horribly injured in the process. I've already had someone quote Spock to me on this one ("The needs of the many outweigh the needs of the few.")
Leaving aside for the moment that "the few" at Internet scale could easily still be many millions of warm bodies, I don't buy Spock's supposed logic in the context under discussion here.
Yes, we want to encourage encryption -- strong encryption -- on the Net whenever possible and practicable. Yes, we want to pressure sites to fix misconfigured servers and not purposely use weak crypto.
But NO, we must not permit technologists (including me) to deploy Web browsers -- together a primary means of accessing the Internet -- that on a "security policy" basis alone prevent users from accessing legal sites not specifically configured to always require strongly encrypted connections, when those users are informed of the risks and have specifically chosen to proceed.
Anything less is arrogantly treating all users like children incapable of taking the responsibility for their own decisions.
And that would be a terrible precedent indeed for the future of the Internet.
June 15, 2015
When Google Thinks They're Your Mommy
Major tech companies are in an interesting position these days. They provide and (one way or another) control most of our communications pipelines, and (quite reasonably) usually wish to encourage maximally effective security and privacy regimes.
Certainly Google falls into this category, with world-class privacy and security teams that have been my privilege to work with in the past.
But what happens when a firm decides that no matter what the user wants to do, the company will simply not permit it, because they feel so strongly that they know better than the user in a given circumstance?
That's what happened to me this morning, and it's a matter of growing concern.
I had an important transaction that I needed to conduct quickly on a major corporate website. I access this site several times a week, and I always use the excellent Google Chrome browser.
But this time, I couldn't log in. Google refused to permit me to log in to this third-party site, which I needed to access immediately.
What was going on?
Google was suddenly unhappy about the strength of the SSL/TLS connection being used by this site, and refused to permit access.
Presumably there's a configuration issue at that site that really should be fixed, but going down the rathole of trying to explain that to their customer support agents would likely be a twisted exercise that would take hours, with no guarantee that a change would be quickly forthcoming in any case.
Yep, that site needs repair, but I needed to access it irrespective of that.
Unlike most security certificate warnings from Chrome (and other web browsers) this one had no apparent means of user bypass.
This does not appear to be a bug in Chrome, because the associated "Learn more" page essentially said, "We won't help you. Go try another browser if you want. Good luck, guy!"
If there was any way to change the browser configuration or otherwise bypass this apparently absolute block, I couldn't quickly find it, and I know my way around Chrome pretty damned well.
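For the technically curious, the kind of policy Chrome was enforcing -- and the informed bypass it refused to offer -- can both be expressed directly with Python's standard ssl module. This is a sketch of the general mechanism, not Chrome's actual internals: a "strict" context analogous to a modern browser's defaults, and a deliberately relaxed one, which is roughly what switching to another browser accomplished for me.

```python
import ssl

# A "strict" client context, roughly analogous to a modern browser's policy:
# certificates are validated, hostnames are checked, old protocols refused.
strict = ssl.create_default_context()
strict.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 handshakes

# A deliberately relaxed context -- the programmatic equivalent of the
# fully-informed "let me through anyway" choice the browser wouldn't offer.
# (Only sensible when you understand and accept the interception risk.)
relaxed = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
relaxed.check_hostname = False                    # must be disabled first...
relaxed.verify_mode = ssl.CERT_NONE               # ...then skip cert validation

print(strict.verify_mode == ssl.CERT_REQUIRED)    # default context validates certs
```

Either context can then be handed to `wrap_socket()` or an HTTP library to attempt the connection. The point is that the platform itself cleanly supports both postures; it's the browser vendor's policy decision, not a technical limitation, that removes the second one from the user's reach.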
Because I keep multiple browsers on-hand and current, and have my login credentials always available (not tied to a single browser), I was able to move to another browser and complete the important transaction.
However, if this had happened on a smartphone with only one browser, or on a desktop system that only ran that one browser, I would have been up the creek without significant work to try to get another browser going -- assuming I was in a position to do so, and that other browsers didn't ultimately move toward exhibiting this same policy.
We can certainly agree that weak (or even entirely absent) SSL/TLS connections are to be avoided. In combination with an active "man-in-the-middle attack" or other spying, login or other important credentials and data could be vulnerable.
Of course the reality for most of us is that the risks to our important data (financial and otherwise) come from a wide variety of online and offline sources, with SSL/TLS connection compromise being pretty much down near the bottom of the probability list in most cases.
But for the sake of the argument, let's assume that a given connection is using weak or even completely broken crypto, and that there is an evil figure monitoring that particular connection at that particular time.
Even then, there will be situations where getting through to a particular site can be crucial -- more important even than compromised login credentials that can be later updated, more important even than compromised financial data.
Nowadays there are situations where immediate access to a site for information or transactions can be absolutely life critical, overriding individual security concerns.
And that is a decision for the individual user. It's not a decision for Google or any other firm to make for a user.
Google is not my Mommy.
By all means, sternly and clearly warn users of the risks involved in proceeding. Show photos of vampires about to strike, angry-looking kittens, and animations of Godzilla blocking my path.
Feel free to force the user to jump through multiple acknowledgment hoops (clear ones, not in fine print or otherwise hidden or obscure) before letting them complete the connection -- sternly emphasize how much you recommend against this course of action!
But in the final analysis, get the blazes out of the way and let a consenting adult make their own fully-informed choice about the sites they need to access, without Google (or other firms) treating them like a child or imbecile to be locked in their bedroom without supper.
Open means open, and it is not the appropriate role of Google or any other enterprise to impose its view of security to the extent of blocking a user from accessing a legal site when that user feels that they absolutely must do so.
If you've informed users of the risks, and they've acknowledged these and choose to proceed despite your sage advice, then that's their decision and responsibility, not yours.
And that's the truth.
June 12, 2015
Just Say "NON!" - France Demands Right of Global Google Censorship
I've been waiting for this, much the way one waits for a violent case of food poisoning.
France is now officially demanding that Google expand the hideous EU "Right To Be Forgotten" (RTBF) to Google.com worldwide, instead of just applying it to the appropriate localized (e.g. France) version of Google.
And here's my official response as a concerned individual:
To hell with this.
That's nowhere near as strong a comment as I'd really like to make, but this is a general readership blog and I choose to avoid the use of the really appropriate invectives here. But man, I could justifiably pile on enough epithets here to melt your screens before your eyes.
A key reason why I've been warning all along about the disastrous nature of RTBF is precisely this "camel's nose under the tent" situation. Giving in to localized censorship demands from the EU and/or member countries was bound to have this result.
What's worse, if France or other EU countries get away with this attempt to impose their own censorship standards onto the entire planet, we can be sure that government leaders around the world will quickly follow suit, demanding that Google globally remove search results that are politically "inconvenient" -- or religiously "blasphemous" -- or, well, you get the idea. It's a virtually bottomless cesspool of evil censorship opportunities.
It's bad enough when the ever more censorship and surveillance loving Western leaders have this kind of power. But how about Vladimir Putin, or China's rulers, or Iran's Supreme Leader as GLOBAL censors?
It wouldn't be long before it would seem that every search on any controversial topic might as well be replaced with a "404 Not found" page -- a rush to lowest common denominator mediocrity, purged of any and all information that government leaders, politicians, or bureaucrats would prefer people not be able to find and see.
I've written and said so much about RTBF for years that it feels like an endless case of "Groundhog Day" at this point -- e.g. early on in The "Right to Be Forgotten": A Threat We Dare Not Forget (2/2012), and most recently in a one hour live RTBF hangout video discussion (about a month ago).
And I'm certainly not alone in these concerns. Yet we continue to be sucked down this rathole, now with governments using overblown security concerns as an excuse to try to justify even broader search engine censorship across a vast range of topics.
So far, Google has resisted the concept of RTBF being applied globally. I not only applaud their stance on this, but I strongly urge them to stand utterly firm on this issue.
RTBF even in localized forms is bad, but if countries had the ability to impose their individual censorship regimes onto the entire globe's population, we'd be -- with absolutely no exaggeration -- talking about an existential threat not just to "free speech" but to fundamental communications and information rights as well.
This cannot be tolerated.
Non! Nein! Nahin! Nyet!
Just say NO!
June 08, 2015
Cops Still Don't Realize That YouTube Has Changed Their World Forever
Sometimes the world changes around us so rapidly that there's a sense of chaos, until understanding and what passes for equilibrium have been reestablished.
Such is the case with videos of law enforcement's interactions with the public, especially videos captured by members of the public and then subsequently made available publicly on YouTube and via other Internet platforms.
One thing is abundantly obvious -- many police departments and individual police officers still do not understand that the rules of the "game" under which they operate have been irrevocably altered.
Police videos generally fall into two broad categories -- videos shot by police themselves, and videos shot by members of the public.
The former include dashcams and increasingly body cameras. The latter is potentially the domain of pretty much everyone with a smartphone these days.
Generally speaking, police officers have been warming up to video that they themselves photograph -- especially when they can control when the cameras are running -- which opens up a rather significant can of worms indeed.
And while police departments would love to hold onto body camera footage (often taken inside people's homes at times of their greatest distress) for future investigative purposes, they must also face the complexities and expense of redacting videos for public record requests, to quite appropriately protect innocent parties from public abuse and exploitation.
Overall though, police are typically pretty eager to trot out the videos when they appear to support officers' accounts of events -- just today a video was released that appears to show a police shooting of a terrorism-related suspect was much more aligned with officer statements than with some completely contradictory "witness" statements being promulgated by cable news.
The situation tends to be very different with videos of police confrontations photographed by the public, often rapidly posted to YouTube.
We seem to be bombarded with an almost daily menu of "cops behaving badly" videos that cover the entire range from somewhat comical, through utterly bizarre, to downright horrifying.
To be sure, these videos are self-selected by their uploaders, and there isn't much attention paid to the majority of cops who are well behaved -- and so aren't the subject of many videos overall. Naturally enough, it's the bad eggs and nasty confrontations that are going to get the attention when it comes to videos.
But much of what we see in these negative cases is indeed utterly chilling. Cops shooting down unarmed persons, groups of officers beating already compliant suspects to a pulp -- all manner of situations that previously would have had no visual or audio record for review.
A new example currently receiving a lot of attention over this last weekend is of a Texas cop running around like a crazed dog, pointing his gun at half-naked teenagers from a pool party, and even tackling an obviously unarmed girl in a bikini to the ground. He appeared to be utterly out of control and his fellow officers seemed to just stand there watching in amusement. The racial aspects of the situation -- the cop was white, most of his targets black -- are difficult to ignore.
While apparently nobody was seriously hurt in this case, it's all the more upsetting for what could easily have happened if that gun had fired or that girl's neck had been broken.
None of these videos, whether photographed by police or the public, ever tell the whole story. Many police departments allow officers to control when their cameras are running. Videos photographed by the public typically don't show events leading up to a confrontation that attracted attention. Individual cameras only show one point of view. And so on. Such videos are clearly not definitive.
It's also clear that many cops still haven't gotten the message made explicit by numerous courts, that the public has the right to capture such videos. We're still seeing officers snatching phones and cameras from people's hands, sometimes smashing the devices, sometimes smashing the face of the photographer, arresting them, pepper spraying them and the like, even though no interference with the police action was occurring. (In an attempt to keep the public's cameras away, some police departments are trying to establish likely illegal "exclusionary zones" around their officers, to discourage public photography of those officers in action).
We're also seeing attempts to remove many of these videos from public view -- though tracing who was responsible for such actions can be very difficult. For example, I shared a copy of the "pool party cop" YouTube video link yesterday on Google+. By this morning the video was gone, marked as spam, most likely through a malicious false takedown submission.
Of course, this was a failed attempt at information control -- many other copies of that video are now widely available on YouTube and other venues.
Perhaps what might seem most odd of all is how we see cops behave so badly even when they're aware that they're being photographed.
This makes sense though when you consider that officers are used to having their accounts of events rarely questioned and almost always accepted.
It just hasn't fully penetrated yet that there are now recording eyes on the scene, and that an officer's own rendition of events is but one of the variables of the equation, and often by no means the most reliable of these.
Ubiquitous video cameras, smartphones, and YouTube have together fundamentally changed key aspects of law enforcement operations, and the ways in which both courts and the public will view them going forward.
There is no turning back. There is no possible return to the previous era of officer reports being accepted with nary a question or concern.
Police departments will ultimately learn to live with this.
And for the public at large, this is very good news indeed.
June 04, 2015
Governments of the World Agree: Encryption Must Die!
Finally! There's something that apparently virtually all governments around the world can actually agree upon. Unfortunately, it's on par conceptually with handing out hydrogen bombs as lottery prizes.
If the drumbeat isn't actually coordinated, it might as well be. Around the world, in testimony before national legislatures and in countless interviews with media, government officials and their surrogates are proclaiming the immediate need to "do something" about encryption that law enforcement and other government agencies can't read on demand.
Here in the U.S., it's a nearly constant harangue over on FOX News (nightmarishly, where most Americans apparently get their "news" these days). On CNN, it's almost as pervasive (though anti-crypto tirades on CNN must share space with primetime reruns of a globetrotting celebrity chef and crime "reality" shows).
It's much the same if you survey media around the world. The names and officials vary, but the message is the same -- it's not just terrorism that's the enemy, it's encryption itself.
That argument is a direct corollary to governments' decidedly mixed feelings about social media on the Internet. On one hand, they're ecstatic over the ability to monitor the public postings of criminal organizations like ISIL (or ISIS, or Islamic State, or Daesh -- just different labels for the same fanatical lunatics) that sprung forth from the disastrously misguided policies of Bush 1 and Bush 2 era right-wing neocons -- who not only set the stage for the resurrection of long-suppressed religious rivalries, but ultimately provided them with billions of dollars worth of U.S. weaponry as well. Great job there, guys.
Since it's also the typical role of governments to conflate and confuse issues whenever possible for political advantage, when we dig deeper into their views on social media and encryption we really go down the rabbit hole.
While governments love their theoretical ability to track pretty much every looney who posts publicly on Twitter or Facebook or Google+, governments simultaneously bemoan the fact that it's possible for uncontrolled communications -- especially international communications -- to take place at all in these contexts.
In particular, it's the ability of radical nutcases overseas to recruit ignorant (especially so-called "lone wolf") nutcases in other countries that is said to be of special concern, notably when these communications suddenly "go dark" off the public threads and into private, securely encrypted channels.
"Go dark" -- by the way -- is now the government code phrase for crypto they can't read on demand. Dark threads, dark sites, dark links. You get the idea.
One would be remiss not to admit that these radical recruiting efforts are of significant concern.
But where governments' analysis breaks down massively is with the direction of their proposed solutions, which aren't aimed at addressing the root causes of fanatical religious terrorism, but rather appear almost entirely based on preventing secure communications -- for anybody! -- in the first place.
Naturally they don't phrase this goal in quite those words. Rather, they continue to push (to blankly nodding politicians, journalists, and cable anchors) the tired and utterly discredited concept of "key escrow" cryptography, where governments would have "backdoor" keys to unlock encrypted communications, supposedly only when absolutely necessary and with due legal process.
Rewind 20 years or so and it's like "Groundhog Day" all over again, back in the early to mid-90s when NSA was pushing their "Clipper Chip" hardware concept for key-escrowed encryption, an idea that was mercilessly buried in relatively short order.
But like a vampire entombed without appropriate rituals, the old key escrow concepts have returned to the land of the living, all the uglier and more dangerous after their decades festering in the backrooms of governments.
The hardware Clipper concept dates to a time well before the founding of Twitter or Facebook, and a few years before Google's arrival. Apple existed back then, but centralized social media as we know it today wasn't yet even really a glimmer in anyone's eye.
While governments generally seem to realize that stopping all crypto that they can't access on demand is not practical, they also realize that the big social media platforms (of which I've named only a few) -- where most users do most of their social communicating -- are the obvious targets for legislative, political, and other pressures.
And this is why we see governments subtly (and often, not so subtly) demonizing these firms as being uncooperative or somehow uncaring about fighting evil, about fighting crime, about fighting terrorism. How dare they -- authorities repeat as a mantra -- implement encryption systems that governments cannot access at the click of a mouse, or sometimes access at all under any conditions.
Well, welcome to the 21st century, because the encryption genie isn't going back into its bottle, no matter how hard you push.
Strong crypto is critical to our communications, to our infrastructures, to our economies, and increasingly to many other aspects of our lives.
Strong crypto is simply not possible -- let's say that once more with feeling -- not possible, given key escrow or other government backdoors designed into these systems. There is no practical -- or even theoretically sound -- means for including such mechanisms without fatally weakening the entire associated encryption ecosystem, and opening it up to all manner of unauthorized access via hacking and various subversions of the key escrow process.
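To see why escrow is such a structural weakness, consider a deliberately simplified sketch (this is toy code, not real cryptography, and the key-wrapping scheme here is purely illustrative): when every session key is also wrapped under a single escrow master key, anyone who obtains that one key -- whether through legal process, an insider, or a breach -- can retroactively decrypt every conversation ever recorded.

```python
# Toy illustration (NOT real cryptography) of the key-escrow
# single point of failure. Stdlib only; the "cipher" is a
# SHA-256-based keystream used purely for demonstration.
import secrets
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a deterministic keystream from key+nonce (illustrative only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

ESCROW_MASTER_KEY = secrets.token_bytes(32)  # the government's escrow key

# Each conversation gets its own fresh session key...
transcripts = []
for msg in [b"alice->bob: meet at noon", b"carol->dave: wire the funds"]:
    session_key = secrets.token_bytes(32)
    nonce = secrets.token_bytes(16)
    ciphertext = xor(msg, keystream(session_key, nonce, len(msg)))
    # ...but escrow requires wrapping that session key under the master key.
    wrap_nonce = secrets.token_bytes(16)
    wrapped = xor(session_key, keystream(ESCROW_MASTER_KEY, wrap_nonce, 32))
    transcripts.append((nonce, ciphertext, wrap_nonce, wrapped))

# Anyone who steals ONLY the escrow master key recovers everything:
stolen = ESCROW_MASTER_KEY
recovered = []
for nonce, ciphertext, wrap_nonce, wrapped in transcripts:
    session_key = xor(wrapped, keystream(stolen, wrap_nonce, 32))
    recovered.append(xor(ciphertext, keystream(session_key, nonce, len(ciphertext))))
```

The point isn't the toy cipher; it's the architecture. Without the wrapped copies, compromising one session key exposes one session. With escrow, compromising one key exposes all of them, forever.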
But governments just don't seem willing to accept the science and reality of this, and keep pushing the key escrow meme. It's like the old joke about the would-be astronaut who wanted to travel to the sun, and when reminded that he'd burn up, replied that it wasn't a problem, because he'd go at night. Right.
Notably, just as we had governments who ignored realistic advice and unleashed the monsters of religious fanatical terrorism, we now have many of the same governments on the cusp of trying to hobble, undermine, and decimate the strong encryption systems that are so very vital.
There's every reason to believe that we'd experience a similarly disastrous outcome in the encryption context as well, especially if social media firms were required to deploy only weak crypto -- putting the vast populations of innocent users at risk -- while driving the bad guys even further underground and out of view.
If we don't vigorously fight back against government efforts to weaken encryption, we're all going to be badly burned.