February 27, 2015

Google's Gutsy Reversal: Explicit Content Blogger Ban Rescinded

Just a few days ago, in With Sudden Blogger Change, Google Drags Their Trust Problem Back into the Spotlight, I expressed strong concerns over Google's decision to both retroactively and proactively ban most "explicit content" from their Blogger platform, with only a month's warning and no real explanation offered at the time for such a dramatic policy change.

The next time someone tries to tell you that Google doesn't listen to user and other public concerns, you can prove that person wrong by pointing them at this story, because Google has now announced that they are completely rescinding that new policy.

It takes some serious fortitude to publicly admit when you've made a policy mistake. What's more, Google has taken the gutsy approach and has reversed the previous decision entirely. It would have been far easier -- given the real pressures that exist around explicit content -- to have left the new policy in place with an explanation and significantly extended deadlines.

But Google has instead chosen to reaffirm the freedom of expression foundation of Blogger that has helped make it so popular and useful for many years.

In so doing, they have made the correct decision for Google, for users, and for the principles of free speech and free expression that are currently under so much political and other duress.

Thanks Google.

--Lauren--

Posted by Lauren at 12:51 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 24, 2015

With Sudden Blogger Change, Google Drags Their Trust Problem Back into the Spotlight

Blog Update (February 27, 2015): Google's Gutsy Reversal: Explicit Content Blogger Ban Rescinded



I'm not a big fan of porn. I'd be lying if I claimed to never glance at it -- hell, I'm a human male, no excuses about that -- but explicit materials tend not to be anywhere near the top of my personal Web browsing catalog.

It's undeniable though that due to its highly controversial and widely variable definition, restrictions on "explicit" imagery in particular have long been at the forefront of freedom of speech issues and concerns, even among individual free speech advocates who may personally detest such content.

The reason is pretty obvious -- how governments and corporations handle these "edge" materials (that may often be viewed as "low hanging fruit") can be harbingers of how they will deal with other sensitive and controversial matters that fall into free speech realms, including access to historical information already published (the target of the EU's nightmarish "Right To Be Forgotten"), political information and criticisms, and ... well, it's a long list.

Abrupt changes in such policies -- particularly when announced without explanations -- tend to be particularly eyebrow-raising and of special concern.

So it is with considerable puzzlement and consternation that I saw yesterday Google's quite surprising announcement that they were banning most explicit imagery from their very popular and long-standing Blogger platform -- with only 30 days' notice, and without any explanation whatsoever for this dramatic reversal in policy.

There are some limited and rather nebulous exceptions ("educational value" and the like -- sure to be the subject of heated disagreement), and users can download their existing sites to try to move elsewhere, but the overall sense of the change is clear enough. Google is trying to kick such sites -- many of them essentially personal, alternative lifestyle, non-commercial public "diaries" of long standing, with vast numbers of incoming links built up over many years -- out the Google door as rapidly as possible.

And let there be no mistake about it -- this is a sudden, dramatic, and virtually 180 degree change. Blogger has long explicitly celebrated freedom of expression, with "adult content" sites including an access warning splash page so nobody would be exposed to such materials accidentally.

That Google is within its rights to change this policy in the manner they have announced is totally true and utterly unassailable.

But the manner of their doing this drags back into focus longstanding concerns about how Google treats its users in certain contexts, particularly those users who might be considered to fall outside of "mainstream" society in any number of ways.

Google has indeed made some very significant positive strides in this area. Account recovery systems have been improved so that innocent (but sometimes forgetful) users are less likely to be locked out of their accounts and associated Google services. Google Takeout permits users to download their data from a wide variety of Google services to save locally or store elsewhere -- if they do this before the associated Google account is closed. (However, the "whose data is this anyway?" question still looms large in cases of forcible account closures due to various kinds of Terms of Service violations, when users may not be able to further access their data, even to download it -- this is a very complex topic.)

Though this seems not to be widely realized, Google+ no longer enforces "real name" requirements on users (only some completely rational Terms of Service restrictions to avoid serious abuses), and is now profile-friendly to users' own sexual orientations in a manner that really should be emulated by firms across the Web.

But the old trust fears -- some of them trumped-up propaganda from Google adversaries, others having at least some basis in fact -- about Google making sudden, seemingly inexplicable changes in terms and policies, or altering or even rapidly deprecating services on which significant non-majority user communities depend, are being reenergized by what appears to be an unforced error on Google's part.

And such errors can do real damage, both to users and Google. For most of the public does not view Google as a set of disparate and compartmentalized services, but rather as more of a unified whole, and perceived negative experiences with one aspect of the firm can easily drag down views of the firm overall, much to the delight of hardcore Google haters, by the way. This is why even if you don't care one iota about porn or other materials considered to be explicit, you should still be concerned about this Google policy change.

I care about Google's users and Google itself -- a firm that has accomplished amazing feats toward the betterment of the Internet and larger world over the course of a handful of years. I don't want to see those Google haters handed a gift package that can't help but assist their cause and attacks.

We could get into a lengthy discussion comparing the Blogger policies of long standing with those of YouTube, Google Ads, and the like, but while interesting, such analysis here and now would not be particularly relevant to the immediate situation at hand.

The bottom line is that a dramatic change of policy that negatively affects users who have been following the rules to date deserves significant advance notice (not merely a month -- many of these sites have been operating for many years, some perhaps even since before Google's acquisition of Blogger in 2003). I would have recommended (absent some difficult to postulate legal urgency forcing a faster timeline) at least 90 days as an absolute minimum, and ideally far longer.

That would be putting your users first, especially when deploying a policy change that will disrupt them greatly. And please, no excuses that "only a small percentage" of users would be affected. At Google scale even tiny percentages can represent a whole bunch of live human beings, and how you treat users who are easily marginalized can be representative of broader attitudes in very significant ways.

And notably, I would have offered a simultaneous clear and honest public explanation of why this total about-face on such a matter of direct free expression concerns had been deemed necessary or otherwise desirable. That's just common courtesy.

The world won't come to an end with this Blogger policy change by Google. There will still be virtually limitless sources for porn and other explicit imagery elsewhere, and most affected personal bloggers will find other platforms and over time perhaps rebuild their communities.

But the real story here isn't about sex or images or blogging at all. It's about how to treat people with respect, even when a particular group represents a small minority of total users, and even when they express controversial views via explicit materials. It could be argued that it's in these more contentious areas that treating users right is especially important.

Given the information I have at hand right now regarding this abrupt Blogger policy change and the circumstances surrounding it, I am very disappointed in the way Google has handled the overall situation.

I say this because I feel that Google is a great company -- and I not only believe that Google can do better with such matters -- I know that they can.

--Lauren--

Blog Update (February 27, 2015): Google's Gutsy Reversal: Explicit Content Blogger Ban Rescinded

Posted by Lauren at 09:44 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 22, 2015

Blaming the Internet for Terrorism: So Wrong and So Dangerous

You can almost physically hear the drumbeat getting louder. It's nearly impossible to read a news site or watch cable news without seeing some political, religious, or "whomever we could get on the air just now" spokesperson bemoaning and/or expressing anger about free speech on the Internet.

Their claims are quite explicit. "Almost a hundred thousand social media messages sent by ISIL a day!" "Internet is the most powerful tool of extremists." On and on.

Now, most of these proponents of "controlling" free speech aren't dummies. They don't usually come right out and say they want censorship. In fact, they frequently claim to be big supporters of free speech on the Net -- they only want to shut down "extremist" speech, you see. And don't worry, they all seem to claim they're up to the task of defining which speech would be so classified as verboten. "Trust us," they plead with big puppy dog eyes.

But blaming the Net for terrorism -- which is the underlying story behind their arguments -- actually has all the logical and scientific rigor of blaming elemental uranium for atomic bombs.

Speaking of which, I'd personally be much more concerned about terrorist groups getting hold of loose fissile material than Facebook accounts. And I'm pretty curious about how that 100K a day social media messages stat is derived. Hell, if you multiply the number of social media messages I typically send per day times the number of ostensible followers I have, it would total in the millions -- every day. And you know what? That plus one dollar will buy you a cup of crummy coffee.

Proponents of controls on Internet speech are often pretty expert at conflating and confusing different aspects of speech, with a definite emphasis on expanding the already controversial meanings of "hate speech" and similar terms.

They also note -- accurately in this respect -- that social media firms aren't required to make publicly available all materials that are submitted to them. Yep, this is certainly true, and an important consideration. But what speech control advocates seem to conveniently downplay is that the major social media firms already have significant staffs devoted to removing materials that violate their Terms of Service provisions related to hate speech and other content -- and what's more, this is an incredibly difficult and emotionally challenging task, calling on the Wisdom of Solomon as but one prerequisite.

The complexities in this area are many. The technology of the Net makes true elimination of any given material essentially impossible. Attempts to remove "terrorist-related" items from public view often draw more attention to them via the notorious "Streisand Effect" -- and/or push them into underground, so-called "darknets" where they are still available but far harder to monitor for public safety purposes.

"Out of sight, out of mind" might work for a cartoon ostrich with its head stuck into the ground, but it's a recipe for disaster in the real world of the Internet.

There are of course differences between "public" and "publicized." Sometimes it seems like cable news has become the paid publicity partner of ISIL and other terrorist groups, merrily spending hours promoting the latest videotaped missive from every wannabe terrorist criminal wearing a hood and standing in front of an ISIL flag fresh from their $50 inkjet printer.

But that sort of publicity in the name of ratings is very far indeed from attempting to control the dissemination of information on the Net, where information once disseminated can receive almost limitless signal boosts from every attempt made to remove it.

This is not to say that social media firms shouldn't enforce their own standards. But the subtext of information control proponents -- and their attempts to blame the Internet for terrorism -- is the implication, implicit or explicit, that ultimately governments will need to step in and enforce their own censorship regimes.

We're well down that path already in some ways, of course. Government-mandated ISP block lists are already replete with errors that block innocent sites, yet they continue to expand rapidly beyond their sometimes relatively narrow original mandates.

And whether we're talking about massive, pervasive censorship systems like those in China or Iran, the immense censorship pressures applied in countries like Russia, or even the theoretically optional systems like those in the U.K., the underlying mindsets are very much the same, and very much to the liking of political leaders who would censor the Internet not just on the basis of "stopping terrorism," but for their own political, financial, religious, or other essentially power-hungry reasons as well.

In this respect, it's almost as if terrorists were partnering with these political leaders, so convenient are the excuses for trying to crush free speech, to control that "damned Internet" -- provided to the latter by the former.

Which brings us to perhaps the ultimate irony in this spectacle, the sad truth that by trying to restrict information on the Internet in the name of limiting the dissemination of "terrorist" materials on the Net, even the honest advocates of this stance -- those devoid of ulterior motives for broader information control -- are actually advancing the cause of terrorism by drawing more attention to those very items they'd declare "forbidden," even while it will be technologically impossible to actually remove those materials from public view.

It's very much a lose-lose situation of the highest order, with potentially devastating consequences far beyond the realm of battling terrorists.

For if these proponents of Internet information control -- ultimately of Internet censorship -- are successful in their quest, they will have handed terrorists, totalitarian governments, and other evil forces a propaganda and operational prize more valuable to the cause of repression than all the ISIL social media postings and videos made to date or yet to be posted.

And then, dear friends, as the saying goes, the terrorists really would have won, after all.

Be seeing you.

--Lauren--

Posted by Lauren at 05:17 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 19, 2015

Google Glass vs. The USA's R&D Toilet

If you're a regular consumer of the computer industry trade press -- a strong stomach is recommended -- you've probably seen a bit of gloating lately about Google pulling their Google Glass device from most consumer marketing.

Mainstream media has picked up the drumbeat too, with even major publications like The New York Times very recently running stories purporting to explain why Google Glass has "failed" or how this is emblematic of Google's supposedly imminent fall.

Those stories sound pretty scary. They're also utterly wrong. And they're wrong in a way that exemplifies why so much of U.S. industry is in a terrible research and development (R&D) slump, and why Google should be congratulated for their "moonshots" -- not ridiculed.

Once upon a time -- not so long ago, relatively speaking -- there was a reasonable understanding in this country that long-term R&D was crucial to our technological, financial, and personal futures. That's long-term as in spending money on projects that might take a long time to pay off -- or might never pay off for the firms making the investments -- but that still might play crucial roles in our collective future.

When we think about the foundation of modern R&D, it's typical for AT&T's Bell Telephone Laboratories (Bell Labs) to spring immediately to mind. Not the Bell Labs of today -- an emaciated skeleton of its former greatness -- but the Labs of the years before AT&T's 1984 Bell System break-up divestiture and shortly thereafter.

The list of developments that sprang forth from the Labs is mind-boggling. If Lucent Technologies did nothing else when they took over Bell Labs and hastened its decline, at least they produced in 2000 this great music video celebrating the Labs' innovations over the many decades. Mentally start subtracting out items from the list shown in that video and watch how our entire modern world would crumble away around us.

Yet -- and this is crucial -- most of those Bell Labs technologies that are so much a part of our lives today were anything but sure bets at the time they were being developed. Hell, who needs something better than trusty old vacuum tubes? What possible use is superconductivity? Why would anyone need flexible, easy to use computer operating systems?

It's only with the benefit of 20/20 hindsight that we can really appreciate the genius -- and critically, the willingness to put sufficient R&D dollars behind such genius -- that allowed these technologies to flourish in the face of contemporaneous skepticism.

Much of that kind of skepticism is driven by the twin prongs of people who basically don't understand technology deeply, and/or by investors who see any effort to be a waste if it isn't virtually guaranteed to bring in significant short-term profits.

But we see again and again what happens when technology companies fall prey to such short-term thinking. Magnificent firms like Digital Equipment Corporation (DEC) vanish with relative rapidity into the sunset to be largely forgotten. Household names like Kodak flicker and fade away into shadows. And as noted, even the great Bell Labs has become the "reality show" version of its former self.

Nor is it encouraging when we see other firms who have had robust R&D efforts now culling them in various ways, such as Microsoft's very recent closing of their Silicon Valley research arm.

It probably shouldn't be surprising that various researchers from Microsoft, Bell Labs, and DEC have ended up at ... you guessed it ... Google.

So it also shouldn't be surprising that it's difficult not to look askance at claims that Google is on the wrong path investing in autonomous cars, or artificial intelligence, or balloon-based Internet access -- or Google Glass.

Because even if one chooses -- inappropriately and inaccurately, but for the sake of argument -- to assume pessimistic consumer futures for those technologies as currently defined, they will still change the world in amazingly positive ways.

Internet access in the future inevitably will include high altitude distribution systems. AI will be solving problems the nature of which we can't even imagine today. Many thousands of lives will be saved by improved driver assist systems even if you sullenly choose to assume that autonomous cars don't become a mass consumer item in the near future. And medical, safety, and a range of industrial applications for Google Glass and similar devices are already rapidly deploying.

This is what serious R&D is really all about. Our collective and personal futures depend upon the willingness of firms to take these risks toward building tomorrow.

We need far more firms willing to follow Google's R&D model in these regards, rather than being utterly focused on projects that might suck some coins quickly into the hopper, but do little or nothing to help their countries, their peoples, and the world in the long run.

Here in the U.S. we've willingly and self-destructively permitted short-term Wall Street thinking to flush much of our best R&D talent down the proverbial toilet.

And unless we get our heads on straight about this immediately, we'll be sending our futures and our children's futures down the same dark sewer.

We are far better than that.

Take care, all.

--Lauren--

Posted by Lauren at 03:12 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 07, 2015

Stop the Mass Hacks Attacks: Use Strong 2-Factor Authentication or Go to Jail!

I'm opposed to capital punishment for a whole slew of reasons, but every time I hear about a hack attack exposing masses of innocent persons' information, I find myself reconsidering that penalty -- not for the hackers, but for the irresponsible system administrators and their bosses who leave their operations so incredibly exposed when effective solutions are available -- and have been for quite some time.

OK, perhaps capital punishment for them would be going a bit too far, but I'll bet that spending a couple of years shackled in a cell with their new best friend "Bubba" would impress upon them the seriousness of the situation.

If we look at what is publicly known about the recent Sony hack and the just-announced, potentially much more devastating Anthem attack -- plus a whole list of other similar mass data thefts -- a number of common threads quickly emerge.

First, these typically have nothing to do with failures of communications link security. They weren't attacks on SSL/TLS, they didn't involve thousands of supercomputer instances chomping on data for months to enable the exploits. Nor were they in any way the fault of weak customer passwords -- which are bad news for those customers of course, but shouldn't enable mass exploits.

By and large, what you keep hearing about these cases is that they were based on the compromise of administrative credentials.

What this means in plain English is that an attacker managed to get hold of some inside administrator's login username and password, typically via email phishing or some other "social engineering" technique.

When these successful attacks are belatedly reported to the affected customers and the public, they're almost always framed as "incredibly sophisticated" in nature.

That's usually bull -- a way to try to convince people that "Golly, those hackers were just so incredibly smart that even our crack IT team didn't have a chance against them!"

Usually though, the attacks are incredibly unsophisticated -- they're simply relentless and keep pounding away until somebody with high level administrative access falls for them. Then, boom!

It's often argued that important financial and similar data should be kept encrypted -- and this is certainly true. But so long as system administrators have the need and ability to access data in the clear, encryption alone doesn't address these problems. Rigorous control and auditing systems to prevent unnecessary access to data en masse can also help ("Does Joe really need to copy 80 million customer records to a Dropbox account?") -- but this won't by itself solve the problem either.
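
To make the "80 million records to Dropbox" point a bit more concrete, here is a minimal sketch (in Python, with invented names and thresholds purely for illustration -- not any particular firm's actual system) of the kind of bulk-access guard that can flag or block an administrator session attempting to export far more records than usual:

    import logging
    from collections import defaultdict

    logger = logging.getLogger("bulk_access_audit")

    class BulkExportGuard:
        """Track per-administrator record exports and refuse anomalous volumes."""

        def __init__(self, daily_limit=10000):
            # Hypothetical threshold; a real system would derive baselines
            # per role and per data set, and alert a security team as well.
            self.daily_limit = daily_limit
            self.exported_today = defaultdict(int)

        def authorize(self, admin_id, record_count):
            """Return True if this export stays within the admin's daily budget."""
            projected = self.exported_today[admin_id] + record_count
            if projected > self.daily_limit:
                logger.warning(
                    "Blocked export by %s: %d records requested (daily total would be %d)",
                    admin_id, record_count, projected)
                return False
            self.exported_today[admin_id] = projected
            return True

Simple checks of this sort won't stop a determined insider or an attacker with stolen credentials all by themselves, but they do create the audit trail and the speed bumps that make quiet, wholesale exfiltration much harder.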

The foundational enabler of so many successful mass attacks is the failure of authentication protocols and processes in the broadest sense, and ironically, getting a handle on authentication is at least relatively straightforward.

Many firms aren't terribly interested in implementing even middling quality authentication, because they have faith in their firewalls to keep external attacks at bay.

This is an incredibly risky attitude. Over-reliance on firewalls -- that is, perimeter computer security -- is sucker bait, because once an intruder obtains high level administrative credentials, they can often plant software inside the firewall, and send data out in various ways with relative impunity. After all, most corporate firewalls are designed to keep outsiders out, not to wall insiders off from the public Internet.

To put this another way, a properly designed security system should in most instances be location agnostic -- employees should be able to work from home with the same (hopefully high) level of security they would have at the office. This isn't to say that secure deployment and administration of VPNs and associated systems are trivial, but they aren't rocket science, either.

Yet the real elephant in the room is at the basic authentication level, the usernames and passwords that most firms still rely upon as their only means of administrator authentication on their internal systems. And so long as this is the case, we're going to keep hearing about these mass attacks.

Yes, you can try to force employees to choose better passwords. But passwords that are hard to remember get written down, and forcing them to be changed too often can make matters worse rather than better. The problem cannot be solved with passwords alone.

And -- "surprise, surprise, surprise" (as Gomer Pyle used to say -- go ahead, Google him) -- the technology to drastically improve the authentication environment not only exists, but is already in use in many applications that arguably are of a less critical nature in most cases than financial and insurance data.

I'm speaking of 2-factor or "multiple factor" authentication/verification systems, the requirement that system access is based on "something you know" and "something you have" -- not on just one or the other.

One of the best implementations of 2-factor is that deployed by Google, which offers a variety of means for fulfilling the "what you have" requirement -- text messages, phone calls, phone apps, and cryptographic security keys.

Different forms of multiple factor have varying relative levels of protection. For example, the use of "one time passwords" generated by apps or hardware tokens is not absolutely phishing-proof, but is a damned sight better than a conventional username and password pair alone. Security keys, which can interface with user systems via USB or in some cases NFC (Near Field Communications) technology, are the most secure method to date, and a single key can protect a whole variety of accounts -- even at different firms -- while still keeping the associated credentials isolated from one another.
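
For readers curious what the "something you have" factor actually looks like under the hood, here is a minimal sketch of time-based one-time password (TOTP, RFC 6238) generation and verification in Python -- the same general scheme used by common authenticator apps. This is illustrative only (the function names are mine, and any production deployment should use a vetted library plus rate limiting), but it shows how little magic is involved:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, timestep=30, digits=6, skew=0):
        """Compute a TOTP code from a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // timestep + skew
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify(secret_b32, submitted):
        """Accept the current code or an adjacent time step to tolerate clock drift."""
        return any(hmac.compare_digest(totp(secret_b32, skew=s), submitted)
                   for s in (-1, 0, 1))

The shared secret never travels with the login attempt, and a captured code expires within seconds -- which raises the bar considerably over a static password, even though (as noted above) real-time phishing of one-time codes remains possible. Hardware security keys go further still by cryptographically binding the login to the legitimate site, which is precisely what makes them so resistant to phishing.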

And this brings us back to Bubba. While one never wants unnecessary mandates and legislation, sometimes you can't depend on industry to always "do the right thing" when it comes to security, when the intrinsic costs for the sloppy status quo are relatively low.

So while some countries and U.S. states do have laws about encryption of customer data, or notification of customers when breaches occur, there is little emphasis on closing the barn door before -- not after -- the cows have escaped.

After all, these careless firms usually have pretty easy outs when big breaches occur. They offer you free "credit monitoring" after the fact. Gee, thanks guys. They usually manage to pass along associated costs and fines to their customers. Another big thank you punch to the gut.

How to really get their attention?

Maybe they'd notice potential prison time for top executives of firms that deal primarily with sensitive consumer personal information (like banks, insurance companies, and so on) who voluntarily refuse to implement appropriate, modern internal security controls -- such as strong multiple factor logins -- and then suffer mass consumer data hacks as a result.

I'm not even arguing here and now that they must provide such systems to their individual customers -- though they really, seriously should. Nor am I suggesting such sanctions for failure of security systems that were deployed and operating competently and in good faith. After all, no security tech is perfect.

But I am putting forth the "modest proposal" that these types of firms be given some reasonable period of time to implement internal security systems including strong multiple factor verification, and if they refuse to do so and then suffer a mass data breach, the associated executives should be spending some time in the orange or striped jumpsuits.

Perhaps that prospect will light a fire under their you-know-whats.

Now, do I really believe it's likely that anything of this sort will actually come to pass? Hell no -- after all, these are the kinds of firms that basically own our politicians.

But then again, if enough of these mass data thefts keep occurring, and enough people get seriously upset, the dynamic might change in ways that would have seemed fanciful only a few years earlier.

So despite the odds, my free advice to those execs would be to get moving on those internal multiple factor authentication systems now, even in the absence of legislative mandates requiring their use.

Because, ya' know, Bubba will be patiently waiting for you.

--Lauren--

Posted by Lauren at 09:40 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein