March 28, 2016

Google Questions & Unofficial Answers: "Is it true that France and other countries are now demanding the right to censor Google Search Results for everyone, everywhere on Earth? Isn't this 'Right To Be Forgotten' stuff getting out of hand?"

(This is a new entry from my recently formed Google+ Community "Google Questions & Unofficial Answers" -- located at: )

Man, oh man. To use the vernacular, "You ain't just whistling Dixie!" Yes and yes, the "Right To Be Forgotten" (RTBF) is going completely off the rails, and the French government is indeed now claiming that it has the power to control what Google users see around the entire globe. But it's actually even worse than that. Much worse. Because other countries have already followed or will soon follow France's lead, triggering the most expansive and potentially damaging quagmire of censorship in the history of civilization -- a "race to the bottom" aimed at destroying free speech planetwide.

Of course, no country openly frames this as an attack on freedom of speech per se. Rather, they employ arguments such as claiming that they're trying to protect their citizens from "unfair" or "unflattering" third-party materials on the Internet (and by the way, irrespective of whether those materials are actually accurate).

Note that in the RTBF context these governments are not usually trying to block access to those actual Web pages containing the "undesirable" content. Rather, they've been aiming at search engines -- mainly Google to date -- in a disingenuous attempt to "hide" such pages by making them difficult to locate, even though they still exist and could be accessed if you knew the appropriate URLs.

Much like a prudish librarian who couldn't find a way to actually pull from shelves those books of which she didn't approve, and so settled instead on destroying the library catalog index cards that located those books, European Union (EU) countries and now a growing cavalcade of other nations are trying to "disappear" third-party search results from Google.

One of the basic tenets of government censorship -- likely reaching back to the caveman smearing out cave wall drawings that he found objectionable -- is that censorship virtually always expands to suck ever more oxygen out of free speech over time.

We can clearly see this phenomenon with RTBF, in ways that many observers (including myself) have long predicted would come to pass.

Without getting into all the legal definitions and details here, Google has to obey the laws in countries where it operates. But Google also has an interest -- both to protect itself and its users around the world -- in pushing back when any given country tries imposing conditions that actually exceed the statutory rights of that country, especially in the kinds of international contexts in which large Internet firms operate today. After all, why should the government of Country A have the right to dictate censorship over the populations of countries B through Z?

So when the EU began formulating policies for submission, processing, and determinations regarding RTBF requests, Google initially implemented a procedure for removing the results for approved requests from the individual location-specific versions of Google Search that users in those countries access by default (e.g., for France, that's google.fr).

Almost immediately, EU censorship czars started expanding their censorship demands. They began complaining that their citizens still had the ability to access other countries' localized versions of Google Search -- and to access the primary international google.com site itself -- as workarounds to find search results that their own governments deemed inappropriate for their delicate sensibilities.

Recently, Google agreed to go a step further to satisfy the continuing Orwellian demands of these governments, by actually blocking users in those countries from directly accessing other localized versions or the non-localized version of Google, based on those users' origin IP addresses.

But (you guessed it!) France instantly started complaining yet again. They declared that Google's new restrictions weren't good enough, because French citizens could still use tools such as proxies and virtual private networks (VPNs) -- the same kinds of tools that oppressed people around the world use to bypass an array of tyrannical government Internet restrictions -- to see those verboten Google search results on the "forbidden" versions of Google Search.
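For the technically curious, the blocking mechanism -- and why a VPN trivially defeats it -- can be sketched in a few lines. This is a purely illustrative toy, not Google's actual implementation; the IP range used is a standard documentation-only test network, and all names are made up:

```python
import ipaddress

# Toy sketch of IP-origin-based result withholding. A VPN defeats it
# because the server only ever sees the VPN exit node's IP address,
# not the user's true location.
BLOCKED_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]  # stand-in "restricted country" range

def results_for(client_ip, results, delisted):
    """Return search results, withholding delisted URLs for IPs in blocked ranges."""
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in BLOCKED_PREFIXES):
        return [r for r in results if r not in delisted]
    return results

results = ["example.com/a", "example.com/b"]
print(results_for("192.0.2.7", results, {"example.com/b"}))    # censored view
print(results_for("203.0.113.9", results, {"example.com/b"}))  # full view (e.g., via a VPN exit IP)
```

The obvious weakness: the filter keys entirely on the apparent source address, which the user -- not the censor -- ultimately controls.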

This brings us to today, with France going even further. They're now asserting the right to force Google to remove search results essentially on demand for all versions of Google Search, around the entire planet. In other words, France claims the right to be the Internet censor not only for France, but for every other country as well, and so for the entire Earth's population.

In fact, it's not just France pushing this nightmarish vision. Early in the discussions of RTBF, the EU was already suggesting that it would eventually want to move in this same direction. And countries like Russia and China -- both of which already operate extensive domestic censorship regimes -- are following suit, practically drooling at the idea of globally removing Google Search results that are embarrassing or otherwise potentially disruptive to their dictatorial governments.

An Internet where each individual domestic government asserts the right to impose its own ideas of censorship on a global basis is intolerable, all the more so since we now depend on the Internet not only for so much of our communications in general, but for freedom of speech in particular -- with search engines like Google's being absolutely crucial to this entire Internet ecosystem. The result would inevitably be a "lowest common denominator" disaster for speech freedoms everywhere.

Again, all of this was predictable and predicted. The slippery slope was obvious from the very start. When Wile E. Coyote races off the end of a cliff while chasing the Road Runner, the fact that he remains suspended in midair for a few seconds doesn't mean that he won't shortly be taking the long fall down to the canyon floor.

Unless we put a stop to the expanding, increasingly abusive government censorship represented by the "Right To Be Forgotten" and other similar efforts rising around the world, free speech -- and all of us -- are in for a very painful and damaging fall indeed.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 08:50 AM | Permalink

March 25, 2016

Google Questions & Unofficial Answers: "I've been locked out of my Google account! What can I do? How can I prevent this in the future? HELP!"

(This is a new entry from my recently formed Google+ Community "Google Questions & Unofficial Answers" -- located at: )

Locked out of your Google account, eh? Yeah, that can definitely ruin your day, especially when you depend on Gmail and other Google services. The best way to avoid this problem is to take proactive steps ahead of time, which I'll get to in a moment. But for now, you just want back in, right?

Basically, whether your lockout is for entirely innocent reasons -- for example, forgetting your password -- or the account was suspended due to Google suspecting some form of illegal activity or terms of service violation, the entry point to the account recovery system is the set of forms that starts here:

These will guide you through a series of questions that in most cases can get you up and running again in short order.

In practice, how painless this process will be depends on a number of factors.

For example, if your account has been suspended/closed by Google for what they suspect are illicit or otherwise improper activities, you will likely face an appeals process (even if you're actually innocent) that -- frankly -- is not ideal. This is an area where Google continues to improve but more work still needs to be done.

In simpler cases -- like the forgotten password -- the situation should be much smoother, especially if you've proactively (that is, previously) set up the account recovery options that Google offers here:

This allows you to specify a non-Google email address and/or mobile number to receive recovery codes that you can use to regain access to your Google account. Don't be paranoid about giving Google this information. They're not going to use it for anything other than helping you get back into your account or dealing with other account-related issues, and they won't give this info to anyone else.
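For the curious, the general shape of such a recovery-code flow can be sketched as follows. This is a generic illustration of the concept only, not Google's actual mechanism -- the account names, six-digit format, and ten-minute expiry are all assumptions for the sake of the example:

```python
import secrets
import hmac
import time

ISSUED = {}  # account -> (code, expiry timestamp)

def issue_and_record(account, ttl=600):
    """Generate a short-lived one-time code for an account (10 min default)."""
    code = f"{secrets.randbelow(10**6):06d}"   # random 6-digit code
    ISSUED[account] = (code, time.time() + ttl)
    return code  # in a real system, sent to the recovery email/phone, never shown here

def verify(account, attempt):
    """Accept the code only if it matches and hasn't expired."""
    code, expiry = ISSUED.get(account, ("", 0))
    # Constant-time comparison avoids leaking information via timing.
    return time.time() < expiry and hmac.compare_digest(code, attempt)
```

The key property is that possession of the pre-registered address or phone -- not memory of a password -- is what proves you're the account owner.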

Of course, you need to have set these up in advance of having access problems for these techniques to be useful -- that's why Google prompts you to do so at intervals. So set them up! If you don't have an alternate email or mobile phone, and can't get either, then at the very least you should ask a *highly trusted* friend or relative to allow you to use theirs for this purpose. This definitely isn't ideal and should be avoided if possible for obvious reasons, but can be a practical alternative that is better than not having any recovery data specified at all.

If you're locked out and haven't set up a recovery email or phone number, things get tougher. Google has been wisely ending support for security questions, because -- well -- they suck for all kinds of reasons. I've long marveled at how so many people seem to feel that they must answer security questions honestly (who really needs to know the name of your first grade teacher?), but really, security questions have become more of a problem than a solution. More info on this at:

In some cases, if you don't have recovery phone numbers or email addresses on file when you lose account access, Google's recovery forms will alternatively ask you a series of questions about your account that typically only you would know the answers to. The problem is that you may not remember the correct answers to those questions either -- many people don't -- so you really do want to set up that recovery data ahead of time and avoid reaching this point, or other even more complicated stages of the recovery flow. So again, please do take the time to specify a recovery phone number and/or email address before you need them!

A couple of related points:

Did you know that you can download virtually all of the data you have stored with Google? Yep, email, files, all kinds of goodies. You can take them to another service or just store them on flash drives under your bed if you're into that kind of thing.

Take a look at the very cool Google Takeout system at:

You can also use this page for the same purpose:

I wish all firms provided a feature like this. Note however that this is not a way to download your data after you've been locked out of your account -- you need regular account access to use it. I have long felt that there should be some means for downloading your own data from Google in various situations even if you have lost access to the account for some reason not involving violations of law -- after all, it's still your data -- but this is admittedly a quite complicated issue that goes beyond the scope of this posting.

Finally, there's the unpleasant but unavoidable question of what happens to your data if you're incapacitated or even die. Google has two primary mechanisms for dealing with this. In the latter case, the form at:

is available to help deal with related issues after the fact.

But a much better alternative is to designate who should have access to your account in the future, if for some reason you are unable to access it for some period of time that you specify. This is handled through Google's Inactive Account Manager, which is here:

Again, this is a feature that you must proactively configure in advance for it to be useful.

When it comes to avoiding problems with access to your Google account, planning ahead and taking advantage of the many features and tools that Google makes available for this purpose makes a great deal of sense.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 12:40 PM | Permalink

March 21, 2016

Google Questions & Unofficial Answers: "Why does Google's YouTube seem so biased against ordinary users who upload videos?"

(This is a new entry from my recently formed Google+ Community "Google Questions & Unofficial Answers" -- located at: )

Why does Google's YouTube seem so biased against ordinary users who upload videos? I've unfairly had my videos blocked, received copyright strikes for my own materials, and even had my account suspended -- and it's impossible to reach anyone at YouTube to complain!

No, YouTube isn't biased against you -- not voluntarily, anyway. But it could definitely be argued that the copyright legal landscape -- particularly in the mainstream entertainment industry -- is indeed biased against the "little guys," and Google's YouTube must obey the laws as written. What's more, YouTube exists at the "bleeding edge" of the intersection of technology and law, where there's oh so much that goes bump in the night.

Let's begin with a fact. The amount of video being uploaded by users around the globe into YouTube at any given moment is staggering. As of July 2015, something like 400 hours of video were being uploaded every minute (!), up from 300 hours per minute the previous November. You can only imagine how much is pouring in today. That's one hell of a lot of video.
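To put that figure in perspective, a bit of back-of-the-envelope arithmetic based only on the 400 hours per minute number above:

```python
# Rough scale of YouTube uploads as of July 2015 (figure from the post).
HOURS_UPLOADED_PER_MINUTE = 400

minutes_per_day = 24 * 60                                   # 1,440 minutes
hours_per_day = HOURS_UPLOADED_PER_MINUTE * minutes_per_day # hours of new video daily
years_per_day = hours_per_day / (24 * 365)                  # viewing-years added daily

print(hours_per_day)            # 576000
print(round(years_per_day, 1))  # 65.8
```

In other words, every single day YouTube takes in well over sixty years' worth of continuous viewing -- which is why, as discussed below, nearly everything about processing it must be automated.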

When we talk about uploaded videos, it's not just Internet bandwidth and disk space, it's also processing such as transcoding, sorting, analysis, and much more -- a whole array of activities triggered by every single "simple" YouTube upload.

At these kinds of data volume levels, pretty much everything has to be entirely automated for the overwhelmingly vast majority of videos. Manual processing, or manual responses to every or even most user queries or complaints, would be utterly impractical.

Obviously, money is an important aspect of YouTube. Content owners can earn revenue from user views of their content via ads, and Google generates income in the process. Since there are crooks around attempting to game that system (e.g., through false clicks and fake views), significant resources must be devoted to detecting and eliminating their impact as well. And YouTube operations don't come cheap. Outside of the uploading numbers above, think about all the people using YouTube-related resources to view videos at any given moment around the world. YouTube has over a billion users. Hundreds of millions of video hours are viewed via billions of YouTube clicks every day! And yes, Google wants to quite appropriately make a profit with YouTube as well.

This brings us to the real heart of the matter, where brilliant YouTube engineering meets The Twilight Zone -- in other words (drum roll, please): the legal system.

Here is a truism that may give you a headache to even think about: Many of the key aspects of YouTube that ordinary video uploaders consider to be the most bizarre and unfair are fundamental requirements to helping make YouTube possible at all!

Without YouTube's Content ID system that permits content owners to detect and monetize material they own that YouTube users have uploaded without permission or rights (e.g., popular music clips, to name but one of many examples), the likely outcome in the vast majority of cases would be complete takedowns under the DMCA (Digital Millennium Copyright Act) and other laws -- with all of which Google/YouTube must comply.

A big plus for all of us from Content ID -- and key to keeping so many great videos available on YouTube -- is that it provides content owners with alternative options to total takedowns -- such as blocking only in certain geographic areas, monetization by the content owner rather than the unauthorized uploader, and so on. Similarly, the YouTube "three strikes" copyright violations policy, and other related Terms of Use policies, are themselves alternatives to the otherwise "most likely under the law" outcomes of immediate account terminations and even legal actions being taken against unauthorized uploaders by content owners.

None of this is to suggest that everything is butterflies and rainbows with Content ID. Like any system -- especially one that rides the thin line between technology and the volatile world of courts and lawyers -- it is not a perfect mechanism.

Crooks are continually trying to circumvent Content ID, to monetize videos over which they have no rights at all. I've made a sort of a hobby (yeah, I have some eclectic hobbies) of watching for these and reporting them to YouTube, but they're fairly easy to find. Just do a search on YouTube for pretty much any well known movie you've ever heard of, or most popular television shows even many decades old. Odds are you'll get lots of results, many of them seemingly incredibly recent (like uploaded only a day or even just a few hours ago). Their large quantity suggests automated systems doing the uploading, usually to short-term "throw-away" accounts. If you actually try to view these videos, you'll typically find they're either nothing but raw spam -- displaying a link urging you to visit pirate sites to see the actual videos (where you're likely to be met with dubious credit card requests, malware, or worse) -- or monetized versions of the films or shows that have been altered in ways meant to evade Content ID for as long as possible (the methods employed range from the comparatively subtle to horrific and bizarre visual distortions).
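To see why this cat-and-mouse game plays out the way it does, consider the difference between an exact cryptographic hash and a coarse perceptual fingerprint. The following is a deliberately simplistic toy -- emphatically not Content ID's actual algorithm -- where "frames" are just brightness numbers, but it shows why matching must tolerate small distortions and why pirates keep escalating to grotesque ones:

```python
import hashlib

def exact_hash(frames):
    # A cryptographic hash: any single-unit change yields a totally different digest.
    return hashlib.sha256(bytes(frames)).hexdigest()

def perceptual_fingerprint(frames, block=4):
    # Toy fingerprint: one bit per block of frames, set when the block's
    # mean brightness exceeds the clip-wide mean. Small uniform shifts
    # leave the fingerprint unchanged.
    mean = sum(frames) / len(frames)
    return [1 if sum(frames[i:i + block]) / block > mean else 0
            for i in range(0, len(frames), block)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original = [10, 12, 11, 13, 200, 210, 205, 198, 50, 55, 52, 51]
pirated  = [f + 2 for f in original]   # slight brightness shift, as a pirate might apply

print(exact_hash(original) == exact_hash(pirated))  # False -- exact hash is evaded trivially
print(hamming(perceptual_fingerprint(original),
              perceptual_fingerprint(pirated)))     # 0 -- coarse fingerprint still matches
```

A real system works on enormously richer features, of course, but the tension is the same: the fuzzier the match, the harder it is to evade -- and the greater the risk of false positives against innocent uploaders, which is exactly the flip side discussed below.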

I hate these kinds of outright cheaters. They're trying to manipulate users into viewing spam and/or substandard perversions of the original films or programs, to make money from content over which they have zero rights. YouTube is constantly working to fight them, but as a fan of classic movies and TV -- and of YouTube -- I personally feel that this category of copyright violators deserves no leniency.

The flip side is that there are situations where innocent users can become inappropriately targeted by Content ID or YouTube's copyright strikes reporting systems, via false positive Content ID hits, inappropriate copyright claims, and associated video demonetizations, takedowns, and account suspension/termination actions.

False claims against YouTube videos by content owners (or purported content owners) -- whether purposeful or accidental, by design or through sloppiness -- occur every day. At YouTube scale, significant numbers of users are affected.

Such situations can get pretty "meta" too. There are all sorts of complexities surrounding figuring out what is actually "public domain" video, and how to deal with it. For example, think about the case of a content owner who uses public domain material in their own production, and who then inappropriately claims rights against a third-party production that happened to use the same public domain clip. There are also cases of claimants asserting rights to materials completely produced by someone else, when that original material was partially or wholly incorporated into a larger production by the claimant. Your head spinning yet?

The concept of "fair use" -- tough even for the courts to deal with over the years -- is currently very difficult to incorporate in a useful form into scanning algorithms. Classical music has been a traditional problem as well. I personally know one YouTube user who performs long classical pieces on the piano, who has repeatedly had his YouTube videos demonetized because Content ID was incorrectly claiming his performances for other parties (this is a tough kind of case, because high quality performances of the same classical piano composition by two different excellent players can sound very similar). The poor guy was actually resorting to purposely incorporating errors into his piano recordings to try to differentiate them when uploaded. Fortunately, YouTube has been making considerable technical strides in minimizing the problems that have affected him and other users in this kind of analysis.

Yes, when false claims or other similar problems hit an ordinary uploader's YouTube video, it can indeed seem like a Kafkaesque, automated-forms ordeal to try to resolve them. This situation is improving -- YouTube has actually been making dramatic improvements in their claim/counterclaim resolution flow -- although some problems in these respects still definitely persist.

Keep in mind -- as was noted earlier -- that at these video upload volume levels, most or all stages of the process must by definition involve automated rather than human-based analysis. Crucially, the DMCA and other related laws also impose an extremely limited range of options with which YouTube can deal with these situations while staying within those laws, as it must -- even in some cases when faced with abusers of the DMCA who make repeated false content claims.

Google knows there's a lot more work to do in this context. YouTube last month publicly announced (!topic/youtube/x3aGmn_MsqI ) the creation of a new team specifically to improve transparency, communications, and associated processes across a range of these issues. What we also need is reform of the entire copyright ecosystem to more fairly treat ordinary users instead of the "guilty until proven innocent" skew that current content ownership/copyright laws tend to require -- though given our current toxic political environment I wouldn't bet the farm on the likelihood of positive legal changes in this regard anytime soon.

The various Google/YouTube teams who breathe this stuff 24/7/365 try very hard to get it all right. But when it comes to video and the Internet, especially when one considers the multitude of complicated, multidisciplinary aspects, nothing is trivial, and nothing comes easily, in the associated technical, policy, or legal realms.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 09:03 AM | Permalink

March 06, 2016

When Google's Chrome Security Warnings Can Do More Harm Than Good

We begin with a stipulation. Google has world class privacy and security teams. I know many of the Googlers on those teams. There just ain't no better this side of Alpha Centauri. They want what they feel is best for Google's users.

That said, one of Google's institutional weaknesses -- improving but still very much present -- is (in my opinion) a recurring lack of clarity when it comes to understanding the impacts that some of their design decisions have on ordinary, non-techie users with busy lives that don't necessarily revolve around the nuts and bolts intricacies of these systems.

Last June, in "When Google Thinks They're Your Mommy" -- -- I noted some concerns regarding how particular aspects of the Chrome browser security model can negatively impact ordinary users.

Today let's look at a specific interesting and current related example.

The image below (please click over to if you're reading this on a text-based mailing list) is from a current main payments page of a little firm called AT&T -- in this case for payments from bank accounts, where you enter ACH routing and account numbers. We can assume that this page is used by many millions of persons on a routine basis.

We've spent many years training users to look for a little lock, or key, or green icon, or something similar up on the URL bar to indicate a "secure" page. So one can forgive users for being concerned when they notice that this particular page just shows an ordinary page icon -- nothing to indicate security beyond the URL starting with https. You can't see that icon in this screenshot, but it's the same one pointed at by the red arrow below.

If one happens to think of hovering over that icon on the URL bar, Chrome claims the page is not secure, and offers a "Security Overview" details link. The image below shows those details.

And if you really believe that an ordinary, non-technical user could make head or tail out of this information presented by Chrome in this case, I strongly suggest cutting back on your psychedelics dosage.

The summary near the top appears definitive: "This page is not secure" it proclaims in a dramatic orange font.

Below that we have an "explanation" -- the certificate is signed with SHA-1, which we can agree is considered weak by modern security standards. Scary!

But wait, let's read on. Further down the overview, we're told that the page is actually using a secure TLS connection: "The connection to this site is using a strong protocol version and cipher suite" -- and there's even a reassuring green ball right there!

Further below, there's another green ball, and we're informed that "All resources on this page are served securely." Phew! This sounds like good news!
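The apparent contradiction between the one-line summary and the detailed overview can be pictured as a bundle of independent checks, where a single weak link flips the whole verdict regardless of how green everything else is. This is a toy illustration of that logic only -- not Chrome's actual code -- with all the names invented for the example:

```python
# Hypothetical breakdown of what a browser's "security overview" evaluates.
checks = {
    "certificate_signature": "sha1",     # weak by modern standards -> the one red flag
    "tls_protocol_and_cipher": "strong", # the green ball: strong protocol and cipher suite
    "resource_loading": "all_https",     # the other green ball: no mixed content
}

WEAK_SIGNATURES = {"md5", "sha1"}

def summary_verdict(checks):
    # Summary logic: one weak component dominates, no matter what else passes.
    if checks["certificate_signature"] in WEAK_SIGNATURES:
        return "This page is not secure"
    return "Secure"

print(summary_verdict(checks))                         # This page is not secure
print(checks["tls_protocol_and_cipher"] == "strong")   # True -- yet the summary stays negative
```

So the summary and the details aren't really contradicting each other -- they're answering different questions. The trouble is that nothing on the screen tells an ordinary user that.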

OK. Now, based on all this, to win the new car and a fantastic array of kitchen appliances, just answer this one simple question, yes or no:

Is this page under discussion actually "secure" ... ?

Time's almost up ...

Buzzz! Sorry, it was a trick question.

Because even though Google's "summary" judgment is that the page is not secure, the reality is that -- even if Google were not presenting us with seemingly contradictory detailed conclusions on their details pane -- the term "secure" is absolutely meaningless without appropriate context not only relating to systems and programs, but also to possible attackers and the probability of attacks in any given case.

It seems certain enough that Google actually knows this, but in their attempt to avoid really explaining, they've muddled the message into a scary conclusion that isn't useful to most people. The clue is that Google didn't mark the page with a red "X" padlock, or present even more terrifying warnings and/or access blocks.

So Google very likely appreciates that (to paraphrase "Miracle Max" from "The Princess Bride"), the page is actually "mostly" secure, at least as far as most users should be concerned. That is, the odds of anyone evil (leaving aside aspects of AT&T itself) getting hold of your data sent through that page are probably really low in a practical sense.

But as mentioned above, Google -- their top notch techies and policy folks notwithstanding -- still has difficulty explaining matters like this to ordinary people, so as in this case, they tend to fall back on "summary" statements like "this page is not secure" -- that can leave users unnecessarily confused, concerned, and as the saying goes, twisting slowly in the wind.

Because -- let's face it -- all this talk of SHA-1 certificates and TLS and battles between the green balls and orange triangles means nothing to most people.

They just want to pay their damned bills. And while we can assume that AT&T will eventually update their certs, good luck finding the person at AT&T in charge of actually doing that. Sure, call up customer service and try to get them to fix it. Go ahead. Everyone should have a hobby.

My bottom line here is of course not that Google should ignore security concerns. Far from it.

But I will assert that Google is in many cases failing to do a good job at explaining what's really going on in situations like this -- in language that most users can understand -- and that various aspects of their deployed security policies are confusing, arbitrary, and can cause users unnecessary alarm and confusion. I hear from such users every single day.

I am not claiming that this is easy stuff to get right. And -- believe me -- I understand why techies often aren't thrilled to be tasked with figuring out how to explain these matters in non-techie terms for the world at large.

This is all really important though. Even the best security concepts can be rendered impotent or worse if their deployment is more opaque and contradictory than transparent, consistent, and understandable.

And that's the truth.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 11:00 AM | Permalink
