Criminal Behavior: How Facebook Steals Your Security Data to Violate Your Privacy

One of the most fundamental and crucial aspects of proper privacy implementations is the basic concept of “data compartmentalization” — essentially, assuring that data collected for a specific purpose is only used for that purpose.

Reports indicate that Facebook is violating this concept in a way that is directly detrimental to both the privacy and security of its users. I’d consider it criminal behavior in an ethical sense. If it isn’t already actually criminal under the laws of various countries, it should be.

There’s been much discussion over the last few days about reports (confirmed by Facebook, as far as I can determine) that Facebook routinely abuses their users’ contact information, including phone numbers provided by users, to target ads at other users who may never have provided those numbers in the first place. In other words, if a friend of yours has your number in his contacts and lets Facebook access it, Facebook considers your number fair game for ad targeting, even though you never provided it to them or gave them permission to use it. And you have no way to tell Facebook to stop this behavior, because your number is in someone else’s shared contacts address book that is under their control, not yours.

This abuse by Facebook of “shadow contacts” is bad enough, but is actually not my main concern for this post today, because Facebook is also doing something far worse with your phone numbers.

By now you’ve probably gotten a bit bored of my frequent posts strongly urging that you enable 2sv (two-step verification, 2-factor verification) protections on your accounts whenever this capability is offered. It’s crucial to do this on all accounts where you can. Just a few days ago, I was contacted by someone who had failed to do this on a secondary account that they rarely used. That account has now been hijacked, and they’re concerned that someone could be conducting scams using that account — still in their name — as a home base for frauds.

It’s always been a hard sell to get most users to enable 2sv. Most people just don’t believe that they will be hacked — until they are and it’s too late (please see: “How to ‘Bribe’ Our Way to Better Account Security” – https://lauren.vortex.com/2018/02/11/how-to-bribe-our-way-to-better-account-security).

Among the various choices that can be offered for 2sv (phone-based codes, authenticator apps, U2F security keys, etc.), the phone-based systems offer the least security. Even so, 2sv via phone-based text messaging still greatly predominates among users who have 2sv enabled, because virtually everyone has a mobile phone capable of text messaging.

But many persons have been reluctant to provide their mobile numbers for 2sv security, because they fear that those numbers will be sold to advertisers or used for some other purpose than 2sv.

In the case of Google, such fears are groundless. Google doesn’t sell user data to anyone, and the phone numbers that you provide to them for 2sv or account recovery purposes are only used for those designated purposes.

But Facebook has admitted that they are taking a different, quite horrible approach. When you provide a phone number for 2sv, they feel free to use it as an advertising targeting vector that feeds into their “shadow contact” system that I described above.

This is, as I suggested, so close to being criminal as to be indistinguishable from actual criminality.

When you provide a phone number for 2sv account security to Facebook, you should have every expectation that this is the ONLY purpose for which that phone number will be used!

By violating the basic data compartmentalization concept, Facebook actually encourages poor security practices: it discourages the use of 2sv by users who don’t want to provide their phone numbers for commercial exploitation by Facebook!

Facebook will say that they now have other ways to provide 2sv, so you can use 2sv without providing a phone number.

But they also know damned well that most people do use mobile phones for 2sv. There are very large numbers of people who don’t even have smartphones, just simple mobile phones with text messaging functions. They can’t run authenticator apps. Security keys are only now beginning to make slow inroads among user populations.

So Facebook — in sharp contrast to far more ethical companies like Google who don’t treat their users like sheep to be fleeced — is offering vast numbers of Facebook users a horrible Hobson’s choice — let us exploit your phone number for ad targeting, or suffer with poor security and risk your Facebook account being hijacked.

This situation, piled on top of all the other self-made disasters now facing Facebook, helps to explain why I don’t have a Facebook account.

I realize that Facebook is a tough addiction to escape. “All my friends and family are on there!” is the usual excuse.

But if you really care about them — not to mention yourself — you might consider giving Facebook the boot for good and all.

–Lauren–

How Google Documentation Problems Can Lead to Public Relations Nightmares

UPDATE (October 1, 2018): Please Don’t Ask! There Are No “Google Explainers”

– – –

Google has been going through something of a public relations nightmare over the last week or so, all related to a new feature that was added to their Chrome browser — that actually was an excellent, user-positive feature! (Please see: “Ignore the Silly Panic over Google Chrome’s New Auto-Login Feature” – https://lauren.vortex.com/2018/09/24/ignore-the-silly-panic-over-google-chromes-new-auto-login-feature).

After a massive backlash — which I personally feel was almost entirely uninformed and unnecessary — Google has announced that they’ll provide a way for users to disable this useful feature (my recommendation to users is to leave it enabled).

But how did we get to this point?

This entire brouhaha relates to Chrome browser sync, which enables the synchronization of data — bookmarks, passwords, browsing history, etc. — between multiple devices running Chrome. It’s a fantastically useful feature that unfortunately is widely misunderstood.

Part of the reason for the confusion is that it really is not well documented — the associated help materials can be misunderstood even by hardcore techies, and obviously this can be even more troublesome for non-technical users. This has been exacerbated by some aspects of the associated user interface, but Google documentation and other help resources are primarily at fault.

The triggering event for this Google PR mess was the false assumption by some observers that the new Chrome auto-login feature would automatically enable Chrome sync. It doesn’t, and it never did.

But how many Chrome users realize how much flexibility actually exists in the sync system?

For example, while the default settings will sync all categories of data, there are customization options that permit users to specify exactly which classes of data they wish to sync or not sync. I tend to sync bookmarks and not much else.

The main concern expressed about sync during this controversy relates to Google seeing your synced browsing history (which again, I stress has always been possible for users to disable in the sync system).

But how many users realize that you can choose to sync any or all data classes between your devices without Google being able to interpret them at all, simply by specifying a sync “pass phrase”? That pass phrase encrypts the data so that it only exists in unencrypted form on your own devices — not at Google. Doing this means that Google can’t provide various centralized value-added features, but that’s your choice!
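For the technically inclined, here is a minimal sketch in Python (using the third-party cryptography package) of the general idea behind a sync pass phrase. This is not Chrome’s actual sync protocol, just an illustration of client-side encryption: the key is derived from the pass phrase on your own device, and the server only ever sees ciphertext it cannot read. The pass phrase and bookmark below are obviously just examples.

# A minimal sketch (not Chrome's sync protocol) of pass-phrase-protected sync:
# the pass phrase never leaves the device, a key is derived from it locally,
# and only ciphertext is uploaded to the sync server.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the user's sync pass phrase."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

# On the local device: encrypt a bookmark before it is uploaded.
salt = os.urandom(16)  # stored alongside the ciphertext, not secret
key = derive_key("correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"https://example.com/my-bookmark")

# The server stores only (salt, ciphertext) and can read neither. Another
# device that knows the same pass phrase can decrypt:
print(Fernet(derive_key("correct horse battery staple", salt)).decrypt(ciphertext))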

If all of this had been better documented (in ways understandable to a wide variety of users of different technical skill levels), much or all of this entire controversy could have been avoided.

While Google has made significant strides in their help and documentation resources over the years, they still have a long, long way to go, especially when dealing with the non-technical users who make up a large and growing segment of their user population. 

I have long asserted that Google (and its users!) would greatly benefit from a new class of Google-related documentation and help systems, created and maintained specifically to assist all users — including especially non-technical users — to better understand these necessarily complex systems and environments. 

I would suggest that these include textual materials specifically written for this purpose, with supplemental video content as well. Call them “Google Explainers” or whatever, but in Google parlance I would assert that ongoing deficiencies in this area represent a “Code Yellow” (extremely important) class of issues for both Google and its users.

–Lauren– 

Ignore the Silly Panic over Google Chrome’s New Auto-Login Feature

UPDATE (September 27, 2018): How Google Documentation Problems Can Lead to Public Relations Nightmares

UPDATE (September 25, 2018): In response to complaints about this actually very positive and useful new feature, Google has announced that an upcoming version of Chrome will provide an option for users to disable this functionality. But I recommend that you leave it enabled — I certainly will.

– – –

You may have seen stories going around over the last couple of days with various observers and so-called “experts” going all wacko panicky over a new feature in Google’s Chrome that automatically logs you into the browser when you log into a Google account.

In reality, this is a major privacy-positive move by Google, not any kind of negative as those breathless articles are trying to make you believe!

Over time, many users — especially in situations where multiple people use the same computer — have come to me confused about who was really logged into what. They’d log in to their own Google accounts but later discover that the browser was still logged in as someone else entirely, causing not only confusion, but the potential for significant user errors as well.

I applaud Google changing this. It improves user privacy and user security, by helping to assure that the browser and Google Accounts are using the same identities, and that you’re not accidentally screwing around with someone else’s browser data.

Some panicky observers are loudly proclaiming that they never want to login to the browser. They seem on the whole to be rather confused. You can still use the browser as Guest. You can still switch user identities on the browser via the “Manage People” function in settings.

The key functionalities of browser login are to keep track of different users’ browser settings, and to provide sync capabilities. And the sync system isn’t automatically turned on by these new changes. If you want to sync bookmarks or passwords or whatever, you still need to enable this explicitly and you still have complete control over what is being synced, just like before.

Google should be getting applause for this new Chrome auto-login feature, not silly complaints.

Kudos to the Chrome team.

–Lauren–

More Bull from the Google Haters: Search Results and Trump’s Travel Ban

Here we go again. There are new stories today being breathlessly spouted by the alt-right, and being picked up by mainstream media, about internal Google emails showing employees discussing possible ways to “leverage” search results to help push back against Trump’s racist travel ban in January 2017, shortly after his inauguration.

The key aspect to note about this media brouhaha is that NONE of those ideas were EVER implemented. And the discussions themselves include participants noting why they shouldn’t be.

These discussions were the personal thoughts of individual Googlers, who are encouraged by Google to speak as openly as possible internally to help assure that Google has a wide range of opinions as input to decision-making on an ongoing basis.

I experienced this firsthand during the period ending several years ago when I consulted to Google. I had never seen such an open exchange of ideas at any large firm before. I was absolutely in awe of this — and actively participated in many internal discussions — because such interchange is an incredibly important asset — not only to Google, but to its users and to the world at large.

You want to avoid whenever possible having employees self-censoring internally about controversial matters. You want the maximum practicable interchange of ideas, many of which by definition will never actually be implemented.

We’d frankly have a much better world if such open internal discussions took place at all firms and other organizations.

What’s so appalling about this situation is that there are (or were) individuals inside Google who would purposely leak such internal discussions, obviously in the hopes of generating exactly the kinds of fanatical Google hate being demonstrated by the alt-right and their allies, and to try to stifle the kinds of open internal discussions that are so important to us all.

–Lauren–

What We See on the Leaked TGIF Video Makes Us Proud of Google

Ever since an online right-wing rag recently released a leaked copy of a corporate “TGIF” meeting at Google (recorded a couple of days after the election of Donald Trump), I’ve been receiving emails from various Trump supporters pointing at various short, out-of-context clips from that video to try to make the argument that a vast, conspiratorial political bias by Google is on display.

This is utter nonsense. And a viewing of the entire now public meeting recording (https://lauren.vortex.com/g-tgif) not only reveals a lack of bias, but should inspire a completely different set of reactions — namely confidence and pride.

For in this video we see exactly what I for one would have hoped to see from the leaders of a powerful corporation under such circumstances — expressions of personal concern, but a clear determination not to permit personal feelings to skew or bias Google search engine or other services.

As I watched this video, I found myself almost constantly nodding my head in agreement. Frankly, if I had been up there on that stage I would have been sorely tempted to state my concerns regarding the election’s outcome in somewhat stronger language. And let’s face it, events in the ensuing nearly two years since that election have proven these kinds of concerns to have been utterly justified.

The motives of the Googler or ex-Googler who originally leaked this TGIF video are obvious enough — to try to feed into the alt-right’s false narratives of claimed political bias at Google.

In this respect that person failed miserably, because any fair-minded individual viewing the entire video cannot fail to see corporate leaders explicitly keeping their personal feelings separate from corporate policies. 

That’s not to say that this nefarious leaker hasn’t done real damage inside Google. Reportedly, internal access to TGIF videos has been greatly restricted in the wake of the leak. That’s bad news all around — open discussion of sometimes controversial issues inside Google is key not only to Google’s success, but is important to Google’s users and the global community as well.

And of course the leaker has now spawned a plethora of additional right-wing articles attacking various Google execs, and a range of new wacky false conspiracy theories, including the bizarre notion that the beanie propeller hats typically worn by new Google employees are actually some kind of creepy cult symbolism. Give me a break! Apparently these conspiracy idiots never saw “Beany & Cecil” (https://www.youtube.com/watch?v=cMdReHP9cb0).

Google — like all firms — is made up of human beings, and a person hasn’t walked this planet who qualifies as perfect. But when I watch this video, I see a group of people working very hard to do the right thing, to keep Google firmly on an unbiased and even keel despite personal disappointments.

And yes, that makes me very proud of Google and Googlers.

–Lauren–

Google Backs Off on Unwise URL Hiding Scheme, but Only Temporarily

In previous posts, including “Here’s How to Disable Google Chrome’s Confusing New URL Hiding Scheme” (https://lauren.vortex.com/2018/09/07/heres-how-to-disable-google-chromes-confusing-new-url-hiding-scheme), I’ve noted the serious security and other problems related to Google Chrome’s new policy of hiding parts of site URLs.

Google has now — sort of, temporarily — backed off on these changes.

In a post over on the Chromium bug tracker, at:

https://bugs.chromium.org/p/chromium/issues/detail?id=883038

they note that URL subdomain hiding (Google uses the term “elide” — how often do you see that one?) is being rolled back in Chrome M69, but the post also says that they plan to begin hiding — I mean “eliding” — www again in M70, though not “m” (no doubt because they realized what a potential mess that made over on Tumblr). They also say that they’ll initiate a discussion with standards bodies about reserving “www” and “m” as hidden subdomains.
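To illustrate why this kind of elision is more than cosmetic, here is a hypothetical sketch of a simplified elide rule in Python. It is not Chrome’s implementation; the point is simply that two distinct hosts can collapse into the same displayed string, which is reportedly the sort of thing that caused the confusion at Tumblr, where blog names are themselves subdomains.

# A hypothetical sketch of a simplified "elide trivial subdomains" rule, not
# Chrome's actual code. It only illustrates that distinct hosts can collapse
# into one displayed string.
ELIDED_PREFIXES = ("www.", "m.")

def displayed_host(real_host: str) -> str:
    """Return the host as an eliding omnibox might display it."""
    for prefix in ELIDED_PREFIXES:
        if real_host.startswith(prefix):
            return real_host[len(prefix):]
    return real_host

# On platforms where each blog or user gets its own subdomain,
# "m.blogs.example" can be an entirely different site than "blogs.example",
# yet both display identically once "m." is hidden:
for host in ("blogs.example", "m.blogs.example", "www.blogs.example"):
    print(f"{host:22} shown as {displayed_host(host)}")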

The comments on that Chromium post appear to be virtually universally opposed to Google’s hiding any elements of URLs. At the very least, it’s obvious that Google should not begin such URL modifications again until such time (if ever) as standards bodies have acted in these regards, and I would argue that these bodies should not do so in the manner that Google is now pushing.

The www and m subdomains have been integral parts of the user experience on the Web for decades. Tampering with them now (especially www) makes no sense, and (along with the other action that Google took at the same time — hiding the crucial http:// and https:// prefixes that are key signals regarding communications security) just puts users in an even more vulnerable position, as I discussed in “Chrome Is Hiding URL Details — and It’s Confusing People Already!” (https://lauren.vortex.com/2018/07/10/chrome-is-hiding-url-details-and-its-confusing-people-already).

We can certainly have a vibrant discussion regarding additional signals that could help users to detect phishing and other URL-related attacks, but any and all changes to URL displays (including involving http, https, m, www, and so on) should only take place if and after there is broad community agreement that such changes are actually user positive.

Google should completely cease all of these URL changes, permanently, unless such criteria are met.

–Lauren–

Verizon’s 5G Home Broadband Has a Rough Start

A few days ago, Verizon Wireless announced with great fanfare that people in their initial handful of supported cities (including here in L.A.) could use a locator site as of this morning to check for availability of the new Verizon Wireless 5G Home Broadband service, which supposedly touts some impressive specs. Actually, we should call it “5G” with the quotes made obvious, since it’s not really a standardized 5G yet, but let that pass for now.

The locator site has been present at least since that announcement but said that you couldn’t actually check addresses until something like 5 AM PDT this morning. So this morning I decided to check my address. I didn’t expect it to be covered — I heard rumors that Verizon’s initial coverage of L.A. would be very small, perhaps centered on downtown L.A., and I’m literally in the other end of the city in the distant reaches of the San Fernando Valley.

The site apparently did enable its address checking functionality this morning. Well, in theory, anyway.

The page has an annoying overlay curtain effect when you touch it (that was there several days ago as well) but as of right now the “Check availability” link immediately punches you through to another page saying that service is not available at your address — before you’ve even entered a physical address.  Are they trying to guess your approximate location based on your IP address? Naw, that would never work — too prone to error, and think of all the people using mobile devices who all appear to be coming from carrier gateways.

Hmm. There is a “change address” link — and you can actually enter your address at that one. Oops, still says not available at your address. But, wait a second. Whether you enter your address directly or not, there’s a note under that unavailability announcement:

Server is temporarily down, couldn’t able to process the request currently.

Wow, this is starting to feel like a phishing site with a backend coded by someone who clearly wasn’t a native English speaker.

And checking again just now, the site is still in this condition.

Not an auspicious beginning.

–Lauren–

EU Preliminarily Passes Horrific Articles 11 & 13 — Here’s How to Fight Back!

By a vote of 438 to 226, the massively confused and lobbyist-owned EU Parliament has preliminarily passed the horrific Article 11 and Article 13, aimed at turning ordinary users into the slaves of government-based Internet censorship and abuse.

The war isn’t over, however. These articles now enter a period of negotiation with EU member states, and then are subject to final votes next year, probably in the spring.

So now’s the time for the rest of the world to show Europe some special “tough love” — to help them understand what their Internet island universe will look like if these terrible articles are ever actually implemented.

Article 11 is an incredibly poorly defined “link tax” aimed at news aggregators. If Article 11 is implemented, the reaction by most aggregators who have jurisdictional exposure to the EU (e.g., EU-based points of presence) will not be to pay the link taxes, but rather will be to completely cease indexing those EU sites.

Between now and the final votes next year, news aggregation sites should consider temporarily ceasing to index those EU sites for various periods of time at various intervals, to give those sites a taste of what happens to their traffic when such indexing stops, and what their future would look like under Article 11.

Then we have Article 13’s massive, doomed-to-disaster content filtering scheme, which would be continually inundated with false matches and fake claims (there are absolutely no penalties under Article 13 for submitting bogus claims). While giant firms like Google and Facebook would have the resources to implement Article 13’s mandates, virtually nobody else could. And even the incredibly expensive filtering systems built by these largest firms have significant false positive error rates, frequently block permitted content, and cost vast sums to maintain.

A likely response to Article 13 by many affected firms would be to geoblock EU users from those companies’ systems. That process can begin now on a “demonstration” basis. The IP address ranges for EU countries can be easily determined in an automated manner, and servers programmed to present an explanatory “Sorry about that, Chief — You’re in the EU!” message to EU users instead of the usual services. As with the Article 11 protest procedure noted above, these Article 13 IP blocks would be implemented at various intervals for various durations, between now and the final votes next year.
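The mechanics of such a demonstration block are simple. Here is a minimal sketch in Python; the CIDR ranges are placeholders, since a real deployment would load the current EU country ranges from a geo-IP data provider and enforce the check at the load balancer or CDN edge.

# A minimal sketch of the "demonstration" geoblock described above. The CIDR
# ranges below are placeholders, not real EU allocations.
import ipaddress

EU_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "2.16.0.0/13",    # placeholder range
    "5.144.0.0/14",   # placeholder range
)]

def handle_request(client_ip: str) -> str:
    """Return a sketch of the response for a given client address."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in EU_RANGES):
        # HTTP 451 ("Unavailable For Legal Reasons") fits this case nicely.
        return "451 Sorry about that, Chief! You're in the EU. (Articles 11/13 demo block)"
    return "200 OK"

print(handle_request("2.17.44.10"))  # inside a placeholder "EU" range: blocked
print(handle_request("8.8.8.8"))     # outside those ranges: served normally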

The genuinely sad part about all this is that none of it should be necessary. The mandates of Articles 11 and 13 will never work as their proponents hope, and if deployed will actually do massive damage not only to EU (and other) users at large, but to the very constituencies that have lobbied for passage of these articles!

And that’s a lose-lose situation in any language.

–Lauren–

“The EU’s (Internet) Island” (To the tune of “Gilligan’s Island”)

UPDATE (September 12, 2018): EU Preliminarily Passes Horrific Articles 11 & 13 — Here’s How to Fight Back!

– – –

In honor of the EU’s horrific “Article 11” and “Article 13” — In the hope that they don’t pass, and that these lyrics don’t come to pass as reality.

– – –

“The EU’s (Internet) Island”
(To the tune of “Gilligan’s Island”)
Lauren Weinstein – 11 September 2018

Just sit right back and you’ll hear a tale,
A tale of a fateful trip.
When the EU tried to wreck the Net,
And just sunk their own sad ship.
Their ideas were a link tax few would pay,
And content censorship tools.
So the EU voted to proceed,
With a plan made by fools,
A plan made by fools!

(Lightning and Thunder!)

It didn’t work out like they hoped,
The world cut the EU off.
Fake claims filled the content filters fast,
And EU users were lost,
The EU users were lost!

Now the EU’s been chopped from the Net,
Like a lonely desert isle.
With Luxembourg,
And Brussels too,
And Frankfurt,
And yes Strasbourg!
The Hague as well,
And the rest,
Are here on the EU’s Isle!

<End>

YouTube’s Memory Miracle

The key reason why you’ll find me “from time to time” expressing criticism of various YouTube policies is simply that I love the platform so very much. If it vanished tomorrow, there’d be a gap in my life that would be very difficult to repair.

So let’s put aside for the moment issues of hate speech and dangerous dares and YouTube’s Content ID, and revel for a bit in an example of YouTube’s Memory Miracle.

A few minutes ago, a seemingly unrelated Google query pulled up an odd search result that I suddenly recognized: a YouTube video labeled “By Rocket to the Moon.” YES, the name of a children’s record I played nearly into groove death in my youth. It’s in my old collection of vinyl here for sure somewhere, but I haven’t actually seen or heard it in several decades at least:

By Rocket to the Moon: https://www.youtube.com/watch?v=9acg_P23oHY

Little bits and pieces of the dialogue and songs I’ve recalled over the years, in particular a line I’ve quoted not infrequently: “Captain, captain, stop the rocket. I left my wallet in another suit, it isn’t in my pocket!” As it turns out, I learned today that I’ve been quoting it slightly wrong: I’ve been saying “in my other suit” — but hell, close enough for jazz!

And speaking of jazz, I also realized today (it would have meant nothing to me as a child) that the jazzy music on this record was composed by the brilliant Raymond Scott and performed by none other than the wonderful Raymond Scott Quintette. You likely don’t recognize the names. But if you ever watched classic Warner Brothers cartoons, you will almost certainly recognize one of the group’s most famous performances, of Scott’s “Powerhouse” (widely used in those cartoons for various chase and machine-related sequences):

Powerhouse: https://www.youtube.com/watch?v=YfDqR4fqIWE

I’m obviously not a neurobiologist, but I’ve long suspected that what we assume to be memory “loss” over time with age is actually not usually a loss of the memories themselves, but rather a gradual loss or corruption of the “indexes” to those memories. Once you get a foothold into old buried memories through a new signal, they’ll often flow back instantly and with incredible accuracy. They were there all along!

And that’s why I speak of YouTube’s memory miracle. Old songs, old TV shows, even old classic commercials. You thought you forgot them eons ago, but play them again on YouTube even after gaps of decades, and full access to those memories is almost instantly restored.

In the case of this old record, I had just played a few seconds from YouTube today when the entire production came flowing back — dialogue, song lyrics, all of it. I was able to sing along as the words “popped in” for me a few seconds ahead of what I was hearing. (This leads to another speculation of mine relating to the serial nature of memories, but we’ll leave that discussion for a future post.)

YouTube had in a few seconds recreated — or at least uncovered and surfaced — the lost index that restored access to an entire cluster of detailed memories.

OK, so it’s not really a miracle. But it’s still wonderful.

Thanks YouTube!

–Lauren–

Here’s How to Disable Google Chrome’s Confusing New URL Hiding Scheme

UPDATE (September 17, 2018): Google Backs Off on Unwise URL Hiding Scheme, but Only Temporarily

– – –

A couple of months ago, in “Chrome Is Hiding URL Details — and It’s Confusing People Already!” (https://lauren.vortex.com/2018/07/10/chrome-is-hiding-url-details-and-its-confusing-people-already), I noted the significant problems already being triggered by Google’s new URL modification scheme in Chrome Beta. Now that these unfortunate changes have graduated to the current standard, stable version of Chrome, more complaints about this are pouring in to me from many more users.

I don’t normally recommend altering Chrome’s inner sanctum of “experimental” settings unless you’re a hardcore techie who fully understands the implications. But today I’m making an exception and will explain how you can disable these new URL handling behaviors and return Chrome to its previous (safer and logical) URL display methodology — at least until such a time as Google decides to force this issue and removes this option.

Ready? Here we go.

In the URL bar at the top of the browser (technically, the “omnibox”), type:

chrome://flags

then hit ENTER. You’ll find yourself in Chrome’s experimental area, replete with a warning in red that we’ll ignore today. In the “Search flags” box (just above the word “Experiments”), type:

steady

In the section labeled “Available” you should now find:

Omnibox UI Hide Steady-State URL Scheme and Trivial Subdomains

Obviously, the Chrome team and I have a difference of opinion about what is meant by “trivial” in this context.  Anyway, directly to the right you should now see an option box. Click the box and change the setting from:

Default

to:

Disabled

A large button labeled RELAUNCH NOW should be at the lower right. Go ahead and click it to restart the browser to make this change take effect immediately (if you have anything important in other open tabs, instead relaunch on your own later to protect your work).

That’s all, folks! The familiar URL behaviors should be restored, for now anyway.

Be seeing you.

–Lauren–

How Google Could Dramatically Improve the World’s Internet Security

UPDATE (October 8, 2021): It was just announced that Google will be giving free security keys to 10,000 particularly at risk Google users. Excellent to see this important step being taken!

– – –

It’s obvious that the security of SMS mobile text messaging as the primary means for 2-factor account authentications is fatally flawed. The theoretical problems are nothing new, but the dramatic rise in successful attacks demonstrates that the cellular carriers are basically inept at protecting their subscribers from SIM hijacking and other schemes (sometimes enabled by crooked insiders within the carrier firms themselves) that undermine the security of these systems.

While other 2-factor mechanisms exist, including authentication apps of various sorts, text messaging remains dominant. The reason why is obvious — pretty much everyone has a cell phone already in hand. Nothing else to buy or install.

The correct way to solve this problem is also well known – FIDO U2F security keys. Google has noted publicly that after transitioning their workforce to security keys from OTP (one-time password) systems, successful phishing attacks against Googlers dropped to zero.

Impressive. Most impressive.
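Why are the keys so effective? Here is a conceptual sketch in Python (using the cryptography package). It is not the actual U2F or WebAuthn wire protocol; the essential property it illustrates is that the key’s signature is bound to the origin it was registered for, so a signature harvested by a phishing domain will not verify at the legitimate site. The origins shown are hypothetical examples.

# A conceptual sketch only; NOT the actual FIDO U2F/WebAuthn protocol. The key
# signs the challenge together with the requesting origin, so credentials
# phished on a lookalike domain are useless against the legitimate site.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# At registration, the "security key" creates a keypair; the site keeps only
# the public key. (Origins and names below are hypothetical examples.)
device_key = ec.generate_private_key(ec.SECP256R1())
site_public_key = device_key.public_key()

def key_sign_login(origin: str, challenge: bytes) -> bytes:
    """What the security key does at login time: sign (origin || challenge)."""
    return device_key.sign(origin.encode() + challenge, ec.ECDSA(hashes.SHA256()))

challenge = os.urandom(32)  # issued by the real site for this login attempt

# The user was phished: the signature is bound to the phishing origin.
signature = key_sign_login("https://accounts.phish.example", challenge)

try:
    # The real site verifies against ITS OWN origin, so this fails.
    site_public_key.verify(signature,
                           b"https://accounts.real.example" + challenge,
                           ec.ECDSA(hashes.SHA256()))
    print("login accepted")
except InvalidSignature:
    print("login rejected: signature bound to a different origin")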

But in the world at large, there’s a major problem with this approach, as I discussed recently in: “Prediction: Unless Security Keys Are Free, Most Users Won’t Use Them” (https://lauren.vortex.com/2018/08/02/prediction-unless-security-keys-are-free-most-users-wont-use-them).

I have also previously noted the difficulties in convincing users to activate 2-factor authentication in the first place: “How to ‘Bribe’ Our Way to Better Account Security” (https://lauren.vortex.com/2018/02/11/how-to-bribe-our-way-to-better-account-security).

Essentially, most users won’t use 2-factor unless there are strong and obvious incentives to do so, because most of them don’t believe that THEY will ever be hacked — until they are! And they’re unlikely to use security keys if they have to buy them as an extra cost item.

Google is one of the few firms with the resources to really change this for the better.

Google should consider giving away security keys to their users for free.

The devil is in the details of course. This effort would likely need to be limited to one free key per user, and perhaps could be limited initially to users subscribing to Google’s “Google One” service (https://one.google.com/about). Please see today’s comments for some discussion related to providing users with multiple keys.

Mechanisms to minimize exploitation (e.g. resale abuse) would also likely need to be established.

Ultimately, the goals would be to provide real incentives to all Google users to activate 2-factor protections, and to get security keys into their hands as expeditiously as is practical.

Perhaps other firms could also join into such an effort — a single security key can be employed by a user to independently authenticate at multiple firms and sites.

It’s a given that there would indeed be significant expenses to Google and other firms in such an undertaking. But unless we find some way to break users out of the box of failed security represented especially by text messaging authentication systems, we’re going to see ever more dramatic, preventable security disasters, of a kind that are already drawing the attention of regulators and politicians around the world.

–Lauren–

Google Admits It Has Chinese Censorship Search Plans – What This Means

This post is also available in Google Docs format.

After a painfully long delay, Google admitted at an internal company-wide meeting yesterday that it indeed has a project (reportedly named “Dragonfly”) for Chinese government-controlled censored search in China, but asserts that it is nowhere near ready for deployment and is subject to a range of possible changes before deployment (I’ll add, assuming that it ever actually launches).

Some background:

“Google Must End Its Silence About Censored Search in China” – https://lauren.vortex.com/2018/08/09/google-must-end-its-silence-about-censored-search-in-china

“Google Haters Rejoice at Google’s Reported New Courtship of China” –  https://lauren.vortex.com/2018/08/03/google-haters-rejoice-at-googles-reported-new-courtship-of-china

“Censored Google Search for China Would Be Both Evil and Dangerous!” – https://lauren.vortex.com/2018/08/01/censored-google-search-for-china-would-be-both-evil-and-dangerous

While this was an internal meeting, it apparently leaked publicly in real time, and was reportedly terminated earlier than planned when it was realized that it was being live-tweeted to the public by somebody watching the event.

The substance of the discussion is unlikely to appease Googlers upset by these plans. For all practical purposes, management appears to be justifying the new project using much the same terms (e.g., “some Google is better than no Google”) used to try to justify the ill-fated 2006 entry of Google into censored Chinese search, which Google abandoned in 2010 after continuing escalation of demands by the Chinese government, and Chinese government hacking of Google systems.

Given the rapid recent escalation of Internet censorship and associated human rights abuses by China’s “President for Life” Xi, there’s little reason to expect the results to be any different this time around — in fact they’re likely to go bad even more quickly, making Google by definition complicit in the human rights abuses that flow from the Chinese government’s censorship regime.

The secrecy surrounding this project — few Googlers even knew of its existence until leaks began circulating publicly — was explained by Google execs as “typical” of various Google projects while in their early, very sensitive stages.

This alone suggests a serious blind spot in Google management’s analysis. Such logic might hold true for a “run-of-the-mill” new service. But keeping a project such as Chinese censored search under such wraps within the company — a project with vast ethical ramifications — is positively poisonous to internal company trust and morale when the project eventually leaks out — as we’ve seen so dramatically demonstrated in this case.

That’s why the (now public) Googler petition — reportedly signed by well over 1,000 Googlers and increasing — is so relevant and important. It wisely calls for the establishment of formal frameworks inside Google to deal with these kinds of ethical issues, giving rank-and-file employees a “seat at the table” for such discussions.

It also notably calls for the creation of internal “ombudspersons” roles to be directly engaged in these corporate ethical considerations — something that I’ve been publicly and privately advocating to Google over at least the last 10 years.

Irrespective of whether or not Google relaunches Chinese-government controlled censored search, the kinds of efforts proposed in the Googler petition would be excellent steps toward the important goal of improving Google’s ethical framework for dealing with both controversial and more routine projects going forward.

Leaks threaten the culture of internal openness that has been an important hallmark at Google since its creation 20 years ago (with this new Chinese government-censored search project being an obvious and ironic exception to Google’s open internal culture).

This internal openness is crucial not only for Google, but also for its users and the community at large as well. Vibrant open discussion internally at Google (which I’ve witnessed and participated in myself when I consulted to them a number of years ago) is what helps to make Google’s products and services better, and helps Google to avoid potentially serious mistakes.

But for any organization, when policy-related leaks occur of the sort that we’ve witnessed recently regarding Google and China, it strongly suggests that the organization does not have well-functioning or adequate internal staff-accessible processes in place to appropriately deal with these higher-pressure matters. Again, the kinds of proposals in the Googler petition would go a long way toward alleviating this situation.

These recent developments have brought Google to a kind of crossroads, a “moment of truth” as it were. What is Google going to be in its next 20 years? What kinds of roles will ethics play in Google’s decisions going forward? These are complex questions without simple answers. Google has a lot of serious work ahead in answering them to their own and the public’s satisfaction.

But Google is great at dealing with hard problems, and I believe that they’ll work their way to appropriate answers in these cases as well.

We shall see what transpires in the fullness of time.

–Lauren–

Beware the Fraudulent Blog Comments Scams!

A quick heads-up! While I’ve routinely seen these from time to time, there seems to be a major uptick in what are apparently fraudulent comment scam attempts here on my blog. They never get published since I must approve all comments before any appear, but their form is interesting and there likely is at least some human element involved, since they’re able to pass the reCAPTCHA “Are you a human?” test.

Here’s how the scams operate. Blogs that support comments (whether moderated or not) typically permit the sender to include their name, email address, and a contact URL with their comment submission. My blog will display only their specified name, and of course only if I approve the comment.

But many blogs include all of that information in the posted comments, and many blogs don’t moderate comments, or only do so after the fact if there are complaints about individual published comments.

The scam comments themselves tend to fall into one of two categories. They may be utterly generic, e.g.: “Thanks for this great and useful post!”

Or they may be much more sophisticated, and actually refer in a more or less meaningful way — sometimes in surprising detail — to the actual topic of the original post.

The email addresses provided with the comments could be pretty much anything. What matters are the URLs that the comment authors provide and hope you will publish: the scammers always provide URLs pointing at various fake “technical support” sites.

These cover the gamut: Google, Yahoo!, Microsoft, Outlook — and many more.

And you never want to click on those links, which almost inevitably lead to the kind of fake technical support sites that routinely scam unsuspecting users out of vast sums around the world every day.

It’s possible that these scam comment attempts are made in bulk by humans somewhere being paid a couple of cents per effort. Or perhaps they’re partly human (to solve the reCAPTCHA), and partly machine-generated.

In any case, if you run a blog, or some other public-facing site where comments might be submitted, watch out for these. Don’t let them appear on your sites! Your legitimate users will thank you.
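If you run a moderated blog, even a crude heuristic can help surface these submissions for closer scrutiny. Here is a purely illustrative sketch in Python; the keyword and brand lists are hypothetical placeholders, not any kind of comprehensive filter.

# A purely illustrative moderation heuristic: flag submitted comments whose
# author URL looks like a fake "tech support" destination. The keyword and
# brand lists are hypothetical examples.
from urllib.parse import urlparse

SUSPECT_WORDS = ("support", "helpline", "customer-care", "tech-help", "phone-number")
BRAND_BAIT = ("google", "yahoo", "microsoft", "outlook")

def looks_like_support_scam(author_url: str) -> bool:
    parsed = urlparse(author_url)
    host = (parsed.hostname or "").lower()
    path = parsed.path.lower()
    baity = any(brand in host or brand in path for brand in BRAND_BAIT)
    supporty = any(word in host or word in path for word in SUSPECT_WORDS)
    return baity and supporty

print(looks_like_support_scam("https://google-helpline-number.example/support"))  # True
print(looks_like_support_scam("https://myrealblog.example/about"))                # False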

–Lauren–

Fixing Google’s Gmail Spam Problems

The anti-spam methodology used by Google’s Gmail system — and most other large email processing systems — suffers a glaring flaw that unfortunately has become all too traditionally standard in email handling.

One of the most common concerns I receive from Google users is complaints that important email has gone “missing” in some mysterious manner.

The mystery is usually quickly solved — but a real solution is beyond my abilities to deploy widely on my own.

The problem is the ubiquitous “Spam” folder, a concept that has actually helped to massively increase the amount of spam flowing over the Internet.

Many users turn out to not even realize that they have a Spam folder. It’s there, but unnoticed by many.

But even users who know about the Spam folder tend to rarely bother checking it — many users have never looked inside, not even once. Google’s spam detection algorithm is so good that non-spam relatively rarely ends up in the Spam folder.

And therein lies the rub. Google’s algorithms are indeed good, but of course are not perfect. False positives — important email getting incorrectly relegated to the Spam folder — can be a really big deal — especially when important financial notifications are concerned, for example.

In theory, routine use of Gmail’s “filter” options could help to tame this problem and avoid some false positives being buried unseen. But the reality is that many of these important false positives are not from necessarily expected sources, and many users don’t know how to use the Gmail filter system — and in fact may be totally unaware of its existence. And frankly, the existing Gmail filtering user interface is not well suited to having the large and growing numbers of filters needed to try to deal with this situation (either from the standpoint of actual spam or of false positives) — trust me on this, I’ve tried!

So could we just train users to routinely check the Spam folder for important stuff that might have gotten in there by accident? That’s a tough one, but even then there’s another problem.

Many Gmail users receive so much spam — much of it highly repetitive — that manually plowing through the Spam folder looking for false positives is necessarily time consuming and prone to the error of missing important items, no matter how careful you attempt to be. Ask me how I know!
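One partial, techie-only workaround is to scan the Spam label programmatically for senders you consider important. Here is a sketch using the Gmail API via the google-api-python-client package; it assumes you have already obtained OAuth credentials (creds), and the sender addresses are hypothetical placeholders.

# A partial workaround sketch: scan the Spam label for mail from senders you
# care about, via the Gmail API. Assumes OAuth credentials ("creds") are
# already set up; the sender domains below are hypothetical examples.
from googleapiclient.discovery import build

IMPORTANT_SENDERS = ["mybank.example.com", "payroll.example.org"]

def find_false_positives(creds):
    service = build("gmail", "v1", credentials=creds)
    query = " OR ".join(f"from:{sender}" for sender in IMPORTANT_SENDERS)
    resp = service.users().messages().list(
        userId="me", labelIds=["SPAM"], q=query).execute()
    for ref in resp.get("messages", []):
        msg = service.users().messages().get(
            userId="me", id=ref["id"], format="metadata",
            metadataHeaders=["From", "Subject"]).execute()
        headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
        print(f'Possible false positive: {headers.get("From")} / {headers.get("Subject")}')

Of course, this is exactly the kind of band-aid that most users will never apply, and it does nothing about the underlying design.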

This takes us to the intrinsic problem with the Spam folder concept. Gmail and most other major mail systems accept many of the spam emails from the creepy servers that vomit them across the Net by the billions. Then they’re relegated to users’ spam folders, where they help to bury the important non-spam emails that shouldn’t be in there in the first place.

Since Google accepts much of this spam, the senders are happy and keep sending spam to the same addresses, seemingly endlessly. So you keep seeing the same kinds of spam — ranging from annoying to disgusting — over and over and over again. The sender names may vary, the sending servers usually have obviously bogus identities, but (unlike some malware that Google rejects immediately) the spam keeps getting delivered anyway.

The solution is obvious, even though nontrivial to implement at Google Scale. It’s a technique used by many smaller mail systems — my own mail servers have been using variations of this technique for decades.

Specifically, users need to be able to designate that particular types of spam will never be delivered to them at all, not even to the Spam folder. Attempts at delivering those messages should be rejected at the SMTP server level — we can have a discussion later about the most appropriate reject response codes in these circumstances, there are various ways to handle this.
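To make the idea concrete, here is a minimal, framework-agnostic sketch of an SMTP-time reject decision. The rule format, the example addresses, and the choice of reply codes are all my own illustrative assumptions; a real deployment would hook logic like this into the RCPT or DATA handlers of whatever MTA is in use.

# A minimal, framework-agnostic sketch of rejecting at SMTP time instead of
# accepting into a Spam folder. The rule format and reply codes are my own
# illustrative choices, not a description of how Gmail would do it.
import re

# Hypothetical per-user "never deliver" rules.
REJECT_RULES = {
    "lauren@example.net": [
        re.compile(r"@.*\.cheap-pills\.example$"),   # an entire sending domain
        re.compile(r"^lottery-winner@"),             # a specific local part
    ],
}

def rcpt_response(mail_from: str, rcpt_to: str) -> str:
    """Return the SMTP reply to give for this sender/recipient pair."""
    for rule in REJECT_RULES.get(rcpt_to, []):
        if rule.search(mail_from):
            # 550 5.7.1 is a standard permanent policy rejection; one could
            # instead argue for 550 5.1.1 ("no such user"), as discussed below.
            return "550 5.7.1 Delivery not authorized, message refused"
    return "250 OK"

print(rcpt_response("lottery-winner@scam.example", "lauren@example.net"))  # rejected
print(rcpt_response("friend@example.org", "lauren@example.net"))           # accepted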

Specifying the kinds of spam messages to be given this “delivery death penalty” treatment is nontrivial, both from a user interface and implementation standpoint — but I suspect that Google’s AI resources could be of immense assistance in this context. Nor would I assert that a “real-time” reject mechanism like this would be without cost to Google — but it would certainly be immensely useful and user-positive.

The data from my own servers suggests that once you start rejecting spam email rather than accepting it, the overall level of spam attempts ultimately goes down rather than up. This is especially true if spam attempts are greeted with a “no such user” reject even when that user actually exists (yes, this is a controversial measure).

There are certainly a range of ways that we could approach this set of problems, but I’m convinced that the current technique of just accepting most spam and tossing it into a Spam folder is not helping to stop the scourge of spam, and in fact is making it far worse over time.

–Lauren–

Location Tracking: Google’s the One You DON’T Need to Worry About!

I must keep this post brief today, but this needs to be said. There are a bunch of stories currently floating around in the news globally, making claims like “Google tracks your location even when you tell it not to!” and other alarming related headlines.

This is all false hype-o-rama.

Google has a variety of products that can make use of location data, both desktop and mobile, and of course there are various kinds of location data in these contexts — IP address location estimates, cell phone location data, etc. So it’s logical that these need to be handled in different ways, and that users have appropriate options for dealing with each of them in different Google services. Google explains in detail how they use this data, the tight protections they have over who can access this data — and they never sell this data to anyone. 

Google pretty much bends over backwards when it comes to describing how this stuff works and the comprehensive controls that users have over data collection and deletion (see: “The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about).

Can one argue that Google could make this even simpler for users to deal with? Perhaps, but how to effectively make it all even simpler than it is now in any kind of practical way is not immediately obvious.

The bottom line is that Google gives users immense control over all of this. You don’t need to worry about Google.

What you should be worrying about is the entities out there who gather your location data without your consent or control, who usually never tell you what they’re doing with it. They hoard that data pretty much forever, and use it, sell it, and abuse it in ways that would make your head spin.

A partial list? Your cellular carrier. They know where your phone is whenever it’s on their network. They collect this data in great detail. Turning off your GPS doesn’t stop them — they use quite accurate cell tower triangulation techniques in that case. Most of these carriers (unlike Google, who has very tight controls) have traditionally provided this data to authorities with just a nod and a wink!

Or how about the license plate readers that police and other government agencies have been deploying like mad, all over the country! They know where you drive, when you travel — and they collect this data in most cases with no real controls over how it will be used, how long it will be held, and who else can get their hands on it! You want someone to be worried about, worry about them!

And the list goes on.

It’s great for headlines and clickbait to pound on Google regarding location data, but they’re on the side of the angels in this debate.

And that’s the truth.

–Lauren–

Google Must End Its Silence About Censored Search in China

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

– – –

It has now been more than a week since public reports began surfacing alleging that Google has been working on a secret project — secret even from the vast majority of Googlers — to bring Chinese government-censored Google search and news back to China. (Background info at: “Google Haters Rejoice at Google’s Reported New Courtship of China” – https://lauren.vortex.com/2018/08/03/google-haters-rejoice-at-googles-reported-new-courtship-of-china).

While ever more purported details regarding this alleged effort have been leaking to the public, Google itself has apparently responded to the massive barrage of related inquiries only with the “non-denial denial” that they will not comment on speculation regarding their future plans.

This radio silence has seemingly extended to inside Google as well, where reportedly Google executives have yet to issue a company-wide explanation to the Google workforce, which includes many Googlers who are very concerned and upset about these reports.

With the understanding that it’s midsummer with many persons on vacation, it is still of great concern that Google has gone effectively mute regarding this extremely important and controversial topic. The silence suggests internal management confusion regarding how to deal with this situation. It’s upsetting to Google’s fans, and gives comfort to Google’s enemies.

Google needs to issue a definitive public statement addressing these concerns. Regardless of whether the project actually exists as reports have described — or if those detailed public reports have somehow been false or misleading — Google needs to come clean about what’s actually going on in this context.

Google’s users, employees, and the global community at large deserve no less.

Google, please do the right thing.

–Lauren–

Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

– – –

It’s already happening. Within a day of word that Google is reportedly planning to provide Chinese government-dictated censored search results and censored news aggregation inside China, the Google Haters are already salivating at the new ammunition that this could provide Congress to pillory Google and similarly castrate them around the world — for background, please see: “Censored Google Search for China Would Be Both Evil and Dangerous!” (https://lauren.vortex.com/2018/08/01/censored-google-search-for-china-would-be-both-evil-and-dangerous).

While Google has not confirmed these reports, the mere prospect of their being correct has already brought the righteous condemnation of human rights advocates and organizations around the globe.

And already, in the discussion forums that I monitor where the Google Haters congregate, I’m seeing language like “Godsend!” — “Miracle!” — “We couldn’t have hoped for anything more!”

It’s obvious why there’s such rejoicing in those evil quarters. By willingly allying themselves with the censorship regimes of the Chinese government that are used to repress and torment the Chinese people, Google would put itself in the position of being perceived as the willing pawn of those repressive Chinese Internet policies that have been growing vastly more intense, fanatical, and encompassing over recent years, especially since the rise of “president for life” Xi Jinping.

Already embroiled in antitrust and content management/censorship controversies here in the U.S., the European Union, and elsewhere, the unforced error of “getting in bed” with the totalitarian Chinese government will provide Google’s political and other enemies a whole new line of attack to question Google’s motives and ethical pronouncements. You can already visualize the Google-hating congressmen saying, “Whose side are you on, Google? Why are you helping to support a Chinese government that massively suppresses its own people and continues to commit hacking attacks against us?” We’ll be hearing the word “hypocritical” numerous times during numerous hearings, you can be sure. 

We can pretty well predict Google’s responses, likely to be much the same as they made back in 2006 during their original attempt at “playing nice” with the Chinese censors, an effort Google abandoned in 2010, after escalating demands from China and escalating Chinese hacking attacks.

Google will assert that providing some services — even censored in deeply repressive ways — is better than nothing. They’ll suggest that the censored services that would be provided would help the Chinese citizenry, despite the fact that the very results being censored, while perhaps relatively small in terms of overall percentages, would likely be the very search results that the Chinese people most need to see to help protect themselves from their dictatorial leaders’ information control and massive human rights abuses. Google will note that they already censor some results in countries like France and Germany (for example, there are German laws relating to Nazi-oriented sites).

But narrow removal of search results in functional democracies is one thing. The much wider categories of censorship demanded by the Chinese government — a single-party dictatorship that operates vast secret prison and execution networks — are something else entirely. It’s like comparing a pimple with Mt. Everest.

And that’s before the Chinese start escalating their demands. More items to censor. Access to users’ identity and other private data. Localization of Google servers on Chinese soil for immediate access by authorities.

Worst of all, if Google is willing to bend over and kowtow to the Chinese dictators in these ways, every other country in the world with politicians unhappy with Google for one reason or another will use this as an example of why Google should provide similar governmental censorship services and user data access to their own regulators and politicians. After all, if you’re willing to do this for one of the world’s most oppressive regimes, why not for every country, everywhere?

As someone with enormous respect for Google and Googlers, I can’t view these reports regarding Google and China — if accurate — as anything short of disastrous. Disastrous for Google. Disastrous for their users. Disastrous for the global community of ordinary users at large, who depend on Google’s search honesty and corporate ethics as foundations of daily life.

Joining with China in providing Chinese government-censored search and news results would provide haters and other evil forces around the planet the very ammunition they’ve been waiting for toward crushing Google, toward putting Google under micromanaged government control, toward ultimately converting Google into an oppressive government propaganda machine.

It could frankly turn out much worse for the world than if Google had never been created at all, 20 years ago.

I’m still hoping that these reports are inaccurate in key respects or in their totality. But even if they are correct, then Google still has time to choose not to go down this dark path, and I would strongly urge them not to move forward with any plans to participate in China’s repressive and dangerous totalitarian censorship regime.

–Lauren–

Prediction: Unless Security Keys Are Free, Most Users Won’t Use Them

Various major Internet firms are currently engaged in a campaign to encourage the use of U2F/FIDO security keys (USB, NFC, and now even Bluetooth) to encourage their users to avoid use of other much more vulnerable forms of 2sv (2-factor) login authentication, especially the most common and illicitly exploitable form, SMS text messaging. In fact, Google has just introduced their own “Titan” security keys to further these efforts.

Without getting into technical details, let’s just say that these kinds of security keys essentially eliminate the vulnerabilities of other 2sv mechanisms, and given that most of these keys can support multiple services on a single physical key, you might assume that users would be snapping them up like candy.

You’d be wrong in that assumption.

I’ve spent years urging ordinary users (e.g., of Google services) to use 2sv of any kind. It’s a very, very tough slog, as I noted in:

Google Users Who Want to Use 2-Factor Protections — But Don’t Understand How: https://lauren.vortex.com/2017/06/10/google-users-who-want-to-use-2-factor-protections-but-dont-understand-how

But even beyond that category of users, there’s a far larger group who simply don’t see the point of “hassling” with 2sv at all, resulting in what Google itself has publicly noted is a depressingly low percentage of users enabling 2sv protections.

Beyond logistical issues regarding 2sv that confuse many potential users, there’s a fundamental aspect of human nature involved.

Most users simply don’t believe that THEY are going to be hacked (at least, that’s their position until it actually happens to them and they come calling too late with desperate pleas for assistance).

Frankly, I don’t know of any “magic wand” solution for this dilemma. If you try to require 2sv, you’ll likely lose significant numbers of users who just can’t understand it or give up trying to make it work — bad for you and bad for them. They’re mostly not techies — they’re busy people who depend on your services, who simply do not see any reason why they should be jumping through what they perceive to be more unnecessary hoops — and this means that WE have not explained this all adequately and that OUR systems are not serving them well.

If you blame the users, you’ve already lost the argument.

Which brings us back to those security keys. Given how difficult it is to get most users to enable 2sv at all, how much harder will it be (even if the overall result is simpler and far more secure) to get users to go the security key route when they have to pay real money for the keys?

For many persons, the $20 or so that these keys typically cost is significant money indeed, especially when they don’t really see the value of having them in the first place (remember, they don’t expect to ever be hacked).

I strongly suspect that beyond “in the know” business/enterprise users, achieving major uptake of security keys among ordinary user populations will require that those keys be provided for free in some manner. Pricing them down to only a few dollars would help, but my gut feeling is that vast numbers of users wouldn’t pay for them at any price, perhaps often because they don’t want to set up payment methods in the first place.

That problem may be significantly reduced where users are already used to paying and have payment methods already in place — e.g. for the Android Play Store. 

But even there, $20 — even $10 — is likely to be a very tough sell for a piece of hardware that most users simply don’t really believe that they need. And if they feel that this purchase is being “pushed” at them as a hard sell, the likely result will be resentment and all that follows from that.

On the other hand, if security keys were free, methodologies such as:

How to “Bribe” Our Way to Better Account Security: https://lauren.vortex.com/2018/02/11/how-to-bribe-our-way-to-better-account-security

might be combined with those free keys to dramatically increase the use of high quality 2sv by all manner of users — including techies and non-techies — which of course should be our ultimate goal in these security contexts.

Who knows? It just might work!

Be seeing you.

–Lauren–

Censored Google Search for China Would Be Both Evil and Dangerous!

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

UPDATE (August 3, 2018): Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 2, 2018): New reports claim that Google is also now working on a news app for China, that would similarly be designed to enable censoring by Chinese authorities. Google has reportedly replied to queries about this with the same non-denial generic statement noted below.

– – –

A report is circulating widely today — apparently based on documents leaked from Google — suggesting that Google is secretly working on a search engine interface (probably initially an Android app) for China that would — by design — be heavily censored by the totalitarian Chinese government. Want to look at a Wikipedia page? Forget it! Search for human rights? No go, and the police are already at your door to drag you off to a secret “re-education” center.

Google has so far not denied the reports, and today has discouragingly only issued generic “we don’t comment on speculation regarding future plans” statements. Ironically, this is all occurring at the same time that Google has been increasing its efforts to promote honest journalism, and to fight against fake news that can sometimes pollute search results.

There’s no way to say this gently or diplomatically: Any move by Google to provide government censored search services to China would not only be evil, but also incredibly dangerous to the entire planet.

The Chinese are wonderful people, but their government is an absolute dictatorship — now with a likely president for life — whose abuse of its own citizens and hacking attempts against the rest of the world have been increasing over recent years. Not getting better, getting far, far worse.

Information control and censorship is at the heart of China’s human rights abuses that include a vast network of secret prisons and undocumented mass executions. Say the wrong thing. Try to look at the wrong webpage. You can just vanish, never to be seen again.

The key to how the Chinese tyrants control their population is the government’s incredibly massive Internet censorship regime, which carefully tailors the information that the Chinese population can see, creating a false view of the world among its citizens — incredibly dangerous for a country that has a vast military and expansionist goals.

Anybody — any firm — that voluntarily participates in the Chinese censorship regime becomes an equal partner in the Chinese government’s evil, no matter what benign justifications or explanations are offered.

If this all sounds a bit familiar, it’s because we’ve been over this road with Google before. Back in 2006, I happened to be giving a talk at Google’s L.A. offices the same day that Google announced its original partnership with the Chinese government to provide a censored version of Google. My relevant comments about that are here: 

https://www.youtube.com/watch?v=PGoSpmv9ZVc&feature=youtu.be&t=1448

Related discussion followed later that same year, including:

“Google, China, and Ethics” – https://lauren.vortex.com/archive/000180.html

And then in 2010 when Google wisely terminated their participation in the oppressive Chinese censorship regime:

Bulletin: Google Will No Longer Censor Chinese Search Results — May End China Operations – https://lauren.vortex.com/archive/000667.html

In the ensuing eight years, much has changed with China. They’re even more of a technological powerhouse now, and they’re even more dictatorial and censorship-driven than before. 

All the fears about censored Google search for China that we had back in 2006, including a vast slippery slope of additional dangers to innocent persons both inside and outside of China, are still in force — only now magnified by orders of magnitude.

It obviously must be painful for Google to sit by and watch their less ethical competitors cozy up to Chinese human rights abusing leaders, as those firms suckle at the teats of the Chinese government and its money. 

And in fact, Google has already made some recent inroads with China — with a few harmless apps and shared AI research — all efforts that I generally support in the name of global progress.

But search is different. Very different. Search is how we learn about how the world really works. It’s how we separate reality from lies, how we put our lives and our countries in context with the entire Earth that we all must share. The censorship of search is a true Orwellian terror, since it not only helps to hide accurate information, but by extension promotes the dissemination of false information as well.

It’s bad enough that the European Union forces Google (via the “Right To Be Forgotten”) to remove valid and accurate search results pointing to information that some Europeans find to be personally inconvenient. 

But if reports are correct that Google plans to voluntarily ally itself with Chinese dictators and their wholesale censorship of entire vast categories of crucial information — inevitably in the furtherance of those leaders’ continuing tyrannies — then Google will not only have gone directly and catastrophically against its most fundamental purposes and ideals, but will have set the stage for similar demands for vast Google-enabled mass censorship from other countries around the world.

I’m sorry, but that’s just not the Google that I know and respect.

–Lauren–