Unacceptable: How Google Undermines User Trust by Blocking Users from Their Own Data

UPDATE (November 18, 2016): After much public outcry, Google has now reversed the specific Pixel-related Google account bans noted in this post. Unfortunately, the overall “Whose data is it?” problem discussed in this post persists, and it’s long past time for Google to appropriately address this issue, which continues to undermine public trust in a fine company.

– – –

There are times when Google is in the right. There are times when Google is in the wrong. More often than not, they’re on the angels’ side of an issue. But there’s one area where they’ve had consistent problems dating back years: cutting off users from those users’ own data when there’s a dispute regarding Google account status.

A new example of this recurring problem — an issue about which I’ve heard from large numbers of Google users over time — has just surfaced. In this case, it involves the reselling of Google Pixel phones in a manner that apparently violates the Google Terms of Service, with the result that a couple of hundred users have reportedly been locked out of their Google accounts and all of their data, at least for now. 

This means that they’re cut off from everything they’ve entrusted to Google — mail, documents, photos, the works.

Here and now, I’m not going to delve into the specifics of this case — I don’t know enough of the details yet. The entire area of Google account suspension, closure, recovery, and so on is complex, to say the least. Most times (but not always) users are indeed at fault — one way or another — when these kinds of events are triggered. And the difficulty of successfully appealing a Google account suspension or closure has become rather legendary.

Even recovering a Google account after the loss of a password can be difficult if you haven’t taken proactive steps to aid in that process ahead of time — steps that I’ve previously discussed in detail.

But the problem of what happens to users’ data when they can’t access their accounts — for whatever reasons — is something that I’ve personally been arguing with Google about literally for years, without making much headway at all.

Google has excellent mechanisms for users to download their data while they still have account access. Google even has systems for you to specify someone else who would have access to your account in case of emergency (such as illness or accident), and policies for dealing with access to accounts in case of the death of an account holder.

The reality, though, is that users have been taught to trust Google with ever more data that is critical to their lives, and most people don’t think about downloading that data proactively.

So when something goes wrong with their account, and they lose access to all of that data, it’s like getting hit with a ton of bricks.

Again, this is not to say that users aren’t often — in fact usually — in the wrong (at least in some respect) when it comes to account problems. 

But unless there is a serious — and I mean serious — criminal violation of law (child pornography, for example), my view is that in most circumstances users should have some means to download their data from their account, even if it has been suspended or closed for good cause.

If they can’t use Google services again afterwards, them’s the breaks. But it’s still their own data we’re talking about, not Google’s.

Google has been incredibly resistant to altering this aspect of their approach to user account problems. I am not ignorant of their general reasoning in this category of cases — but I strongly believe that they are wrong with their essentially one-size-fits-all “death penalty” regime in this context.

Nobody is arguing that there aren’t some specific situations where blocking a violating user (or a user accused of violations) from accessing their data on Google services is indeed justified. But Google doesn’t seem to have any provisions for anything less than total data cutoff when there’s a user account access issue, even when significant relevant legal concerns are not involved.

This continuing attitude by Google does not engender user trust in Google’s stewardship of user data, even though most users will never run afoul of this problem.

These kinds of actions by Google provide ammunition to the Google Haters and are self-damaging to a great firm and the reputation of Googlers everywhere, some of whom have related to me their embarrassment at trying to explain such stories to their own friends and families.

Google must do better when it comes to this category of user account issues. And I’ll keep arguing this point until I’m blue in the face and my typing fingertips are bruised. C’mon Google, please give me a break!

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Crushing the Internet Liars

Frankly, this isn’t the post that I had originally intended. I had a nearly completed blog draft spinning away happily on a disk, a draft that presented a rather sedate, scholarly, and somewhat introspective discussion of how Internet-based communications evolved to reach the crisis point we now see regarding misinformation, filter bubbles, and so-called echo chambers in search and social media.

I just trashed that draft. Bye!

Numerous times over the years I’ve tried the scholarly approach in various postings regarding the double-edged sword of Internet personalization systems — capable of bringing both significant benefits to users but also carrying significant and growing risks.

Well, given where we stand today after the 2016 presidential election, it appears that I might just as well have been doing almost anything else rather than bothering to write that stuff. Toenail trimming would likely have been a more fruitful use of my time.

So now — today — we must deal with this situation while various parties are hell-bent on turning the Internet into a massive, lying propaganda machine to subvert not only the democratic process, but our very sense of reality itself.

Much of this can be blamed on the concept of “false equivalency” — which runs rampant on cable news, on mainstream Internet news sites, and throughout social media ecosystems such as Facebook (which is taking the brunt of the criticism now — and rightly so).

Fundamentally, this idea holds that even if there is widespread agreement that a particular concept is fact, you are somehow required to give “equal time” to wacko opposing views.

This is why you see so much garbage prominently surfaced from nutcases like Alex Jones — who believes the U.S. government blew up the World Trade Center buildings — or Donald Trump and his insane birther attacks on Obama, that Trump used to jump-start his presidential campaign. It doesn’t take more than half a brain to know that such statements are hogwash.

To be sure, it’s difficult to really know whether such perpetually lying creatures actually believe what they’re saying — or are simply saying outrageous things as leverage for publicity. In the final analysis though, it doesn’t much matter what their motives really are, since the damage done publicly is largely equivalent either way.

The same can be said for the wide variety of fake postings and fake news sites that increasingly pollute the Net. Do they believe what they say, or are they simply churning out lies on a twisted “the end justifies the means” basis? Or are they just sick individuals promoting hate (often racism and antisemitism) and chaos? No doubt all of the above apply somewhere across the rapidly growing range of offenders, some of whom are domestic in nature, and some of whom are obviously operating under the orders of foreign leaders such as Russia’s Putin.

Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.

And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see the follow-ups that expose the lying posts as saw the original lies themselves.

Facebook’s Mark Zuckerberg is publicly denying that Facebook has a significant role in the promotion of lies. He denies that Facebook’s algorithms for controlling which postings users see create echo chambers where users see only what they already believe, allowing lies and distortions to spread ever more widely without truth having a chance to invade those chambers. But Facebook’s own research tells a very different story, because Facebook insists that exactly those kinds of controlling effects occur — to the benefit of Facebook’s advertisers.

Yet this certainly isn’t just a Facebook problem. It covers the gamut of social media and search.

And the status quo can no longer be tolerated.

So where do we go from here?

Censorship is not a solution, of course. Even the looniest of lies, assuming the associated postings are not actually violating laws, should not be banned from visibility.

But there is a world of difference between a lying post merely existing, and the actual widespread promotion of those lies by search and social media.

That is, the mere fact that a lying post is being viewed by many users is no excuse for firms’ algorithms to promote such a post to featured or other highly visible status, creating a false equivalency of legitimacy by presenting such lies in close proximity to actual facts.

This problem becomes particularly insidious when combined with personalization filter bubbles, because the true facts are prevented from penetrating users’ hermetically sealed social media worlds that have filled with false postings.

And it gets worse. Mainstream media in a 24/7 news cycle is hungry for news, and all too often now, the lies that germinate in those filter bubbles are picked up by conventional media and mainstream news sites as if they were actual true facts. And given the twin realities of reduced budgets and the race to beat other venues to the punch, such lies frequently are broadcast by such sites without any significant prior fact-checking at all.

So little by little, our sense of what is actually real — the differences between truth and lies — becomes distorted and diluted.

Again, censorship is not the answer.

My view is that more information — not less information — is the path toward reducing the severity of these problems.

Outright lies must not continue to be given the same untarnished prominence as truth in search results and in widely seen social media postings.

There are multiple ways to achieve this result.

Lying sites in search results can be visibly and prominently tagged as such in those results, be downranked, or both. Similar principles can apply to widely shared social media posts that currently are featured and promoted primarily by virtue of the number of persons already viewing them. Because — let’s face it — people love to view trash. Lots of users viewing and sharing a post does not make it any less of a lie.
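To make the tagging-plus-downranking idea concrete, here is a minimal illustrative sketch in Python: a result’s relevance score is discounted by an independent credibility score, so flagged sites sink in the rankings (and get labeled) without being removed. Everything here — the scores, the penalty weight, the flagging threshold — is a hypothetical assumption for illustration; no real search engine works this simply.

```python
# Illustrative sketch: downranking flagged sites without censoring them.
# All scores, weights, and thresholds are hypothetical.

def rerank(results, credibility, penalty_weight=0.5):
    """Re-order results by relevance, discounted by a credibility penalty.

    results: list of (url, relevance) with relevance in [0, 1]
    credibility: dict mapping url -> credibility in [0, 1]
        (1.0 = well-established source, 0.0 = known misinformation site)
    Returns (url, relevance, flagged) tuples in display order.
    """
    def adjusted(item):
        url, relevance = item
        cred = credibility.get(url, 0.5)  # unknown sites get a neutral score
        return relevance - penalty_weight * (1.0 - cred)

    ranked = sorted(results, key=adjusted, reverse=True)
    # Low-credibility sites remain visible, just labeled and lower down.
    return [(url, rel, credibility.get(url, 0.5) < 0.3) for url, rel in ranked]

results = [("https://example-news.example", 0.9),
           ("https://hoax-site.example", 0.95)]
credibility = {"https://example-news.example": 0.9,
               "https://hoax-site.example": 0.1}
for url, rel, flagged in rerank(results, credibility):
    print(url, "FLAGGED" if flagged else "")
```

Note that in this sketch the hoax site starts with the *higher* raw relevance (popularity often correlates with sensationalism), yet ends up below the credible site — labeled, not erased.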

As always, the devil is in the details.

This will be an enormously complex undertaking, involving technology, policy, public relations, and the law. I won’t even begin to delve into the details of all this here, but I believe that with sufficient effort — effort that we must now put forth — this is a doable concept.

Already, whenever such concepts are brought up, you quickly tend to hear the refrain: “Who are you to say what’s a fact and what’s a lie?”

To which I reply: “To hell with false equivalences!”

Man landed on the moon. Obama was born in Hawaii. Terrorists destroyed the World Trade Center with jet aircraft. Hillary Clinton never said she was going to abolish the Second Amendment. Donald Trump did say that he supported the war in Iraq. Denzel Washington did not say that he supported Trump. On and on and on.

There is a virtually endless array of stated facts that reasonable people will agree are correct. And if the nutcases want to promote their own twisted views on these subjects, that’s also fine — but those postings should be clearly labeled for what they are, not featured and promoted. As the saying goes, they’re free to have their own opinions, but not their own facts.

Obviously, this leaves an enormous range of genuinely disputed issues where the facts are not necessarily clear, often where only opinion and/or philosophy really apply. That’s fine too — such issues are out of scope for these discussions and efforts.

But the outright Internet liars must be crushed. They shouldn’t be censored, but they must no longer be permitted to distort and contaminate reality by being treated on an equal footing with truth by major search and social media firms.

We built the Internet juggernaut. Now it’s our job to fix it where it’s broken.

– – –

Elections and the Internet “Echo Chambers”

Back in a 2010 blog post, I noted the kinds of “echo chamber” effects that can result from personalization and targeting of various types of information on the Web. That particular posting concentrated on search personalization, but also noted the impact on Internet-based discussions, a situation that has become dramatically more acute with the continuing rise of social media. Given the current controversies regarding how “filter bubbles” and other algorithmically-driven information surfacing and restriction systems may impact users’ views and potentially increase political and religious radicalization — particularly in relation to the 2016 elections here in the USA — I believe it is relevant to republish that posting today, which is included below. Also of potential interest is my recently reposted item related to Internet fact checking.

Search Personalization: Blessing and Trap?
(Original posting date: September 16, 2010)

Greetings. Arguably the holy grail of search technology — and of many other aspects of Internet-based services today — is personalization. Providing users with personalized search suggestions, search results, news items, or other personalized services as quickly as possible, while filtering out “undesired” information, is a key focus not only of Google but of other enterprises around the world.

But does too much reliance on personalization create an “echo chamber” effect, where individuals are mainly (or perhaps totally) exposed to information that only fits their predetermined views? And if so, is this necessarily always beneficial to those individuals? What about for society at large?

Diversity of opinions and information is extremely important, especially today in our globally interconnected environment. When I do interviews on mainstream radio programs about Internet issues, it’s usually on programs where the overall focus is much more conservative than my own personal attitudes. Yet I’ve found that even though there’s often a discordance between the preexisting views of most listeners and my own sentiments, I typically get more insightful questions during those shows than in the venues where I spend most of my time online.

And one of the most frequent questions I get afterwards from listeners contacting me by email is: “How come nobody explained this to me that way before?”

The answer usually is that personalized and other limited focus information sources (including some television news networks) never exposed those persons to other viewpoints that might have helped them fully understand the issues of interest.

An important aspect of search technology research should include additional concentration on finding ways to avoid potential negative impacts from personalized information sources — particularly when these have the collateral effect of “shutting out” viewpoints, concepts, and results that would be of benefit both to individuals and to society.

Overall, I believe that this is somewhat less of a concern with “direct” general topic searches per se, at least when viewed as distinct from search suggestions. But as suggestions and results become increasingly commingled, this aspect also becomes increasingly complex. (I’ve previously noted my initial concerns in this respect related to the newly deployed Google Instant system).

Suggestions would seem to be an area where “personalization funneling” (I may be coining a phrase with this one) would be of more concern. And in the world of news searches as opposed to general searches, there are particularly salient related issues to consider. (Thought experiment: if you get all of your information from FOX News, what important facts and contexts are you probably missing?)

While there are certainly many people who (for professional or personal reasons) make a point to find and cultivate varied and opposing opinions, not doing so becomes much easier — and seemingly more “natural” — in the Internet environment. At least the possibility of serendipitous exposure to conflicting points of view was always present when reading a general audience newspaper or magazine, for example. But you can configure many Web sites and feeds to eliminate all but the narrowest of opinions, and some personalization tools are specifically designed to enhance this effect.

As our search and related tools increasingly focus on predicting what we want to see and avoiding showing us anything else (which naturally enough makes sense if you want to encourage return visits and show the most “attractive” ads to any given individual), the funneling effect of crowding out other materials of potential value appears to be ever more pronounced.

Add to that the “preaching to the choir” effect in many Internet discussions. True, there are forums with vibrant exchanges of views and conflicting opinions. But note how much of our Twitter and Buzz feeds are depressingly dominated by a chorus of “Attaboy!” yells from “birds of a feather” like-minded participants.

I am increasingly concerned that technologically-based Internet personalization — despite its many extremely positive attributes — also carries with it the potential for significant risks that are apparently not currently receiving the research and policy attention that they deserve.

If we do choose to assign some serious thinking to this dilemma, we certainly have the technological means to adjust our race toward personalization in ways that would help to balance out the equation.

This definitely does not mean giving up the benefits of personalization. However, we can choose to devote some of the brainpower currently focused on figuring out what we want to see, and work also toward algorithms that can help determine what we need to see.
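One hedged sketch of what a “need to see” algorithm might look like: when assembling a feed, discount each candidate item’s predicted-interest score by how many already-selected items cover the same topic, so material from outside the reader’s usual silo has a chance to surface. The interest scores and topic labels below are invented for illustration — this is a toy in the spirit of maximal-marginal-relevance selection, not any actual product’s code.

```python
# Sketch of diversity-aware feed selection: trade off predicted user
# interest against topical redundancy. All data here is hypothetical.

def diversify(candidates, k=3, diversity_weight=0.4):
    """Select k items greedily, discounting each candidate by how many
    already-selected items share its topic.

    candidates: list of (title, topic, interest) with interest in [0, 1]
    Returns the selected titles in display order.
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            _, topic, interest = item
            repeats = sum(1 for _, t, _ in selected if t == topic)
            return interest - diversity_weight * repeats
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return [title for title, _, _ in selected]

feed = [("Story A", "politics", 0.9),
        ("Story B", "politics", 0.85),
        ("Story C", "science", 0.6),
        ("Story D", "politics", 0.8)]
print(diversify(feed))  # a lower-interest science story displaces a third politics story
```

A pure interest ranking would fill all three slots with politics; the redundancy penalty lets the science story through — a crude stand-in for “what we need to see” alongside “what we want to see.”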

In the process, this may significantly encourage society’s broader goals of cooperation and consensus, which of necessity require — to some extent at least — that we don’t live our entire lives in confining information silos, ironically even while we’re surrounded by the Internet’s vast buffet of every possible point of view.

– – –

Why Google Home Will Change the World


Much has recently been written about Google Home, the little vase-like cylinder that started landing in consumers’ hands only a week or so ago. Home’s mandate sounds simple enough in theory — listen to a room for commands or queries, then respond by voice and/or with appropriate actions.

What hasn’t been much discussed, however, is how the Home ecosystem is going to change the lives of millions to billions of people for the better over time, in ways that most of us couldn’t even imagine today. It will drastically improve the lives of vast numbers of persons with visual and/or motor impairments, but ultimately will dramatically and positively affect the lives of everyone else as well.

Home isn’t the first device in this technology segment — nor is it the least expensive. Amazon came earlier with a more limited version that is cheaper than Home (and a model more expensive than Home as well).

But while Amazon’s device seems to have been designed with buying stuff on Amazon as its primary functionality, Google’s Home — backed by Google’s enormously more capable corpus of information, accurate speech recognition, and AI capabilities — stands to quickly evolve to far outpace Amazon’s offering along all vectors.

This is true even if we leave aside the six-month free subscription to Google’s excellent ad-free “YouTube Red/Google Play Music” package — which Google included with my Home shipment here in the USA — knowing that once you’ve tasted the ability to play essentially any music and any YouTube video at any time just by speaking to the air, you’ll have a difficult time living without it. I’ve had Home for a week and I’m finally listening to great music of all genres again — I know that I’ll be subscribing when my free term runs out.

You can dig around a bit and easily find a multitude of reviews that discuss the specifics of what Home does and how you use it, so I’m not going to spend time on that here — other than to note that, as with much advanced technology that is simple to operate, the devilishly complex hardware and software design underneath won’t be suspected or understood by most users, nor do they typically need to understand it.

But what I’d like to ponder here is why this kind of technology is so revolutionary and why it will change our world.

Throughout human history, pretty much any time you wanted information, you had to physically go to it in one way or another. Dig out the scroll. Locate the book. Sit down at the computer. Grab the smartphone.

The Google Home ecosystem is a sea change. It’s fundamentally different in a way that is much more of a giant leap than the incremental steps we usually experience with technology.

Because for the first time in most of our experiences, rather than having to go to the information, the information is all around us, in a remarkably ambient kind of way.

Whether you’re sitting at a desk at noon or in bed sleepless in the middle of the night, you have but to verbally express your query or command, and the answers, the results, are immediately rendered back to you. (Actually, you first speak the “hotword” — currently either “Hey Google” or “OK Google” — followed by your command or query. Home listens locally for the hotword and only sends your following utterance up to Google for analysis when the hotword triggers — which is also indicated by lights on the Home unit itself. There’s also a switch on the back of the device that will disable the microphone completely.)
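The local hotword gating just described can be pictured as a simple loop. To be clear, the sketch below is a conceptual illustration of the privacy model — local matching, upload to the cloud only after a hotword, a hardware mute switch that stops everything — and every function and name in it is a made-up placeholder, not Google’s actual implementation or API.

```python
# Conceptual sketch of the hotword-gating behavior described above.
# All names here are hypothetical placeholders, not a real device API.

HOTWORDS = ("hey google", "ok google")

def assistant_loop(mic_enabled, audio_frames):
    """Process audio locally; an utterance leaves the device only after a
    hotword match. Returns the list of utterances sent to the cloud.

    mic_enabled: False models the hardware mute switch on the device
    audio_frames: locally recognized chunks of speech (simplified as text)
    """
    sent = []
    if not mic_enabled:         # mute switch: nothing is processed at all
        return sent
    awaiting_command = False
    for frame in audio_frames:
        if awaiting_command:
            sent.append(frame)  # only this utterance goes to the cloud
            awaiting_command = False
        elif frame.lower() in HOTWORDS:
            awaiting_command = True  # (the device's lights would turn on here)
    return sent

frames = ["random chatter", "ok google", "play some jazz", "more chatter"]
print(assistant_loop(True, frames))   # only "play some jazz" is sent
print(assistant_loop(False, frames))  # mic muted: nothing is sent
```

The key design point the sketch captures: everything before and after the triggered utterance stays on the device, and the mute switch short-circuits the whole pipeline.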

It’s difficult to really express how different this is from every other technology-based information experience. In a matter of hours of usage, one quickly begins to think of Home as a kind of friendly ethereal entity at your command, utterly passive until invoked. It becomes very natural to use — the rapid speed of adaptation to using Home is perhaps not so remarkable when you consider that speech is the human animal’s primary evolved mode of communications. Speech works with other humans, to some extent with our pets and other animals — and it definitely works with Google Home.

Most of the kinds of commands and queries that you can give to Home can also be given to your smartphone running Google’s services — in fact they both basically access the same underlying “Google Assistant” systems.

But when (for example) information and music are available at any time, at the spur of the moment, for any need or whim — just by speaking wherever you happen to be in a room and no matter the time of day — it’s really an utterly different emotional effect.

And it’s an experience that can easily make one realize that the promised 21st century really has now arrived, even if we still don’t have the flying cars.

The sense of science fiction come to life is palpable.

The Google teams who created this tech have made no secret of the fact that the computers of “Star Trek” have been one of their key inspirations.

There are various even earlier scifi examples as well, such as the so-called “City Fathers” computers in James Blish’s “Cities in Flight” novels. 

It’s obvious how Google Home technology can assist the blind, persons with other visual impairments, and a wide variety of individuals with mobility restrictions.

Home’s utility in the face of simple aging (and let’s face it, we’re all either aging or dead) is also immense. As I noted back in As We Age, Smartphones Don’t Make Us Stupid — They’re Our Saviors, portable information aids can be of great value as we get older.

But Home’s “always available” nature takes this to an entirely new and higher level.

The time will come when new homes will be built with such systems designed directly into their walls, and when people may feel a bit naked in locations where such capabilities are not available. And in fact, in the future this may be the only way that we’ll be able to cope with the flood of new and often complex information that is becoming ever more present in our daily lives.

Perhaps most telling of all is the fact that these systems — as highly capable as they are right now — are only at the bare beginnings of their evolution, an evolution that will reshape the very nature of the relationship between mankind and access to information.

If you’re interested in learning more about all this, you’re invited to join my related Google+ Community which is covering a wide range of associated topics.

Indeed — we really are living in the 21st century!
