Does Google Hate Old People?

Originally posted February 11, 2016. Reposted today after a weekend of struggling to support a variety of older and not-so-old users via Chrome Remote Desktop.

– – –

No. Google doesn’t hate old people. I know Google well enough to be pretty damned sure about that.

Is Google “indifferent” to old people? Does Google simply not appreciate, or somehow devalue, the needs of older users?

Those are much tougher calls.

I’ve written a lot in the past about accessibility and user interfaces. And today I’m feeling pretty frustrated about these topics. So if some sort of noxious green fluid starts to bubble out from your screen, I apologize in advance.

What is old, anyway? Or we can use the currently more popular term “elderly” if you prefer — six of one, half a dozen of the other, really.

There are a bunch of references to “not wanting to get old” in the lyrics of famous rock stars who are now themselves of rather advanced ages. And we hear all the time that “50 is the new 30” or “70 is the new 50” or … whatever.

The bottom line is that we either age or die.

And the popular view of “elderly” people sitting around staring at the walls — and so rather easily ignored — is increasingly a false one. More and more we find active users of computers and Internet services well into their 80s and 90s. In email and social media, many of them are clearly far more intelligent and coherent than large swaths of users a third their age.

That’s not to say these older users don’t have issues to deal with that younger persons don’t. Vision and motor skill problems are common. So is the specter of memory loss (which for most of us actually begins by the time we reach age 20, and increases from that point onward).

Yet an irony is that computers and Internet services can serve as aids in all these areas. I’ve written in the past of mobile phones being saviors as we age, for example by providing an instantly available form of extended memory.

But we also are forced to acknowledge that most Internet services still serve older persons’ needs only begrudgingly, failing to fully comprehend how changing demographics are pushing an ever larger proportion of their total users into that category — both here in the U.S. and in many other countries.

So it’s painful to see Google dropping the ball in some of these areas (and to be clear, while I have the most experience with the Google aspects of these problems, these are actually industry-wide issues, by no means restricted to Google).

This is difficult to put succinctly. Over time these concerns have intertwined and combined in ways increasingly cumbersome to tease apart with precision. But if you’ve ever tried to provide computer/Internet technical support to an older friend or relative, you’ll probably recognize this picture pretty quickly.

I’m no spring chicken myself. But I remotely provide tech support to a number of persons significantly older — some in their 80s, and more than one well into their 90s.

And while I bitch about poor font contrast and wasted screen real estate, the technical problems of those older users are typically of a far more complex nature.

They have even more trouble with those fonts. They have motor skill issues making the use of common user interfaces difficult or in some cases impossible. Desktop interfaces that seem to be an afterthought of popular “mobile first” interface designs can be especially cumbersome for them. They can forget their passwords and be unable to follow recovery procedures successfully, often creating enormous frustration and even more complications when they try to solve the problems by themselves. The level of technical lingo thrown at them in many such instances — that services seem to assume everyone just knows — only frustrates them more. And so on.

But access to the Net is absolutely crucial for so many of these older users. It’s not just accessing financial and utility sites that pretty much everyone now depends upon, it’s staying active and in touch with friends and relatives and others, especially if they’re not physically nearby and their own mobility is limited.

Keeping that connectivity going for these users can involve a number of compromises that we can all agree are not in keeping with ideal or “pure” security practices, but are realistic necessities in some cases nonetheless.

So it’s often a fact of life that elderly users will use their “trusted support” person as the custodian of their recovery and two-factor addresses, and of their primary login credentials as well.

And to those readers who scream, “No! You must never, ever share your login credentials with anyone!” — I wish you luck supporting a 93-year-old user across the country without those credentials. Perhaps you’re a god with such skills. I’m not.

Because I’ve written about this kind of stuff so frequently, you may by now be suspecting that a particular incident has fired me off today.

You’d be correct. I’ve been arguing publicly with a Google program manager and some others on a Chrome bug thread, regarding the lack of persistent connection capability for Chromebooks and Chromeboxes in the otherwise excellent Chrome Remote Desktop system — a feature that the Windows version of CRD has long possessed.

Painfully, from my perspective the conversation has rapidly degenerated into my arguing against the notion that “it’s better to flush some users down the toilet than violate principles of security purity.”

I prefer to assume that the arrogance suggested by the “security purity” view is one based on ignorance and lack of experience with users in need, rather than any inherent hatred of the elderly.

In fact, getting back to the title of this posting, I’m sure hatred isn’t in play.

But of course whether it’s hatred or ignorance — or something else entirely — doesn’t help these users.

The Chrome OS situation is particularly ironic for me, since these are older users whom I specifically urged to move to Chrome OS when their Windows systems were failing, while assuring them that Chrome OS would be a more convenient and stable experience for them.

Unfortunately, these apparently intentional limitations in the Chrome version of CRD — vis-a-vis the Windows version — have been a source of unending frustration for these users, as they often struggle to find, enable, and execute the Chrome version manually every time they need help from me, and then are understandably upset that they have to sit there and refresh the connection manually every 10 minutes to keep it going. They keep asking me why I told them to leave Windows and why I can’t fix these access problems that are so confusing to them. It’s personally embarrassing to me.

Here’s arguably the saddest part of all. If I were the average user who didn’t have a clue how Google’s internal culture works or what great people Googlers are, it would be easy to just mumble something like, “What do you expect? All those big companies are the same, they just don’t care.”

But that isn’t the Google I know, and so it’s even more frustrating to me to see these unnecessary problems continuing to fester in the Google ecosystem, when I know for a certainty that Google has the capability and resources to do so much better in these areas.

And that’s the truth.

–Lauren–

Here’s Where Google Hid the SSL Certificate Information That You May Need

UPDATE (December 2, 2017): Easy Access to SSL Certificate Information Is Returning to Google’s Chrome Browser

– – –

Google has a great security team, so it’s something of a head-scratcher when they misfire. Or should we be wondering if the Chrome user interface crew had enough coffee lately?

Either way, Google Chrome users have been contacting me wondering why they could no longer access the detailed status of Chrome https: connections, or view the organization and other data associated with SSL certificates for those connections.

Until now in the stable version of Chrome, you simply clicked the little green padlock icon on an https: connection, clicked on the “Details” link that appeared, and a panel then opened that gave you that status, along with an obvious button to click for viewing the actual certificate data such as Organization, issuance and expiration dates, etc.

Suddenly, that “Details” link no longer is present. Seemingly, Google just doesn’t feel that “ordinary” users need to look at that data these days.

I beg to differ. I’ve frequently trained “ordinary” users to check that information when they question the authenticity of an https: connection — after all, crooks can get SSL certificates too, so verifying the certificate issuance-related data often makes sense.

Well, it turns out that you can still get this information from Chrome, but apparently Google now assumes that folks are so clairvoyant that they can figure out how to do this through the process of osmosis — or something.

The full certificate data is available from the “Developer tools” panel under the “Security” label. In fact, that’s where this info has been for quite some time, but since the now missing “Details” link took you directly to that panel, most users probably didn’t even realize that they were deep in the Developer tools section of the browser.

To get the certificate data now, here’s what you need to do. 

First, get into Developer tools. You can do this via Chrome’s upper-right three vertical dots, then click “More tools” — then “Developer tools” — or on many systems you can just press the F12 key.

But wait, there’s still more (yeah, Google took a simple click in an intuitive place and replaced it with a bunch of clicks scattered around).

Once the panel opens, look up at its top. If you don’t see the word “Security” already, click on the “>>” to the right of “Console” — then look down the list that appears and click on “Security” — which will open the Security panel with all of the certificate-related goodies. When you’re done there, click the big “X” in the upper right of the panel to return to normal browser operations.
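For what it’s worth, technically inclined readers don’t need the browser for this at all. Here’s a minimal sketch using Python’s standard ssl module (the hostname is just an example) that pulls the same issuance and expiration data directly:

    import socket
    import ssl

    hostname = "www.example.com"  # the site whose certificate you want to inspect

    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # the validated certificate, as a dict

    print("Subject:", cert["subject"])  # organization and common name
    print("Issuer:", cert["issuer"])    # who issued the certificate
    print("Valid:", cert["notBefore"], "to", cert["notAfter"])

Of course, that’s no help at all to the ordinary users we’re talking about, which is rather the point.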

And don’t feel too badly if you didn’t figure all of this out for yourself. Even Houdini might have had problems with this one.

–Lauren–

The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes

I hate writing blog posts like this. I really do. I’m a big fan of Google. They’ve got many of the most skilled and caring employees in tech. Unfortunately, they’re not immune to being caught up in abysmal industry trends, so I’m forced to write another “Here we go again …” piece. Sigh.

I’ve been using Google Voice since pretty much the day it launched. Over the years since then I’ve come to depend upon it for both my personal and business phone calls, inbound and outbound. Google Voice has been extremely functional, utterly reliable, and a godsend for people like me who must deal with complex mixes of cellular and landline phones, lots of inbound spam calls to burn, and who need this level of call management to help free up the time necessary for making inflammatory Google+ posts. That Google Voice is free for all domestic calls is a bonus, but I’d willingly pay reasonable fees to use it.

The Google Voice (henceforth “GV”) desktop/web interface has been very stable for something like five years now. In one sense that’s a good thing. It works well, it accomplishes its purpose. Excellent.

On the other hand, if you know Google, you know that when one of their products doesn’t seem to be updated much, it might be time to start being afraid. Very afraid. Because Google products that seem “too” stable may be on the path to decimation and death.

Let’s face it, an ongoing problem in the Internet world is that skilled software engineers by and large aren’t enthusiastic about maintaining what are seen to be “old” products. It’s not considered conducive to climbing the promotion ladder at most firms — the “sexy” new stuff is where the bigger bucks are perceived to reside.

So as desktop GV continued along its stable path, many observers began to wonder if Google was preparing to pull its plug. I’ve had those concerns too, though somewhat mitigated by the fact that Google has been integrating aspects of GV into some of their other newer products, which suggested that GV still had significant life ahead.

This was confirmed recently when word started to circulate of a new version (“refresh” is another term used for this) of GV that was soon to roll out to users. Google eventually confirmed this. Indeed, it’s rolling out right now.

And for desktop users at least, it’s a nightmare. A nightmare that in fact I was expecting. I had hoped I’d be wrong. Unfortunately, I was correct.

I probably don’t even really need to describe the details, because you’ve likely seen this happen to other Google products of late (including recently Google Wallet, though the impact of the GV changes is orders of magnitude worse for users who need to interact with GV frequently throughout the day).

Once again, Google is on the march to treat large desktop displays as if they were small smartphone screens.

Legacy GV made excellent use of screen space — making it easy to see all call details, full voicemail transcriptions, and everything else you needed — all in clear and easy to read fonts.

The new GV is another wasted-space, low-contrast slap in the face of desktop users, especially those with less than perfect vision (whether due to naturally aging eyes or any other reason).

Massive amounts of unused white space. Call histories squished into a little smartphone-style column (no way to increase its size that I could find so far), causing visible voicemail transcriptions to be truncated to just a few words. Plus we’re “treated” to the new Google standard low contrast “if you don’t have perfect vision we don’t care about you” fonts, which disrupt the entire user interface when you try to zoom them up.

And so on. Need I say more? You already know the drill.

There is one saving grace in the new desktop GV. For the moment, there’s a link that takes you back to legacy GV. In fact, after reverting one of my accounts that way, I didn’t even see an obvious way to get back to the new GV interface. In any case, we can safely assume that the legacy access is only temporary.

Compared to legacy desktop GV that worked great, the new GV is another painful sign that Google just doesn’t care about users who don’t live 100% of the time on smartphones and/or have perfect vision. Yet this maligned demographic is rapidly growing in size.

It’s increasingly difficult to not consider the end results of these changes in Google products to be a form of discrimination. I don’t believe that they’re actually intended as discrimination — but the outcomes are the same irrespective of motives. And frankly, my view is that in the long run this is a very dangerous and potentially self-destructive path for Google to be taking.

Nobody would demand that innovation and product improvements must stop. But we are far beyond the point where we should have come to the realization that “one size fits all” user interfaces are simply no longer tenable in these environments, unless you’re willing to simply write off large numbers of users who may not be in your primary target demographic, but still represent many millions of human beings who depend upon you.

Ignoring the needs of these users is not right. It’s not fair. It’s not ethical.

It’s just not Googley. Or at least, it shouldn’t be.

–Lauren–

User Trust Fail: Google Chrome and the Tech Support Scams

I act as the volunteer support structure for a significant number of nontechnical — but quite active — Internet users. Some of these users are elderly, which makes me quite sensitive to where Internet firms are falling down on the job in this context.

Let’s face it, these firms may pay lip service to accessibility and serving all segments of their users, but in reality they typically tend to care very little about users who aren’t in their key sales demographics, and who (while often numbering in the millions or more) aren’t considered to be their “primary” users of interest.

We see this problem across a number of areas (in the past I’ve frequently noted the problems of illegible fonts and poor user interface designs, as my regular readers well know).

But today I’d like to focus on just one, where Google really needs to more aggressively protect their users from some of the most dangerous criminals on the Internet.

I’m referring to the ubiquitous “tech support” scams (often based in India) that terrify users by appearing in their browsers — often the result of a contaminated site link, a “cold” phone call, or very often a mistyped URL — falsely claiming that the user’s computer is infected with malware or somehow broken, that they must click HERE for a fix, or must immediately call THIS 800 number, and BLAH BLAH BLAH.

The vast majority of these follow a common pattern, usually claiming to be a legit tech support firm or often Microsoft itself. 

Once users are pushed into contacting the scammers — who typically focus on Windows computers — the usual pattern is for them to walk the unsuspecting user through the installation of a remote access program, so that the scammer has free rein to suck the user’s credit card and bank accounts dry via a variety of crooked procedures. Their methods are typically tuned especially well to take advantage of elderly, nontechnical users.

It’s not Google’s fault that these criminals exist. However, given Google’s excellent record at detection and blocking of malware, it is beyond puzzling why Google’s Chrome browser is so ineffective at blocking or even warning about these horrific tech support scams when they hit a user’s browser.

These scam pages should not require massive AI power for Google to target.

And critically, it’s difficult to understand why Chrome still permits most of these crooked pages to completely lock up the user’s browser — often making it impossible for the user to close the related tab or browser through most ordinary means that most users could reasonably be expected to know about.

The simplest cure to offer in these situations (especially when you’re trying to help someone on the other side of the country over the phone) is to tell them to reboot (if the user isn’t already so flustered that they’re having trouble doing that) or to power cycle the computer completely (with the non-zero risk of disk issues that can result from sudden shutdowns). 

Even after that, users need to know that they must refuse Chrome’s “helpful” offer of restoring the old tabs after the reset — otherwise they can easily find themselves locked into the offending page yet again!

Chrome is now the world’s most popular browser, and Google’s Chrome team is top-notch. I am confident that they could relatively quickly solve these problems, if they deemed it a priority to do so.

For the sake of helping to protect their users from support scams — even though these users are often in demographic categories that Google doesn’t seem to really care that much about — I urge Google to take immediate steps to make it much more difficult for the tech support criminals to leverage the excellent Chrome browser for evil purposes.

–Lauren–

The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

IETF’s Stunning Announcement: Emergency Transition to IPv7 Is Necessary!

Frostbite Falls, Minn. (NOTAP) In a brief announcement today that stunned Internet users around the world, the Internet Engineering Task Force proclaimed the need for an “emergency” transition to a yet-to-be-designed “IP version 7” protocol, capable of dealing with numeric values up to “a full gazillion at a minimum.”

IETF spokesman David Seville explained why this drastic move was considered necessary when the ongoing transition from IPv4 to IPv6 — the latter with a vast numbering capability — is still far from complete.

“Frankly, we’re just trying to get ahead of the curve, for once in the technology field,” said Mr. Seville. “With the dramatic rise in the number of hate speech and fake news sites around the world — not only originating in the Soviet Uni … I mean, Russia — we can’t risk running out of numbering resources ever again! Everyone deserves to be able to get these numbers, no matter how vile, racist, and sociopathic they may be. We’re already getting complaints regarding software systems that have overflowed available variable ranges simply trying to keep track of Donald Trump’s lies.”

Asked how the IETF planned to finance their outreach regarding this effort, Seville suggested that they were considering buying major ad network impressions on racist fake news sites like Breitbart, where “the most gullible Internet users tend to hang out. If anyone will believe the nonsense we’re peddling, they will!”

In answer to a question regarding the timing of this proposed transition, Seville noted that the IETF planned to follow the GOP’s healthcare leadership style. “We feel that IPv4 and IPv6 should be immediately repealed, and then we can come up with the IPv7 replacement later.” When asked if this might be disruptive to the communications of Internet users around the world, Mr. Seville chuckled “You’re catching on.”

David Seville can be reached directly for more information at his voice phone number: +7 (495) 697-0349.

– – –

–Lauren–

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

My Mock-Up for Labeling Fake News on Google Search

Here is my mock-up of one way to label fake news on Google Search Results Pages, in the style of the Google malware site warnings. The warning label link would go to a help page explaining the methodology of the labeling.

[Mock-up image: a Google search results page showing a malware-style warning label attached to a fake news result]

–Lauren–

Biting the Bullet: It’s Time to Require 2-Factor Verified Logins

For years now, security and privacy professionals — myself included — have been urging the use of 2-factor authentication (aka 2sv, 2-step authentication, 2fa, multiple factor, etc.) systems for logging into Web and other computer-based portals. Regardless of the name, these authentication systems all leverage the same basic principle — to gain access requires “something you know” and “something you have” — broadly defined. (And by the way, the inane and insecure concept of “security questions” doesn’t satisfy the latter category!)

The fundamental point is that these systems require the provision of additional information beyond the traditional username and password pair that have long demonstrated their frail natures as used by most persons.

Even if you don’t engage in notably bad password practices like sharing them among sites or making laughably weak password choices, usernames and passwords alone are incredibly vulnerable to basic phishing attacks that attempt to convince you to enter these credentials into (often very convincing) faked login pages.

The lack of widespread adoption of 2-factor systems has been the gift that keeps on giving to crooks, scam artists, Russian dictators, and a long list of other lowlife scum. The result has been what seems like almost daily reports of system penetrations and data thefts.

Are 2-factor systems foolproof? No. There are a wide range of technologies and methodologies that can be used to implement these systems, and they vary significantly in theoretical and practical security effectiveness. But despite what some critics say, they all share one thing in common — they’re all much better than a bare username and password alone!

Choices for 2-factor systems include text messages, automated voice calls, standalone authentication apps and devices, USB/NFC (e.g. FIDO U2F) crypto keys, and even printable key codes. And more.
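To make the “standalone authentication apps” entry on that list concrete: here’s a minimal sketch, in Python, of the standard time-based one-time password (TOTP, RFC 6238) calculation that such apps typically perform. The secret shown is just an example value, and a real deployment involves far more than this, but the core idea really is this small:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval       # current 30-second time step
        msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # The server and the user's app share the secret; both compute the same
    # short-lived code, which is the "something you have" factor in action.
    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real account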

With all of these choices, why is there comparatively so little uptake of 2-factor systems in the consumer sphere? (In the corporate sphere there has been more, but not nearly enough there either.)

Why don’t most users take advantage of 2-factor systems? There are two primary, interrelated reasons.

First is the psychology of the problem. Most people just don’t believe in their gut that a breach is going to happen to them — they feel it’s always going to be someone else. They just don’t want to “hassle” with anything additional to protect themselves, no matter how frequently we urge the use of 2-factor.

It’s much the same kind of “it won’t be me” reasoning that leads most people to not appropriately back up the data on their home (or often their office) systems.

Of course, once their account is breached or their disk crashes, they suddenly care very deeply about these issues, and people like me get those 3 AM calls where we have to bite our tongues to avoid saying “Well, I told you so.”

However, it would be unfair to blame the users entirely in this context, because — truth be told — many 2-factor implementations suck (that’s a computer science technical term, by the way) and are indeed a genuine hassle to use.

Some require the use of text messages (not everyone has a text-message-capable phone, as the Social Security Administration learned in its recent incompetent, aborted attempt to require 2-factor authentication). Some require that you receive a new authentication token every time you log in (overkill for most ordinary consumers) — rather than remembering that a given device has already been authenticated for a span of time. Some are slow. Some are buggy. Some screw up and lock users out of their accounts.

The bottom line is that a lousy 2-factor system is going to drive users batty.

But that’s not an excuse, because it is possible to do 2-factor in a correct and user-friendly manner, with appropriate choices for consumer and business/organization requirements.
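For instance, the “remember this device” behavior mentioned above isn’t hard to get right in principle. As a purely illustrative sketch (the names and signing key here are hypothetical, not any particular site’s implementation), a site can hand the browser a signed, expiring token after a successful 2-factor login, and then skip the second factor while that token remains valid:

    import hashlib
    import hmac
    import json
    import time

    SERVER_KEY = b"hypothetical-server-side-secret"  # never exposed to clients

    def issue_device_token(user_id, days_valid=30):
        """Create a signed token marking this device as already verified."""
        payload = json.dumps({"user": user_id,
                              "exp": time.time() + days_valid * 86400})
        sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return payload + "." + sig  # typically stored in a browser cookie

    def device_still_trusted(token, user_id):
        """Verify the signature and expiration before skipping the second factor."""
        payload, _, sig = token.rpartition(".")
        expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        data = json.loads(payload)
        return data["user"] == user_id and data["exp"] > time.time()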

By far the best 2-factor implementation I know of is Google’s. Their world class privacy/security teams have for years now been deploying 2-factor with the full range of choices and options I noted above. This is the way it should be done.

Yet even Google has to deal with the “it won’t happen to me” mindset syndrome on the part of users.

This is why I am now convinced that at least the major Web firms must begin moving gradually toward the mandatory use of 2-factor methods for users accessing these sites.

Just as responsible websites won’t permit a user to create an account without a password, and many attempt to prevent users from selecting incredibly weak passwords, we must start the process of requiring 2-factor use on a routine basis, both for the protection of users and of the companies that are serving them — and for the protection of society in a broader sense as well. We can no longer permit this to be simply an optional offering that vast numbers of users ignore.

This will indeed be a painful bullet to bite in some important respects. Doing 2-factor properly isn’t cheap, but it isn’t rocket science either. High quality commercial, proprietary, and open source solutions all exist. User education will be critical. There will be some user backlash to be sure. Poor quality 2-factor systems will need to be upgraded on a priority basis before the process of requiring 2-factor use can even begin.

It’s significant work, but if we care about our users (and stockholders!) we can no longer keep kicking this can down the road. 

The sorry state of most user authentication systems that don’t employ 2-factor has been a bonanza for all manner of crooks and hackers, both for the ones “only” seeking financial gain and for the ones seeking to undermine democratic processes. 

The deployment and required use of quality 2-factor systems won’t completely seal the door against these evil forces, but will definitely make their tasks significantly more difficult. 

We can no longer accept anything less.

–Lauren–

Fake News and Google: What Does a Top Google Search Result Really Mean?

Controversy continues to rage over how Holocaust denial sites and related YouTube videos have achieved multiple top and highly-ranked search positions on Google for various forms and permutations of the question “Did the Holocaust really happen?” — and what, if anything, Google intends to ultimately do about these outright racist lies achieving such search results prominence.

If you’re like most Internet users, you’ve been searching on Google and viewing the resulting pages of blue links for many years now.

But here’s something to ponder that you may not have ever really stopped to think about in depth: What does a top or otherwise high search result on Google really mean?

This turns out to be a remarkably complex issue.

The ranking of search results is arguably the most crucial aspect of core search functionalities. I don’t know the details of how Google’s algorithms make those determinations, and if I did know I couldn’t tell you — this is getting into “crown jewel” territory. This is one of Google’s most important and best kept secrets.

It’s not just important from business and competitive aspects, but also in terms of serving users well.

Google is continually bombarded by folks trying to use all manner of “dirty tricks” to try to boost their search ranks and visibility — in the parlance of the trade, Black Hat SEO (Search Engine Optimization). Not all SEO per se is evil — simply having a well organized site using modern coding practices is essentially a kind of perfectly acceptable and recommended “White Hat” SEO.

But if details of Google’s ranking algorithms were known, it could theoretically help underhanded players use various technical tricks to try to “game” the system to achieve fraudulently high search ranks.

It’s crucial not to confuse search links that are the results of these Google algorithms — technically termed “organic” or “natural” search results — with paid ad links that may appear above those organic results. Google always clearly marks the latter as “Ad” or “Sponsored” and these must always be considered in the context of being paid insertions that are dependent on the advertisers’ continuing ability to pay for them.

Until just a few years ago, Google’s organic search results always represented “simply” what Google felt were the “best” or “most relevant” link results for a given user’s query.

But the whole situation became enormously more complex when Google began offering what it deemed to be actual answers to questions posed in some queries, rather than only the familiar set of links.

In simple terms, such answers are typically displayed above (and/or to the right of) the usual search result links. These can come from a wide variety of sources, often related to the top organic search result, with one prominent source being Wikipedia.

Google’s philosophy about this — repeatedly stated publicly — is that if a user is asking a straightforward question and Google knows the straightforward answer, it can make sense to provide that answer directly rather than only the pages of blue links.

This makes an enormous amount of good sense.

Yet it also introduced a massive complication which is at the foundation of the Holocaust denial and other fake news, fake information controversies.

Google Search has earned enormous trust around the world. Users assume that when Google ranks organic results to a query, it does so based on a sound, scientific analysis.

And here’s the absolutely crucial point: It is my belief, based on continuing interactions with Google users and other data I’ve been collecting over an extended period, that most Google users do not commonly differentiate between what Google considers to be “answers” and what Google considers “merely” to be ordinary search result links.

That is, users overall have come to trust Google to such an extent that they assume Google would not respond to a specific question with highly ranked links that are outright lies and falsifications.

Again, Google doesn’t consider all of those to be “specific answers” — Google rather considers the vast majority to be simply the “best” or “most relevant” links based on the internal churning of their algorithm.

Most Google users don’t make this distinction. To them, the highest ranking organic links that appear in response to questions are assumed to likely be the “correct” answers, since they can’t imagine Google knowingly highly ranking fake news or false information in response to such queries.

As Strother Martin’s character “Captain” famously proclaimed in the 1967 film “Cool Hand Luke” – “What we’ve got here is failure to communicate.”

Part of the problem is that Google’s algorithms appear outwardly to be tuned toward topics where specific answers are not controversial. It’s one thing to see a range of user-perceived answers to a question like “What is the best flavor of ice cream?” But when it comes to the truth of the Holocaust for example, there is no room for maneuvering, any more than there is when answering other fact-based questions, such as “Is the moon made of green cheese?”

Many observers are calling for Google to manually eliminate or manually downrank outright lies like the Holocaust denials.

I am unenthusiastic about such approaches. I would much prefer that scalable, automated methods be employed in these contexts whenever possible. Some governments are already proposing false “solutions” that amount to horrific new censorship regimes (that could easily make the existing and terrible EU “Right To Be Forgotten” look like a veritable picnic by comparison).

I would much prefer to see this set of issues resolved via various forms of labeling to indicate highly ranked items that are definitively false (please see: Action Items: What Google, Facebook, and Others Should Be Doing RIGHT NOW About Fake News).

Also important could be explicit notices from Google indicating that they are not endorsing such links in any way and do not represent them as being “correct answers” to the associated queries. A general educational outreach by Google to help users better understand Google’s view of what highly ranked search results actually represent could also potentially be very useful.

As emotionally upsetting as the fake news and fake information situation has become, especially given the prominent rise of violent, racist, often politically motivated lies in this context, there are definitely ways forward out of this current set of dilemmas, so long as both we and the firms involved acknowledge that serious actions are needed and that the status quo is definitely no longer acceptable.

–Lauren–

Administrivia: Observing Google: “Tough Love”

Lately I’ve been receiving a significant spike in email from readers asking various forms of the question:

What is your true stance regarding Google?

In particular, they seem unable to grasp how I can send out one blog post or other item that is significantly critical of some aspect of Google, then another post that is highly complimentary of a different aspect.

I view the question as frankly rather shallow and illogical. One might as well ask “What is your true opinion of life?”

Google is a great firm — a very large company of enormous complexity, operating at the leading edge of technology’s intersection with privacy, security, and one way or another, most other aspects of society.

It would be foolhardy in the extreme to evaluate Google as if it were some sort of monolithic whole (though the true “Google Haters” seem to do exactly that most of the time).

As for myself, when I believe that Google is making a mistake that is causing them to fall short of the high standards of which I feel they’re capable, I explicitly tell them so and I pull no punches in that analysis. When my view is that they’re doing great work (which is overwhelmingly more often the case) it’s my pleasure to say so clearly and explicitly.

If you wish to call this something akin to “tough love” regarding Google on my part, I won’t argue.

Be seeing you.

–Lauren–

Action Items: What Google, Facebook, and Others Should Be Doing RIGHT NOW About Fake News

Today is action items day, and there isn’t a moment to lose before someone gets killed as a result of the fake news scourge. It nearly happened a couple of days ago, when some wacko invaded a pizza restaurant and shot it up looking for the youthful “sex slaves” that the fake “Pizzagate” story claims exist (a total fabrication created out of whole cloth, and part of the complex of fake anti-Hillary sex stories being promoted even by highly placed wackos in Trump’s White House circle). In fact, there are already new fake stories circulating regarding the shooting itself.

There are some ongoing efforts to begin dealing with fake and false news at the big firms. Facebook appears to be running an experiment asking some users to rate how “misleading” some link titles might be. This will no doubt collect some interesting data and may be a small portion of solutions, but of course cannot alone solve the underlying problems.

Having spent enough time inside Google to have some sense of how the world looks at Google Scale (i.e. “Big” with a Capital “B”), I am convinced that efforts to deal with the Fake/False News problem must primarily be based on algorithmic, automated systems. Humans will also still have important roles to play in this process in terms of tagging, flagging, and verification at least — especially for items that are suspected or verified fakes but are still trending upward very rapidly.

So, Action Item #1: We should be looking at automated systems for doing the bulk of the first level work to detect fakes, or else we’ll be swamped from the word go.

And I believe that the foundational resources to get this done do exist. Google and Facebook (just to name two obvious examples) have powerful AI architectures that could be leveraged toward such tasks, given the will to do so.
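To be clear about what automated “first level work” might even mean at the smallest possible scale, here’s a deliberately tiny, hypothetical sketch: a toy text classifier producing a “likely fake” score for triage. A real system would need vastly more data, far richer signals (sources, sharing patterns, account behaviors), and far stronger models:

    # Toy first-pass triage: flag items for human review based on a score.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "Secret child sex ring run from pizza restaurant basement",    # verified fake
        "City council approves new crosswalk near elementary school",  # legitimate
    ]
    labels = [1, 0]  # 1 = fake, 0 = legitimate

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(headlines, labels)

    score = model.predict_proba(["Shocking secret ring found in basement"])[0][1]
    print("Likely-fake score:", round(score, 2))  # high scores go to human review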

Action Item #2: We must understand the true dynamics of how fake and false news are shared — how they rapidly reach large numbers of users and push high into search results. It’s popular to simply assert that everyone believing/sharing these fake stories is just evil or stupid (or both).

That’s way too simplistic an assertion. Even over the very short time that my factsquad.com fake news data collection effort has been active, obvious patterns in the data are already emerging.

One pattern that hits you in the face immediately is that the vast majority of users who share fake news are not stupid and not evil, but they are very much confused by the misinformation surrounding them. There’s a sense that “Well, if it looks professional, or if this ranks highly in search, or if Facebook showed it to me, or my friends shared it with me, it at least might be true, there might be something to it somehow, so I’ll share it too!”

This appears to be a far, far larger group of users than the ones who are actually generating and voluntarily wallowing in this trash. In fact, the latter group is voluntarily in its own “echo chambers” — and as with most any group of dedicated haters, Internet-based efforts to change their minds will likely be wasted.

But for a much larger segment of users who are misinformed, confused, and don’t even realize that they have become involuntarily trapped in echo chambers by fake and false news, there is definitely still hope.

This emphasizes a key point that various observers including myself have previously noted. Older users and other users with less Internet experience tend to believe items that look professional, that appear to be from sources that are visually attractive and seemingly structured in a more “news traditional” manner. On the other hand, younger users or other users with more Internet experience tend to care much less — or not at all — about the “professionalism” of the source and give much more credence to items that rank highly in search, are surfaced by services like Facebook, or are widely shared by their friends.

And this gets us to the crux of the matter. By and large, the Internet economy has evolved into a click-based popularity contest. Both in terms of search and social media, it is basically designed to surface content based on how many people appear to have interest in that content. That’s something of a simplification of course, but it’s fairly close to the mark. And let’s face it, given two stories presented as accurate — one that discusses how people eat pizza, the other an actually fake story describing a nonexistent child sex ring — which is likely to get the most clicks — and so the most revenue?

While a variety of the big fake news sites are related to persons with political motives, a large number are operated by individuals who have no political motives at all — they are “merely” enriching themselves by creating false stories that they believe will get the most shares and “engagement” clicks for their own monetary enrichment.

On the other hand, I’ll tell you as one of the individuals involved in Internet development for decades that we did not build and grow the Net to be a tool for paying people to post fake news, nor to use such false content to help elect a lying sociopath as President of the United States.

Yet the click-based Internet economy is what it is, and alternative models such as subscriptions have seen only limited success. Other concepts such as micropayments even less so.

So what are we to do? This brings us to …

Action Item #3: I continue to strongly feel that censorship is not the best answer to this set of problems, and that more information — not less — is the path toward solutions. Downranking — where fake stories would still exist but no longer be so prominently featured in search results or system shares — can be a viable approach if handled with caution. In particular, only the most serious and dangerous fake content would typically be considered for manual downranking. For most fake news situations, organic (natural) downranking is a much more desirable procedure.

And that’s where labeling comes in. If fake news that has managed to reach high search results and massive sharing were labeled as fake or in some other relevant distinctive manner, I believe that this would give some pause to that large group of confused users, result in less sharing of fakes, and ultimately in the organic downranking of many such stories.

What’s more, in comments I’ve received it’s clear that many users are desperate for help in evaluating the truth of the content that comes pouring in at them now. How can we really blame them for accepting false stories as real when we don’t even make the effort to point out and label the fakes that we definitely know about?

Obviously it’s the case that detecting, evaluating, and labeling content on an Internet scale — even if we restrict our efforts to highly trending and highly ranked items —  is a very significant undertaking, even with the best of AI resources doing the bulk of the work. Such issues as the exact wording of labels can also be complex. Do we actually want to label a known false story as “false” per se? Snopes does this successfully at their relatively limited scale, but they don’t have particularly deep pockets, either (ironically but predictably, all manner of fake news stories are written and widely promulgated against Snopes). Another approach as an alternative to a specific “false” label would be the assigning of a kind of “confidence rank” to such stories — with the known fakes perhaps getting a rank of zero.
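To illustrate the “confidence rank” alternative (the numbers and thresholds here are purely hypothetical choices of mine, not anyone’s actual policy), the display side of such a scheme could be as simple as mapping a story’s verification score to a label, with the known fakes pinned at zero:

    def label_for_confidence(score):
        """Map a hypothetical 0.0-1.0 verification score to a display label."""
        if score == 0.0:
            return "DEEMED FALSE"              # verified fakes pinned at zero
        if score < 0.5:
            return "DISPUTED - LOW CONFIDENCE"
        if score < 0.9:
            return "UNVERIFIED"
        return None                            # well-verified items get no label

    # A search results page or share widget would simply display:
    print(label_for_confidence(0.0))  # -> DEEMED FALSE

The hard part, obviously, is producing that score in the first place, not displaying it.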

As always, the devil is in the details, but I’m convinced that some combination of these or related concepts can be made to work, especially given that the status quo is no longer tenable.

Action Item #4: Parody as a test case. The ability of many (most?) people to recognize parody or satire on the Net (unless it is clearly labeled) can be very poor. I ran into this myself when I wrote April Fools’ columns for the CACM journal — even with that highly technical audience some readers assumed that what I thought was obvious and outrageous satire was actually real. The same thing happened with a satire video I released on YouTube years ago as well.

A significant number of the “fake news” stories are sourced from satire sites (that is, at least ostensibly satirical sites — many seem to call themselves satire in small print to try to cover fake items with clearly political motives, or mix fake and real items on their sites to cause even more confusion). Yet even items from known satire sources like “The Onion” — and “Borowitz” from “The New Yorker” — frequently explode into mass visibility without any indication that they aren’t “legit” articles.
 
In some cases this is just by virtue of the fact that typical sharing or search results may give no obvious indication that these are satire or parody — and such items may be innocently shared to large numbers of persons as if they were serious items. In other cases, the sharer knows that they’re dealing with satire but purposely promotes the items as non-satire if this fits with their political agenda of the moment.
 
In either case, if such stories were clearly marked (as parody or satire, referencing the original source) in search results or in Facebook shares, Twitter feeds, etc., the purposeful and/or accidental damage they can do when they’re inappropriately interpreted by users as serious items could be significantly reduced.
 

Such specific labeling of individual items that are known to be originally sourced from self-proclaimed satire/parody sites — irrespective of their current share or search results links — could provide something of an initial proving ground for the overall labeling concept. If such items could be identified in the various search and sharing systems as having such sites as their origins, it could help to demonstrate the usefulness of this labeling technique on this specific class of material that would be relatively straightforward to target. User reactions to these labels could then be studied toward the launch of a possible much broader labeling initiative dealing with fake/false news in a more comprehensive manner.
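The mechanical part of such origin-based labeling really is straightforward. Here’s a sketch with a purely illustrative domain list and hypothetical function names; the actual work lies in curating that list and plumbing the label into search and sharing interfaces:

    from urllib.parse import urlparse

    # Illustrative only: sites whose operators describe their content as satire.
    SELF_DESCRIBED_SATIRE_SITES = {"theonion.com", "satire-site.example"}

    def satire_label(url):
        """Return a label if an item originates from a known satire source."""
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host in SELF_DESCRIBED_SATIRE_SITES:
            return "SATIRE (original source: %s)" % host
        return None

    print(satire_label("https://www.theonion.com/some-story"))  # -> SATIRE (...)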

None of this will be easy, nor are these the only possible approaches. But we must immediately begin vigorously moving down the paths towards practical solutions to the serious, rapidly escalating issues of fake news and related problems on the Internet, unless we’re satisfied to be increasingly suffocated under a growing and ultimately disastrous deluge of lies.

–Lauren–

Study: Collecting URLs and Other Data About Fake/False News on the Net

Greetings. I have initiated a study to explore the extent of fake/false news on the Internet. Please use the form at:

https://factsquad.com

to report fake or false news found on traditional websites and/or in social media postings.

Any information submitted via this form may be made public after verification, with the exception of your name and/or email address if provided (which will be kept private and will not be used for any purposes other than this study).

URLs anywhere in the world may be reported, but please only report URLs whose contents are in English for now. Please only report URLs that are public and can be accessed without a login being required.

Thank you for participating in this study to better understand the nature and scope of fake/false news on the Net.

–Lauren–

Google Home Drops Insightful “Donald Trump Is Definitely Crazy” Search Answer

Two days ago, I uploaded the YouTube video linked below, which recorded the insightful response I received from Google Home to the highly relevant question: “Is Donald Trump Insane?” I noted Google’s accurate appraisal on Google+ and in my various public mailing lists. The next day (yesterday) the response was (and currently is) gone for the same query to Home — replaced by the generic: “I can do a search for that.”

Interestingly, this seems to have only occurred for responses from Google Home itself. The original (text-based) answer is currently still appearing for the same query made by keyboard or voice to Google Search through conventional desktop or mobile means (however, at least for me the response is no longer being spoken out loud — and I had earlier reports that the answer response was spoken on all capable platforms).

Let’s face it — what helps to make the original answer so great is the pacing and inflections of the excellent Google Home synthetic voice! It’s just not the same reading it as text.

There would seem to be only two possibilities for what’s going on.

One possibility is that the normal churning of Google’s algorithms dropped that answer from Home (and replaced it with the generic response) solely through ordinary programmed processes.

Of course, the other possibility is that after I publicized this brilliant, wonderful, and fully accurate spoken response, it was manually excised from Home by someone at Google for reasons of their own, about which I will not speculate here and now.

Either way, the timing of this change, only hours after my release of the related video, is — shall we say — fascinating. 

https://www.youtube.com/watch?v=58R2kEL6E6Q

–Lauren–

How Fake and False News Distort Google and Others

With all of the current discussions regarding the false and fake news glut on the Internet — often racist in nature, some purely domestic in origin, some now believed to be instigated by Putin’s Russia — it’s obvious that the status quo for dealing with such materials is increasingly untenable.

But what to do about all this?

As I have previously discussed, my general view is that more information — not less — is the best solution to these distortions that may have easily turned the 2016 election on its head.

Labeling, tagging, and downranking of clearly false or fake posts is an approach that can help to reduce the tendency for outright lies to be treated equivalently with truth in social media and search engines. These techniques also avoid invoking the actual removal of lying items themselves and the “censorship” issues that then may come into play (though private firms quite appropriately are indeed free to determine what materials they wish to permit and host — the First Amendment only applies to governmental restraints on speech in the USA).

How effective might such labeling be? Think about the labeling of “fake news” in the same sort of vein as the health warnings on cigarette packs. We haven’t banned cigarettes. Some people ignore the health warnings, and many people still smoke in the USA. But the number of people smoking has dropped dramatically, and studies show that those health warnings have played a major role in that decrease.

Labeling fake and false news to indicate that status — and there’s a vast array of such materials where no reasonable argument can exist that they are true — could have a dramatic positive impact. Controversial? Yep. Difficult? Sure. But I believe that this can be approached gradually, starting with top trending stories and top search results.

A cure-all? No, just as cigarette health warnings haven’t been cure-alls. But many lives have still been saved. And the same applies to dealing with fake news and similar lies masquerading as truthful posts.

Naysayers suggest that it’s impossible to determine what’s true or isn’t true on the Internet, so any attempts to designate anything that’s posted as really true or false must fail. This is nonsense. And while I’ve previously noted some examples (Man landing on the moon, Obama born in Hawaii) it’s not hard to find all manner of politically-motivated lies that are also easy to ferret out as well.

For example, if you currently do a Google search (at least in the USA) for:

southern poverty law center

You will likely find an item on the first page of results (even before some of the SPLC’s own links) from online Alt-Right racist rag Breitbart — whose traditional overlord Steve Bannon has now been given a senior role in the upcoming Trump administration.

The link says:

FBI Dumps Southern Poverty Law Center as Hate Crimes Resource

Actually, this is a false story, dating back to 2014. It’s an item that was also picked up from Breitbart and republished by an array of other racist sites that hate the good work the SPLC does fighting both racism and hate speech.

Now, look elsewhere on that page of Google search results — then on the next few pages. No mention of the fact that the original story is false, or that even the FBI itself issued a statement noting that they were still working with the SPLC on an unchanged basis.

Instead of anything to indicate that the original link is promoting a false story, what you’ll mostly find on succeeding pages is more anti-SPLC right-wing propaganda.

This situation isn’t strictly Google’s fault. I don’t know the innards of Google’s search ranking algorithms, but I think it’s a fair bet that “truth” is not a major signal in and of itself. More likely there’s an implicit assumption — which no longer appears to necessarily hold true — that truthful items will tend to rise to the top of search results via other signals that form inputs to the ranking mechanisms.

In this case, we know with absolute certainty that the original story on page one of those results is a continuing lie, and the FBI has confirmed this (in fact, anyone can look at the appropriate FBI pages themselves and categorically confirm this fact as well).

Truth matters. There is no equivalency between truth and lies, or otherwise false or faked information.

In my view, Google should be dedicated to the promulgation of widely accepted truths whenever possible. (Ironic side note: The horrible EU “Right To Be Forgotten” — RTBF — that has been imposed on Google, is itself specifically dedicated to actually hiding truths!)

As I’ve suggested, the promotion of truth over lies could be accomplished both by downranking of clearly false items, and/or by labeling such items as (for example) “DEEMED FALSE” — perhaps along with a link to a page that provides specific evidence supporting that label (in the SPLC example under discussion, the relevant page of the FBI site would be an obvious link candidate).

None of this is simple. The limitations, dynamics, logistics, and all other aspects of moving toward promoting truth over lies in social media and search results will be an enormous ongoing effort — but a critically important one.

The fake news, filter bubbles, echo chambers, and hate speech issues that are now drowning the Internet are of such a degree that we need to call a major summit of social media and search firms, experts, and other concerned parties on a multidisciplinary basis to begin hammering out practical industry-wide solutions. Associated working groups should be established forthwith.

If we don’t act soon, we will be utterly inundated by the false “realities” that are being created by evil players in our Internet ecosystems, who have become adept at leveraging our technology against us — and against truth.

There is definitely no time to waste.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Blocked by Lauren (“The Motion Picture”)

With nearly 400K Google+ followers, I’ve needed to block “a few” over the years to keep order in the comment sections of my threads. I’m frequently asked for that list — which of course is composed entirely of public G+ profile information. But as far as I know there is no practical way to export this data in textual form. However, when in doubt, make a video! By the way, I do consider unblocking requests, and frequently unblock previously blocked profiles as a result, depending on specific circumstances. Happy Thanksgiving!

https://www.youtube.com/watch?v=GX79fYTSjFE

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Facebook, Google, Twitter, and Others: Start Taking Hate Speech Seriously!

Recently, in Crushing the Internet Liars, I discussed issues relating to the proliferation of “fake news” on the Internet (via social media, search, and other means) and the relationship of personalization-based “filter bubbles” and “echo chambers” — among other effects.

A tightly related set of concerns, also rising to prominence during and after the 2016 election, are the even broader concepts of Internet-based hate speech and harassment. The emboldening of truly vile “Alt-Right” and other racist, antisemitic white supremacist groups and users in the wake of Trump’s election has greatly exacerbated these continuing offenses against ethics and decency — offenses that in some cases represent actual violations of law.

Lately, Twitter has been taking the brunt of public criticism regarding harassment and hate speech — and their newly announced measures to supposedly combat these problems seem mostly to be potentially counterproductive “ostrich head in the sand” tools that would permit offending tweets to continue largely unabated.

But all social media suffers from these problems to one degree or another, and I feel it is fair to say that no major social media firm really takes hate speech and harassment seriously — or at least as seriously as ethical firms must.

To be sure, all significant social media companies provide mechanisms for reporting abusive posts. Some systems pair these with algorithms that attempt to ferret out the worst offenders proactively (though hate users seem to quickly adapt to bypass these as rapidly as the algorithms evolve).

Yet one of the most frequent questions I receive regarding social media is “How do I report an abusive posting?” Another is “I reported that horrible posting days ago, but it’s still there, why?”

The answer to the first question is fairly apparent to most observers — most social media firms are not particularly interested in making their abuse reporting tools clear, obvious, and plainly visible to both technical and nontechnical users of all ages. Often you must know how to access posting submenus to even reach the reporting tools.

For example, if you don’t know what those three little vertical dots mean, or you don’t know to even mouse over a posting to make those dots appear — well, you’re out of luck (this is a subset of a broader range of user interface problems that I won’t delve into here today).

The second question — why aren’t obviously offending postings always removed when reported — really needs a more complex answer. But to put it simply, the large firms have significant problems dealing with abusive postings at the enormous scale of their overall systems, and the resources they’ve been willing to put into reporting and related human review mechanisms have been relatively limited — these just aren’t profit center items.

They’re also worried about false abuse reports, of course — either purposeful or accidental — and one excuse for “hiding” the abuse reporting tools may be to try to reduce those types of reports from users.
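One way to keep reporting tools in plain sight without being swamped by bogus reports would be to weight each report by the reporter’s track record. Here’s a toy sketch of that idea; the function names, weights, and threshold are all invented for illustration and describe no actual platform’s system:

    REVIEW_THRESHOLD = 3.0  # assumed tunable policy value

    def report_weight(reporter_history):
        """Weight a report by the fraction of the reporter's past reports upheld."""
        upheld, total = reporter_history
        if total == 0:
            return 0.5  # neutral weight for first-time reporters
        return max(0.1, upheld / total)

    def should_escalate(reports):
        """reports: list of (upheld, total) histories, one per reporter of a post."""
        score = sum(report_weight(h) for h in reports)
        return score >= REVIEW_THRESHOLD

Under a scheme like this, first-time reporters still count, serial false-flaggers are heavily discounted, and there’s no need to bury the report button in submenus.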

All that having been said, it’s clear that the status quo when it comes to dealing with hate speech or harassing speech on social media is no longer tenable.

And before anyone has a chance to say, “Lauren, you’re supposed to be a free speech advocate. How can you say this?”

Well, it’s true — I’m a big supporter of the First Amendment and its clauses regarding free speech.

But what is frequently misunderstood is that this only applies to governmental actions against free speech — not to actions by individuals, private firms, or other organizations that are not governmental entities.

This is one reason why I’m so opposed to the EU’s horrific “Right To Be Forgotten” (RTBF) — it’s governments directly censoring the speech of third parties. It’s very wrong.

Private firms though most certainly do have the right to determine what sorts of speech they choose to tolerate or support on their platforms. That includes newspapers, magazines, conventional television networks, and social media firms, to name but a few.

And I assert that it isn’t just the right of these firms to stamp out hate speech and harassment on their platforms, but their ethical responsibility to do so as well.

Of course, if the Alt-Right or other hate groups (and certainly the right-wing wackos aren’t the only offenders) want to establish their own social media sites for that subset of hate speech that is not actually illegal — e.g. the “Trumpogram” service — they are free to do so. But that doesn’t mean that the Facebooks, Googles, and Twitters of the world need to permit these groups’ filth on their systems.

Abusive postings in terms of hate speech and harassing speech certainly predate the 2016 election cycle, but the election and its aftermath demonstrate that the major social media firms need to start taking this problem much more seriously — right now. And this means going far beyond rhetoric or public relations efforts. It means the implementation of serious tools and systems that will have real and dramatic impacts on helping to stamp out the postings of the hate and other abuse mongers in our midst today.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Unacceptable: How Google Undermines User Trust by Blocking Users from Their Own Data

UPDATE (November 18, 2016): After much public outcry, Google has now reversed the specific Pixel-related Google account bans noted in this post. Unfortunately, the overall “Whose data is it?” problem discussed in this post persists, and it’s long past time for Google to appropriately address this issue, which continues to undermine user trust in a fine company.

– – –

There are times when Google is in the right. There are times when Google is in the wrong. Far more often than not, they’re on the angels’ side. But there’s one area where they’ve had consistent problems dating back years: cutting off users from those users’ own data when there’s a dispute regarding Google account status.

A new example of this recurring problem — an issue about which I’ve heard from large numbers of Google users over time — has just surfaced. In this case, it involves the reselling of Google Pixel phones in a manner that apparently violates the Google Terms of Service, with the result that a couple of hundred users have reportedly been locked out of their Google accounts and all of their data, at least for now. 

This means that they’re cut off from everything they’ve entrusted to Google — mail, documents, photos, the works.

Here and now, I’m not going to delve into the specifics of this case — I don’t know enough of the details yet. The entire area of Google accounts suspension, closure, recovery, and so on is complex to say the least. Most times (but not always) users are indeed at fault — one way or another — when these kinds of events are triggered. And the difficulty of successfully appealing a Google account suspension or closure has become rather legendary.

Even recovering a Google account due to the loss of a password can be difficult if you haven’t taken proactive steps to aid in that process ahead of time — steps that I’ve previously discussed in detail.

But the problem of what happens to users’ data when they can’t access their accounts — for whatever reasons — is something that I’ve personally been arguing with Google about literally for years, without making much headway at all.

Google has excellent mechanisms for users to download their data while they still have account access. Google even has systems for you to specify someone else who would have access to your account in case of emergency (such as illness or accident), and policies for dealing with access to accounts in case of the death of an account holder.

The reality though, is that users have been taught to trust Google with ever more data that is critical to their lives, and most people don’t usually think about downloading that data proactively.

So when something goes wrong with their account, and they lose access to all of that data, it’s like getting hit with a ton of bricks.

Again, this is not to say that users aren’t often — in fact usually — in the wrong (at least in some respect) when it comes to account problems. 

But unless there is a serious — and I mean serious, like child porn, for example — criminal violation of law, my view is that in most circumstances users should have some means to download their data from their account even if it has been suspended or closed for good cause.

If they can’t use Google services again afterwards, them’s the breaks. But it’s still their own data we’re talking about, not Google’s.

Google has been incredibly resistant to altering this aspect of their approach to user account problems. I am not ignorant of their general reasoning in this category of cases — but I strongly believe that they are wrong with their essentially one-size-fits-all “death penalty” regime in this context.

Nobody is arguing that there aren’t some specific situations where blocking a violating user (or a user accused of violations) from accessing their data on Google services is indeed justified. But Google doesn’t seem to have any provisions for anything less than total data cutoff when there’s a user account access issue, even when significant relevant legal concerns are not involved.
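For illustration only, the graduated regime being argued for here could start with a policy check as simple as the following sketch, where only a genuine legal bar cuts off an export-only path to the user’s own data. The states and function are hypothetical, not anything Google actually implements:

    from enum import Enum, auto

    class AccountState(Enum):
        ACTIVE = auto()
        SUSPENDED_TOS = auto()   # Terms of Service violation
        CLOSED_BY_USER = auto()
        LEGAL_HOLD = auto()      # e.g., illegal content, court order

    def may_export_data(state: AccountState) -> bool:
        """Export-only access survives everything short of a legal bar."""
        return state is not AccountState.LEGAL_HOLD

A suspended or closed account would lose access to services, but keep a time-limited, download-only window for the user’s own data except in those genuinely serious legal cases.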

This continuing attitude by Google does not engender user trust in Google’s stewardship of user data, even though most users will never run afoul of this problem.

These kinds of actions by Google provide ammunition to the Google Haters and are self-damaging to a great firm and the reputation of Googlers everywhere, some of whom have related to me their embarrassment at trying to explain such stories to their own friends and families.

Google must do better when it comes to this category of user account issues. And I’ll keep arguing this point until I’m blue in the face and my typing fingertips are bruised. C’mon Google, please give me a break!

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Crushing the Internet Liars

Frankly, this isn’t the post that I had originally intended. I had a nearly completed blog draft spinning away happily on a disk, a draft that presented a rather sedate, scholarly, and somewhat introspective discussion of how Internet-based communications evolved to reach the crisis point we now see regarding misinformation, filter bubbles, and so-called echo chambers in search and social media.

I just trashed that draft. Bye!

Numerous times over the years I’ve tried the scholarly approach in various postings regarding the double-edged sword of Internet personalization systems — capable of bringing both significant benefits to users but also carrying significant and growing risks.

Well, given where we stand today after the 2016 presidential election, it appears that I might have just as well been doing almost anything else rather than bothering to write that stuff. Toenail trimming would have likely been a more fruitful use of my time.

So now — today — we must deal with this situation while various parties are hell-bent toward turning the Internet into a massive, lying propaganda machine to subvert not only the democratic process, but our very sense of reality itself.

Much of this can be blamed on the concept of “false equivalency” — which runs rampant on cable news, mainstream Internet news sites, and throughout social media such as Facebook (which is taking the brunt of criticism now — and rightly so), plus on other social media ecosystems.

Fundamentally, this idea holds that even if there is widespread agreement that a particular concept is fact, you are somehow required to give “equal time” to wacko opposing views.

This is why you see so much garbage prominently surfaced from nutcases like Alex Jones — who believes the U.S. government blew up the World Trade Center buildings — or Donald Trump and his insane birther attacks on Obama, that Trump used to jump-start his presidential campaign. It doesn’t take more than half a brain to know that such statements are hogwash.

To be sure, it’s difficult to really know whether such perpetually lying creatures actually believe what they’re saying — or are simply saying outrageous things as leverage for publicity. In the final analysis though, it doesn’t much matter what their motives really are, since the damage done publicly is largely equivalent either way.

The same can be said for the wide variety of fake postings and fake news sites that increasingly pollute the Net. Do they believe what they say, or are they simply churning out lies on a twisted “the end justifies the means” basis? Or are they just sick individuals promoting hate (often racism and antisemitism) and chaos? No doubt all of the above apply somewhere across the rapidly growing range of offenders, some of whom are domestic in nature, and some who are obviously operating under the orders of foreign leaders such as Russia’s Putin.

Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.

And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see follow-ups that expose the lying posts as who saw the original lies themselves.

Facebook’s Mark Zuckerberg is publicly denying that Facebook has a significant role in the promotion of lies. He denies that Facebook’s algorithms for controlling which postings users see create echo chambers where users only see what they already believe, causing lies and distortions to spread ever more widely without truth having a chance to invade those chambers. But Facebook’s own research tells a very different story, because Facebook insists that exactly those kinds of controlling effects occur to the benefit of Facebook’s advertisers.

Yet this certainly isn’t just a Facebook problem. It covers the gamut of social media and search.

And the status quo can no longer be tolerated.

So where do we go from here?

Censorship is not a solution, of course. Even the looniest of lies, assuming the associated postings are not actually violating laws, should not be banned from visibility.

But there is a world of difference between a lying post existing, vis-a-vis the actual widespread promotion of those lies by search and social media.

That is, simply because a lying post is being viewed by many users, there’s no excuse for firms’ algorithms to promote such a post to a featured or other highly visible status, creating a false equivalency of legitimacy by virtue of such lies being presented in close proximity to actual facts.

This problem becomes particularly insidious when combined with personalization filter bubbles, because the true facts are prevented from penetrating users’ hermetically sealed social media worlds that have been filled with false postings.

And it gets worse. Mainstream media in a 24/7 news cycle is hungry for news, and all too often now, the lies that germinate in those filter bubbles are picked up by conventional media and mainstream news sites as if they were actual facts. And given the twin pressures of reduced budgets and the race to beat other venues to the punch, such lies frequently are broadcast without any significant prior fact-checking at all.

So little by little, our sense of what is actually real — the differences between truth and lies — becomes distorted and diluted.

Again, censorship is not the answer.

My view is that more information — not less information — is the path toward reducing the severity of these problems.

Outright lies must not continue to be given the same untarnished prominence as truth in search results and in widely seen social media postings.

There are multiple ways to achieve this result.

Lying sites in search results can be visibly and prominently tagged as such in those results, be downranked, or both. Similar principles can apply to widely shared social media posts that currently are featured and promoted by social media sites primarily by virtue of the number of persons already viewing them. Because — let’s face it — people love to view trash. Lots of users viewing and sharing a post does not make it any less of a lie.
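As a toy illustration of that last point, a promotion score could multiply raw engagement by an independent credibility estimate, so that popularity alone never earns “featured” placement. All names, weights, and the threshold below are invented for the sketch, not any real platform’s ranking code:

    import math

    def promotion_score(views, shares, credibility):
        """credibility in [0, 1]; 0 = deemed false, 1 = uncontested."""
        engagement = math.log1p(views) + 2 * math.log1p(shares)
        return engagement * credibility  # a deemed-false post scores zero

    def should_feature(views, shares, credibility, threshold=10.0):
        """Feature a post only when credibility-weighted engagement clears the bar."""
        return promotion_score(views, shares, credibility) >= threshold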

As always, the devil is in the details.

This will be an enormously complex undertaking, involving technology, policy, public relations, and the law. I won’t even begin to delve into the details of all this here, but I believe that with sufficient effort — effort that we must now put forth — this is a doable concept.

Already, whenever such concepts are brought up, you quickly tend to hear the refrain: “Who are you to say what’s a fact and what’s a lie?”

To which I reply: “To hell with false equivalences!”

Man landed on the moon. Obama was born in Hawaii. Terrorists destroyed the World Trade Center with jet aircraft. Hillary Clinton never said she was going to abolish the Second Amendment. Donald Trump did say that he supported the war in Iraq. Denzel Washington did not say that he supported Trump. On and on and on.

There is a virtually endless array of stated facts that reasonable people will agree are correct. And if the nutcases want to promote their own twisted views on these subjects, that’s also fine — but those postings should be clearly labeled for what they are — not featured and promoted. As the saying goes, they’re free to have their own opinions, but not their own facts.

Obviously, this leaves an enormous range of genuinely disputed issues where the facts are not necessarily clear, often where only opinion and/or philosophy really apply. That’s fine too. They’re out of scope for these discussions or efforts.

But the outright Internet liars must be crushed. They shouldn’t be censored, but they must no longer be permitted to distort and contaminate reality by being treated on an equal footing with truth by major search and social media firms.

We built the Internet juggernaut. Now it’s our job to fix it where it’s broken.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

President Trump and the Nuclear Launch Codes

[Two screenshots from the original post]

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Elections and the Internet “Echo Chambers”

Back in a 2010 blog post, I noted the kinds of “echo chamber” effects that can result from personalization and targeting of various types of information on the Web. That particular posting concentrated on search personalization, but also noted the impact on Internet-based discussions, a situation that has become dramatically more acute with the continuing rise of social media. Given the current controversies regarding how “filter bubbles” and other algorithmically-driven information surfacing and restriction systems may impact users’ views and potentially increase political and religious radicalization — particularly in relation to the 2016 elections here in the USA — I believe it is relevant to republish that posting today, which is included below. Also of potential interest is my recently reposted item related to Internet fact checking.

Search Personalization: Blessing and Trap?
(Original posting date: September 16, 2010)

Greetings. Arguably the holy grail of search technology — and of many other aspects of Internet-based services today — is personalization. Providing users with personalized search suggestions, search results, news items, or other personalized services as quickly as possible, while filtering out “undesired” information, is a key focus not only of Google but of other enterprises around the world.

But does too much reliance on personalization create an “echo chamber” effect, where individuals are mainly (or perhaps totally) exposed to information that only fits their predetermined views? And if so, is this necessarily always beneficial to those individuals? What about for society at large?

Diversity of opinions and information is extremely important, especially today in our globally interconnected environment. When I do interviews on mainstream radio programs about Internet issues, it’s usually on programs where the overall focus is much more conservative than my own personal attitudes. Yet I’ve found that even though there’s often a discordance between the preexisting views of most listeners and my own sentiments, I typically get more insightful questions during those shows than in the venues where I spend most of my time online.

And one of the most frequent questions I get afterwards from listeners contacting me by email is: “How come nobody explained this to me that way before?”

The answer usually is that personalized and other limited focus information sources (including some television news networks) never exposed those persons to other viewpoints that might have helped them fully understand the issues of interest.

An important aspect of search technology research should include additional concentration on finding ways to avoid potential negative impacts from personalized information sources — particularly when these have the collateral effect of “shutting out” viewpoints, concepts, and results that would be of benefit both to individuals and to society.

Overall, I believe that this is somewhat less of a concern with “direct” general topic searches per se, at least when viewed as distinct from search suggestions. But as suggestions and results become increasingly commingled, this aspect also becomes increasingly complex. (I’ve previously noted my initial concerns in this respect related to the newly deployed Google Instant system).

Suggestions would seem to be an area where “personalization funneling” (I may be coining a phrase with this one) would be of more concern. And in the world of news searches as opposed to general searches, there are particularly salient related issues to consider (thought experiment: if you get all of your information from FOX News, what important facts and contexts are you probably missing?)

While there are certainly many people who (for professional or personal reasons) make a point to find and cultivate varied and opposing opinions, not doing so becomes much easier — and seemingly more “natural” — in the Internet environment. At least the possibility of serendipitous exposure to conflicting points of view was always present when reading a general audience newspaper or magazine, for example. But you can configure many Web sites and feeds to eliminate all but the narrowest of opinions, and some personalization tools are specifically designed to enhance this effect.

As our search and related tools increasingly focus on predicting what we want to see and avoiding showing us anything else (which naturally enough makes sense if you want to encourage return visits and show the most “attractive” ads to any given individual), the funneling effect of crowding out other materials of potential value appears to be ever more pronounced.

Add to that the “preaching to the choir” effect in many Internet discussions. True, there are forums with vibrant exchanges of views and conflicting opinions. But note how much of our Twitter and Buzz feeds are depressingly dominated by a chorus of “Attaboy!” yells from “birds of a feather” like-minded participants.

I am increasingly concerned that technologically-based Internet personalization — despite its many extremely positive attributes — also carries with it the potential for significant risks that are apparently not currently receiving the research and policy attention that they deserve.

If we do choose to assign some serious thinking to this dilemma, we certainly have the technological means to adjust our race toward personalization in ways that would help to balance out the equation.

This definitely does not mean giving up the benefits of personalization. However, we can choose to devote some of the brainpower currently focused on figuring out what we want to see, and work also toward algorithms that can help determine what we need to see.
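As a toy sketch of what such an algorithm might do, consider blending the usual personalization relevance with a bonus for viewpoints the user rarely encounters. The item fields, the viewpoint counts, and the mixing weight are all hypothetical:

    def rerank(items, user_viewpoint_counts, diversity_weight=0.3):
        """items: list of dicts with 'relevance' (0-1) and 'viewpoint' keys.
        user_viewpoint_counts: how often the user has seen each viewpoint."""
        total_seen = sum(user_viewpoint_counts.values()) or 1

        def blended(item):
            seen = user_viewpoint_counts.get(item["viewpoint"], 0)
            novelty = 1.0 - (seen / total_seen)  # rarer viewpoints score higher
            return ((1 - diversity_weight) * item["relevance"]
                    + diversity_weight * novelty)

        return sorted(items, key=blended, reverse=True)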

In the process, this may significantly encourage society’s broader goals of cooperation and consensus, which of necessity require — to some extent at least — that we don’t live our entire lives in confining information silos, ironically even while we’re surrounded by the Internet’s vast buffet of every possible point of view.

 – – –

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Why Google Home Will Change the World

Much has recently been written about Google Home, the little vase-like cylinder that started landing in consumers’ hands only a week or so ago. Home’s mandate sounds simple enough in theory — listen to a room for commands or queries, then respond by voice and/or with appropriate actions.

What hasn’t been much discussed, however, is how the Home ecosystem is going to change the lives of millions — eventually billions — of people for the better, in ways that most of us can’t even imagine today. It will drastically improve the lives of vast numbers of persons with visual and/or motor impairments, but ultimately will dramatically and positively affect the lives of everyone else as well.

Home isn’t the first device in this technology segment — nor is it the least expensive. Amazon came earlier with a more limited version that is cheaper than Home (and a model more expensive than Home as well).

But while Amazon’s device seems to have been designed with buying stuff on Amazon as its primary functionality, Google’s Home — backed by Google’s enormously more capable corpus of information, accurate speech recognition, and AI capabilities — stands to quickly evolve to far outpace Amazon’s offering along all vectors.

This holds true even if we leave aside the six-month free subscription to Google’s excellent ad-free “YouTube Red/Google Play Music” package that Google included with my Home shipment here in the USA. Google knows that once you’ve tasted the ability to play essentially any music and any YouTube video at any time just by speaking to the air, you’ll have a difficult time living without it. I’ve had Home for a week and I’m finally listening to great music of all genres again — I know that I’ll be subscribing when my free term runs out.

You can dig around a bit and easily find a multitude of reviews that discuss specifics of what Home does and how you use it, so I’m not going to spend time on that here — other than to note that, like much advanced technology that is simple to operate, Home’s devilishly complex hardware and software design won’t be suspected or understood by most users, nor is there typically any need for it to be.

But what I’d like to ponder here is why this kind of technology is so revolutionary and why it will change our world.

Throughout human history, pretty much any time you wanted information, you had to physically go to it in one way or another. Dig out the scroll. Locate the book. Sit down at the computer. Grab the smartphone.

The Google Home ecosystem is a sea change. It’s fundamentally different in a way that is much more of a giant leap than the incremental steps we usually experience with technology.

Because for the first time in most of our experiences, rather than having to go to the information, the information is all around us, in a remarkably ambient kind of way.

Whether you’re sitting at a desk at noon or in bed sleepless in the middle of the night, you have but to verbally express your query or command, and the answers, the results, are immediately rendered back to you. (Actually, you first speak the “hotword” — currently either “Hey Google” or “OK Google” — followed by your command or query. Home listens locally for the hotword and only sends your following utterance up to Google for analysis when the hotword triggers — which is also indicated by lights on the Home unit itself. There’s also a switch on the back of the device that will disable the microphone completely.)
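Schematically, that local “hotword gating” design might be sketched as follows. Everything here — the string stand-ins for audio, the cloud call, the function names — is a placeholder for illustration, not any real Google Home interface:

    HOTWORDS = ("hey google", "ok google")

    def detects_hotword(local_transcript: str) -> bool:
        """On-device check; nothing has left the device at this point."""
        return local_transcript.strip().lower() in HOTWORDS

    def handle_frame(local_transcript, following_utterance, send_to_cloud):
        """Only the utterance *after* a detected hotword is sent upstream."""
        if detects_hotword(local_transcript):
            return send_to_cloud(following_utterance)
        return None  # everything else is discarded locally

    # Toy usage:
    if __name__ == "__main__":
        reply = handle_frame("OK Google", "play some jazz",
                             send_to_cloud=lambda u: f"[cloud handles: {u}]")
        print(reply)  # -> [cloud handles: play some jazz]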

It’s difficult to really express how different this is from every other technology-based information experience. In a matter of hours of usage, one quickly begins to think of Home as a kind of friendly ethereal entity at your command, utterly passive until invoked. It becomes very natural to use — the rapid speed of adaptation to using Home is perhaps not so remarkable when you consider that speech is the human animal’s primary evolved mode of communications. Speech works with other humans, to some extent with our pets and other animals — and it definitely works with Google Home.

Most of the kinds of commands and queries that you can give to Home can also be given to your smartphone running Google’s services — in fact they both basically access the same underlying “Google Assistant” systems.

But when (for example) information and music are available at any time, at the spur of the moment, for any need or whim — just by speaking wherever you happen to be in a room and no matter the time of day — it’s really an utterly different emotional effect.

And it’s an experience that can easily make one realize that the promised 21st century really has now arrived, even if we still don’t have the flying cars.

The sense of science fiction come to life is palpable.

The Google teams who created this tech have made no secret of the fact that the computers of “Star Trek” have been one of their key inspirations.

There are various even earlier sci-fi examples as well, such as the so-called “City Fathers” computers in James Blish’s “Cities in Flight” novels.

It’s obvious how Google Home technology can assist the blind, persons with other visual impairments, and a wide variety of individuals with mobility restrictions.

Home’s utility in the face of simple aging (and let’s face it, we’re all either aging or dead) is also immense. As I noted back in As We Age, Smartphones Don’t Make Us Stupid — They’re Our Saviors, portable information aids can be of great value as we get older.

But Home’s “always available” nature takes this to an entirely new and higher level.

The time will come when new homes will be built with such systems designed directly into their walls, and when people may feel a bit naked in locations where such capabilities are not available. And in fact, in the future this may be the only way that we’ll be able to cope with the flood of new and often complex information that is becoming ever more present in our daily lives.

Perhaps most telling of all is the fact that these systems — as highly capable as they are right now — are only at the bare beginnings of their evolution, an evolution that will reshape the very nature of the relationship between mankind and access to information.

If you’re interested in learning more about all this, you’re invited to join my related Google+ Community which is covering a wide range of associated topics.

Indeed — we really are living in the 21st century!

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!