“Google Needs an Ombudsman” Posts from 2009 — Still Relevant Today

Originally posted February 27 and 28, 2009:
Google’s “Failure to Communicate” vs. User Support
and
Google Ombudsman (Part II)

Greetings. There’s been a lot of buzz around the Net recently about Google Gmail outages, and this has brought back to the surface a longstanding concern about the public’s ability (or lack thereof) to communicate effectively with Google itself about problems and issues with Google services.

I’ll note right here that Google usually does provide a high level of customer support for users of their paid services. And I would assert that there’s nothing wrong with Google providing differing support levels to paying customers vs. users of their many free services.

But without a doubt, far and away, the biggest Google-related issue that people bring to me is a perceived inability to effectively communicate with Google when they have problems with free Google services — which people do now depend on in many ways, of course. These problems can range from minor to quite serious, sometimes with significant ongoing impacts, and the usual complaint is that they get no response from submissions to reporting forms or e-mailed concerns.

On numerous occasions, when people bring particular Google problems to my attention, I have passed along (when I deemed it appropriate) some of these specific problems to my own contacts at Google, and they’ve always been dealt with promptly from that point forward. But this procedure can’t help everyone with such Google-related issues, of course.

I have long advocated (both privately to Google and publicly) that Google establish some sort of public Ombudsman (likely a relatively small team) devoted specifically to helping interface with the public regarding user problems — a role that requires a skillful combination of technical ability, public relations, and “triage” skills. Most large firms that interact continually with the public have teams of this sort in one form or another, often under the label “Ombudsman” (or sometimes “Office of the President”).

The unofficial response I’ve gotten from Google regarding this concept has been an expression of understanding but a definite concern about how such an effort would scale given Google’s user base.

I would never claim that doing this properly is a trivial task — far from it. But given both the horizontal and vertical scope of Google services, and the extent to which vast numbers of persons now depend on these services in their everyday personal and business lives, I would again urge Google to consider moving forward along these lines.

–Lauren–

 – – –

Greetings. In Google’s “Failure to Communicate” vs. User Support, I renewed my long-standing call for an Ombudsman “team” or equivalent communications mechanism for Google.

Subsequent reactions suggest that some readers may not be fully familiar with the Ombudsman concept, at least in the way that I use the term.

An Ombudsman is not the same thing as “customer support” per se. I am not advocating a vast new Google customer service apparatus for users of their free services. Ombudsmen (Ombudswomen? Let’s skip the politically correct linguistics for now …) aren’t who you go to when search results are slow or you can’t log in to Gmail for two hours. These sorts of purely technical issues are, the vast majority of the time, suitable for handling through existing online reporting forms and the like. (I may have inadvertently caused some confusion on this point by introducing my previous piece with a mention of Gmail problems — but that was only meant in the sense that those problems triggered broader discussions, not as a specific example of an issue appropriate for escalation to an Ombudsman.)

But there’s a whole different class of largely non-technical (or more accurately, mixed-modality) issues where Google users appear to routinely feel frustrated and powerless in the face of what they consider to be very disturbing situations.

Many of these relate to perceived defamations, demeaning falsehoods, systemic attacks, and other similar concerns that some persons feel are present in various Google service data (search results, Google Groups postings, Google-hosted blog postings, YouTube, and so on).

By the time some of these people write to me, they’re apparently in tears over the situations, wondering if they should spend their paltry savings on lawyers, and generally very distraught. Their biggest immediate complaints? They don’t know who to contact at Google, or their attempts at contact via online forms and e-mail have yielded nothing but automatic replies (if that).

And herein resides the crux of the matter. I am a very public advocate of open information, and a strong opponent of censorship. I won’t litter this posting with all of the relevant links. I have however expressed concerns about the tendency of false information to reside forever in search results without mechanisms for counterbalancing arguments to be seen. In 2007 I discussed this in Search Engine Dispute Notifications: Request For Comments and subsequent postings. This is an exceedingly complex topic, with no simple solutions.

In general, my experience has been that many or most of the concerns that people bring forth in these regards are, with all aspects of the situation considered fairly, not necessarily suitable for the kinds of relief that the persons involved are seeking. That is, the level of harm claimed often seems insufficient when weighed against free speech and the associated rights of other parties.

However, there are dramatic, and not terribly infrequent, exceptions that appear significantly egregious and serious. And when these folks can’t get a substantive reply from Google (and can’t afford a lawyer to go after the parties who actually posted or otherwise control the information that Google is indexing or hosting), these aggrieved persons tend to be up you-know-what creek.

If you have a DMCA concern, Google will normally react to it promptly. But when the DMCA is not involved, trying to get a real response from Google about the sorts of potentially serious concerns discussed above — unless you have contacts that most people don’t have — can often seem impossible.

Google generally takes the position — a position that I basically support — that since they don’t create most content, the responsibility for the content is with the actual creator, the hosting Web sites, and so on. But Google makes their living by providing global access to those materials, and cannot be reasonably viewed as being wholly separated from associated impacts and concerns.

At the very least, even if requests for deletions, alterations, or other relief are unconvincing or rejected for any number of quite valid reasons, the persons who bring forth these concerns should not be effectively ignored. They deserve to at least get a substantive response, some sort of hearing, more than a form-letter automated reply about why their particular plea is being rejected. This principle remains true irrespective of the ultimate merits or disposition of the particular case.

And this is where the role of a Google Ombudsman could be so important — not only in terms of appropriately responding to these sorts of cases, but also to help head off the possibility of blowback via draconian regulatory or legislative actions that might cut deeply into Google’s (and their competitors’) business models — a nightmare scenario that I for one don’t want to see occur.

But I do fear that unless Google moves assertively toward providing better communications channels with their users for significant issues — beyond form responses and postings in the official Google blogs — there are forces that would just love to see Google seriously damaged, and they will find ways to leverage these sorts of issues toward that end. Evidence of this sort of positioning by some well-heeled Google haters is already visible.

Ombudsmen are all about communication. For any large firm that is constantly dealing with the public, especially one operating on the scope of Google, it’s almost impossible to have too much communication when it comes to important problems and related issues. On the other hand, too little communication, or the sense that concerned persons are being ignored, can be a penny-wise but pound-foolish course, with negative consequences that could have been — even if not easily avoided — at least avoided with a degree of serious effort.

–Lauren–

The YouTube Racists Fight Back!

Somewhat earlier today I received one of those “Hey Lauren, you gotta look at this on YouTube!” emails. Prior to my recently writing What Google Needs to Do About Hate Speech, such a message was as likely to point at a particularly cute cat video or a lost episode of some 60s television series as anything else. Since that posting, however, these alerts are far more likely to direct me toward much more controversial materials.

Such was the case today. Because the YouTube racists, antisemites, and their various assorted lowlife minions are at war. They’re at war with YouTube, they’re at war with the Wall Street Journal. They’re ranting and raving and chalking up view counts on their YouTube live streams and uploads today that ordinary YouTube users would be thankful to accumulate over a number of years.

After spending some time this afternoon lifting up rotting logs to peer at the maggots infesting the seamy side of YouTube where these folks reside, here’s what’s apparently going on, as best as I can understand it right now.

The sordid gang of misfits and losers who create and support the worst of YouTube content — everybody from vile PewDiePie supporters to hardcore Nazis — are angry. They’re angry that anyone would dare to threaten the YouTube monetization streams that help support their continuing rivers of hate speech. Any moves by Google or outside entities that appear to disrupt their income stream, they characterize as efforts to “destroy the YouTube platform.”

Today’s ongoing tirade appears to have been triggered by claims that the Wall Street Journal “faked” the juxtaposition of specific major brand ads with racist videos, as part of the ongoing controversies regarding YouTube advertiser controls. It seems that the creators of these videos are claiming that the videos in question were not being monetized during the period under discussion, or otherwise couldn’t have appeared in the manner claimed by the WSJ.

This gets into a maze of twisty little passages very quickly, because when you start digging down into these ranting videos today, you quickly see how they are intertwined with gamer subcultures, right-wing “fake news” claims, pro-Trump propagandists, and other dark cults — as if the outright racism and antisemitism weren’t enough.

And this is where the true irony breaks through like a flashing neon sign. These sickos aren’t at all apologetic for their hate speech videos on YouTube; they’re simply upset when Google isn’t helping to fund them.

I’ve been very clear about this. I strongly feel that these videos should not be on YouTube at all, whether monetized or not.

For example, one of the videos being discussed today in this context involves the song “Alabama Nig—.” If you fill in the dashes and search for the result on YouTube, you’ll get many thousands of hits, all of them racist, none of which should be on YouTube in the first place.

Which all suggests that the arguments about major company ads on YouTube hate speech videos, and more broadly the issues of YouTube hate speech monetization, are indeed really just digging around the edges of the problem.

Hate speech has no place on YouTube. Period. Google’s Terms of Service for YouTube explicitly forbid racial, religious, and other forms of this garbage.

The sooner that Google seriously enforces their own YouTube terms, the sooner that we can start cleaning out this hateful rot. We’ve permitted this disease to grow for years on the Internet thanks to our “anything goes” attitude, contributing to a horrific rise in hate throughout our country, reaching all the way to the current occupant of the Oval Office and his cronies.

This must be the beginning of the end for hate speech on YouTube.

–Lauren–

My Brief Radio Discussion of the GOP’s Horrendous Internet Privacy Invasion Law

An important issue that I’ve frequently discussed here and in other venues is the manner in which Internet and other media “filter bubbles” tend to cause us to only expose ourselves to information that we already agree with — whether it’s accurate or not.

That’s one reason why I value my continuing frequent invitations to discuss technology and tech policy topics on the extremely popular late night “Coast to Coast AM” national radio show. Talk radio audiences tend to be very conservative, and the willingness of the show to repeatedly share their air with someone like me (who doesn’t fit the typical talk show mold and who can offer a contrasting point of view) is both notable and praiseworthy.

George Noory is in my opinion the best host on radio — he makes every interview a pleasure for his guests. And while the show has been known primarily over the years for discussions of — shall we say — “speculative” topics, it also has become an important venue for serious scientists and technologists to discuss issues of importance and interest (see: Coast to Coast AM Is No Wack Job).

Near the top of the show last night I chatted with George for a few minutes about the horribly privacy-invasive new GOP legislation that permits ISPs to sell customers’ private information (including web browsing history and much more) without prior consent. This morning I’ve been receiving requests for copies of that interview, so (with the permission of the show for posting short excerpts) it’s provided below.

Here is an audio clip of the interview for download. It’s under four minutes long. Or you can play it here:

[Embedded audio player]

As I told George, I’m angry about this incredibly privacy-invasive legislation. If you are too, I urge you to inform the GOP politicos who pushed this nightmare law — to borrow a phrase from the 1976 film “Network” — that you’re mad as hell and you’re not going to take this anymore!

–Lauren–

Google+ and the Notifications Meltdown

I’ve been getting emails recently from correspondents complaining that I have not responded to their comments/postings on Google+. I’ve just figured out why.

The new (Google unified) Google+ desktop notification panel is losing G+ notifications left and right. For a while I thought that all of the extra notifications I was seeing when I checked on mobile occasionally were dupes — but it turns out that most of them are notifications that were never presented to me on desktop, in vast numbers.

Right now I can find (on the essentially unusable G+ desktop standalone notifications page, which requires manually clicking to a new page for each post!) about 30 recent G+ notifications that were never presented to me in the desktop notification panel. I’m not even sure how to deal with them now in a practical manner.

This is unacceptable — you have one job to do, notifications panel, and that’s to accurately show me my damned notifications!

Also, a high percentage of the time when I click on actions in the new desktop notification panel pop-up boxes (e.g. to reply), the panel blows away and I’m thrown to a new G+ page tab.

Does anyone at G bother to stress test this stuff any more in the context of users with many followers (I have nearly 400K) who get lots of notifications? Apparently not.

Another new Google user interface triumph of form over function!

–Lauren–

How YouTube’s User Interface Helps Perpetuate Hate Speech

UPDATE (6 May 2017): The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

Computer User Interface (UI) design is both an art and a science, and can have effects on users that go far beyond the interfaces themselves. As I’ve discussed previously, e.g. in The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes, user interfaces can unintentionally act as a form of discrimination against older users or other users with special needs.

But another user interface question arises in conjunction with the current debate about hate speech on Google’s YouTube (for background, please see What Google Needs to Do About YouTube Hate Speech and How Google’s YouTube Spreads Hate).

Specifically, can user interface design unintentionally help to spread and perpetuate hate speech? The answer may be an extremely disconcerting affirmative.

A key reason why I suspect that this is indeed the case is the large number of YouTube users who have told me that they didn’t even realize that they had the ability to report hate speech to YouTube/Google. And when I’ve suggested that they do so, they often reply that they don’t see any obvious way to make such a report.

Over the years it’s become more and more popular to “hide” various UI elements in menus and/or behind increasingly obscure symbols and icons. And one key problem with this approach is obvious when you think about it: If a user doesn’t even know that an option exists, can we really expect them to play “UI scavenger hunt” in an attempt to find such an option? Even more to the point, what if it’s an option that you really need to see in order to even realize that the possibility exists — for example, of reporting a YouTube hate speech video or channel?

While YouTube suffers from this problem today, that wasn’t always the case. Here’s an old YouTube “watch page” desktop UI from years ago:

An Old YouTube User Interface

Not only is there a flag icon present on the main interface (rather than having the option buried in a “More” menu and/or under generic vertical dots or horizontal lines), but the word “Flag” is even present on the main interface to serve as a direct signal to users that flagging videos is indeed an available option!

On the current YouTube desktop UI, you have to know to go digging under a “More” menu to find a similar “Report” option. And if you didn’t know that a Report option even existed, why would you necessarily go searching around for it in the first place? The only other YouTube page location where a user might consider reporting a hate speech video is through the small generic “Feedback” link at the very bottom of the watch page — and that can be way, way down there if the video has a lot of comments.

To be effective against hate speech, a flagging/reporting option needs to be present in an obvious location on the main UI, where users will see it and know that it exists. If it’s buried or hidden in any manner, vast numbers of users won’t even realize that they have the power to report hate speech videos to Google at all (I’ve discussed the disappointing degree to which Google actually enforces the hate speech prohibitions in their Terms of Service in the posts linked earlier in this text).

You don’t need to be a UI expert to suspect one reason why Google over time has de-emphasized obvious flag/report links on the main interface, instead relegating them to a generic “More” menu. The easier the option is to see, the more people will tend to use it, both appropriately and inappropriately — and really dealing with those abuse reports in a serious manner can be expensive in terms of code and employees.

But that’s no longer an acceptable excuse — if it ever was. Google is losing major advertisers in droves, who are no longer willing to have their ads appear next to hate speech videos that shouldn’t even be monetized, and in many cases shouldn’t even be available on YouTube at all under the existing YouTube/Google Terms of Service.

For the sake of its users and of the company itself, Google must get a handle on this situation as quickly as possible. Making sure that users are actually encouraged to report hate speech and other inappropriate videos, and that Google treats those reports appropriately and with a no-nonsense application of their own Terms of Service, are absolutely paramount.

–Lauren–

What Google Needs to Do About YouTube Hate Speech

In the four days since I wrote How Google’s YouTube Spreads Hate, where I discussed both how much I enjoyed and respected YouTube, and how unacceptable their handling of hate speech has become, a boycott by advertisers of YouTube and Google ad networks has been spreading rapidly, with some of the biggest advertisers on the planet pulling their ads over concerns about being associated with videos containing hate speech, extremist, or related content.

It’s turned into a big news story around the globe, and has certainly gotten Google’s attention.

Google has announced some changes and apparently more are in the pipeline, so far relating mostly to making it easier for advertisers to avoid having their ads appear with those sorts of content.

But let’s be very clear about this. Most of that content, much of which is on long-established YouTube channels sometimes with vast numbers of views, shouldn’t be permitted to monetize at all. And in many cases, shouldn’t be permitted on YouTube at all (by the way, it’s a common ploy for YT uploaders to ask for support via third-party sites as a mechanism to evade YT monetization disablement).

The YouTube page regarding hate speech is utterly explicit:

We encourage free speech and try to defend your right to express unpopular points of view, but we don’t permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

race or ethnic origin
religion
disability
gender
age
veteran status
sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their ethnicity.

Seems pretty clear. But in fact, YouTube is awash with racist, antisemitic, and a vast array of other videos that without question violate these terms, many on established, cross-linked YouTube channels containing nothing but such materials.

How easy is it to stumble into such garbage?

Well, for me here in the USA, the top organic (non-ad) YouTube search result for “blacks” is a video showing a car being wrecked with the title: “How Savage Are Blacks In America & Why Is Everyone Afraid To Discuss It?” — including the description “ban niggaz not guns” — and also featuring a plea to donate to a racist external site.

This video has been on YouTube for over a year and has accumulated over 1.5 million views. Hardly hiding.

While it can certainly be legitimately argued that there are many gray areas when it comes to speech, on YouTube there are seemingly endless lists of videos that are trivially located and clearly racist, antisemitic, or in violation of YouTube hate speech terms in other ways.

And YouTube helps you find even more of them! On the right-hand suggestion panel right now for the video I mentioned above, there’s a whole list of additional racist videos, including titles like: “Why Are So Many Of U Broke, Black, B!tches Begging If They Are So Strong & Independent?” — and much worse.

Google’s proper course is clear. They must strongly enforce their own Terms of Service. It’s not enough to provide control over ads, or even ending those ads entirely. Videos and channels that are in obvious violation of the YT TOS must be removed.

We have crossed the Rubicon in terms of the Internet’s impact on society, and laissez-faire attitudes toward hate speech content are now intolerable. The world is becoming saturated in escalating hate speech and related attacks, and even tacit acceptance of these horrors — whether spread on YouTube or by the Trump White House — must be roundly and soundly condemned.

Google is a great company with great people. Now they need to grasp the nettle and do the right thing.

–Lauren–

How Google’s YouTube Spreads Hate

I am one of YouTube’s biggest fans. Seriously. It’s painful for me to imagine a world now without YouTube, without the ability to both purposely find and serendipitously discover all manner of contemporary and historical video gems. I subscribe to YouTube Red because I want to help support great YT creators (it’s an excellent value, by the way).

YouTube is perhaps the quintessential example of a nexus where virtually the entire gamut of Internet policy issues meet and mix — content creation, copyrights, fair use, government censorship, and a vast number more are in play.

The scale and technology of YouTube are nothing short of staggering, and the work required to keep it all running — in terms of both infrastructure and evolving policies — is immense. When I was consulting to Google several years ago, I saw much of this firsthand, as well as having the opportunity to meet many of the excellent people behind the scenes.

Does YouTube have problems? Of course. It would be impossible for an operation of such scope to exist without problems. What we really care about in the long run is how those problems are dealt with.

There is a continual tension between entities claiming copyrights on material and YouTube uploaders. I’ve discussed this in considerable detail in the past, so I won’t get into it again here, other than to note that it’s very easy for relatively minor claimed violations (whether actually accurate or not) to result in ordinary YouTube users having their YouTube accounts forcibly closed, without effective recourse in many cases. And while YouTube has indeed improved their appeal mechanisms in this regard over time, they still have a long way to go in terms of overall fairness.

But a far more serious problem area with YouTube has been in the news repeatedly lately — the extent to which hate speech has permeated the YouTube ecosystem, even though hate speech on YouTube is explicitly banned by Google in the terms of use on this YouTube help page.

Before proceeding, let’s set down some hopefully useful parameters to help explain what I’m talking about here.

One issue that we need to clarify at the outset: The First Amendment to the United States Constitution does not require that YouTube or any other business provide a platform for the dissemination, monetization, or spread of any particular form of speech. The First Amendment applies only to governmental restrictions on speech, which is what the term censorship actually means. This is why concepts such as the horrific “Right To Be Forgotten” are utterly unacceptable, as they impose governmentally enforced third-party censorship onto search results.

It’s also often suggested that it’s impossible to really identify hate speech because — some observers argue — everyone’s idea of hate speech is different. Yet from the standpoint of civilized society, we can see that this argument is largely a subterfuge.

For while there are indeed gray areas of speech where even attempting to assign such a label would be foolhardy, there are also areas of discourse where not assigning the hate speech label would require inane and utterly unjustifiable contortions of reality.

Videos from terrorist groups explicitly promoting violence are an obvious example. These are universally viewed as hate speech by all civilized people, and to their credit the major platforms like YouTube, Facebook, et al. have been increasingly leveraging advanced technology to block them, even at the enormous “whack-a-mole” scales at which they’re uploaded.

But now we move on to other varieties of hate speech that have contaminated YouTube and other platforms. And while they’re not usually as explicitly violent as terrorist videos, they’re likely even more destructive to society in the long run, with their pervasive nature now even penetrating to the depths of the White House.

Before the rise of video and social media platforms on the Internet, we all knew that vile racists and antisemites existed, but without effective means to organize they tended to be restricted to their caves in Idaho or their Klan clubhouses in the Deep South. With only mimeograph and copy machines available to perpetuate their postal-distributed raving-infested newsletters, their influence was mercifully limited.

The Internet changed all that, by creating wholly new communications channels that permitted these depraved personalities to coordinate and disseminate in ways that are orders of magnitude more effective, and so vastly increasing the dangers that they represent to decent human beings.

Books could be written about the entire scope of this contamination, but this post is about YouTube’s role, so let’s return to that now.

In recent weeks the global media spotlight has repeatedly shined on Google’s direct financial involvement with established hate speech channels on YouTube.

First came the PewDiePie controversy. As YouTube’s most-subscribed star, his continuing dabbling in antisemitic videos — which he insists are just “jokes” even as his Hitler-worship continues — exposed YouTube’s intertwining with such behavior to an extent that Google found itself in a significant public relations mess. This forced Google to take some limited enforcement actions against his YouTube channel. Yet the channel is still up on YouTube. And still monetizing.

Google is in something of a bind here. Having created this jerk, who now represents a significant income stream to himself and the company, it would be difficult to publicly admit that his style of hate is still exceedingly dangerous, as it helps to normalize such sickening concepts. This is true even if we accept for the sake of the argument that he actually means it in a purely “joking” way (I don’t personally believe that this is actually the case, however). For historical precedent, one need only look at how the antisemitic “jokes” in 1930s Germany became a springboard to global horror.

But let’s face it, Google really doesn’t want to give up that income stream by completely demonetizing PewDiePie or removing his channels completely, nor do they want to trigger his army of obscene and juvenile moronic trolls and a possible backlash against YouTube or Google more broadly.

Yet from an ethical standpoint these are precisely the sorts of actions that Google should be taking, since — as I mentioned above — “ordinary” YouTube users routinely can lose their monetization privileges — or be thrown off of YouTube completely — for even relatively minor accused violations of the YouTube or Google Terms of Service.

There’s worse of course. If we term PewDiePie’s trash as relatively “soft” hate speech, we then must look to the even more serious hate speech that also consumes significant portions of YouTube.

I’m not going to give any of these fiends any “link juice” by naming them here. But it’s trivial to find nearly limitless arrays of horrible hate speech videos on YouTube under the names of both major and minor figures in the historical and contemporary racist/antisemitic/alt-right movements.

A truly disturbing aspect is that once you find your way into this depraved area of YouTube, you discover that many of these videos are fully monetized, meaning that Google is actually helping to fund this evil — and is profiting from it.

Perhaps equally awful, if you hit one of these videos’ watch pages, YouTube’s highly capable suggestion engine will offer you a continuous recommended stream of similar hate videos over on the right-hand side of the page — even helpfully surfacing additional hate speech channels for your enjoyment. I assume that if you watched enough of these, the suggestion panels on the YouTube home page would also feature these videos for you.

Google’s involvement with such YouTube channels became significant news over the last couple of weeks, as major entities in the United Kingdom angrily pulled their advertising after finding it featured on the channels of these depraved hatemongers. Google quickly announced that they’d provide advertisers with more controls to help avoid this in the future, but this implicitly suggests that Google doesn’t plan actions against the channels themselves, and Google’s “we don’t always get it right” excuse is wearing very, very thin given the seriousness of the situation.

Even if we completely inappropriately consider such hate speech to be under the umbrella of acceptable speech, what we see on YouTube today in this context is not merely providing a “simple” platform for hate speech — it’s providing financial resources for hate speech organizations, and directly helping to spread their messages of hate.

I explicitly assume that this has not been Google’s intention per se. Google has tried to take a “hands off” attitude toward “judging” YouTube videos as much as possible. But the massive rise in hate-based speech and attacks around the world, reaching (at least tacitly) to the highest levels of the U.S. federal government under the Trump administration, is a clear and decisive signal that this is no longer a viable course for an ethical and great company like Google.

It’s time for Google to extricate YouTube from its role as a partner in hate. That this won’t come without significant pain and costs is a given.

But it’s absolutely the correct path for Google to take — and we expect no less from Google.

–Lauren–

Google and Older Users

Alphabet/Google needs at least one employee dedicated to vetting their products on a continuing basis for usability by older users — an important and rapidly growing demographic of users who are increasingly dependent on Google services in their daily lives.

I’m not talking here about accessibility in general, I’m talking about someone whose job is specifically to make sure that Google’s services don’t leave older users behind due to user interface and/or other associated issues. Otherwise, Google is essentially behaving in a discriminatory manner, and the last thing that I or they should want to see is the government stepping in (via the ADA or other routes) to mandate changes.

–Lauren–

“Google Experiences” Submission Page Now Available

Recently in Please Tell Me Your Google Experiences For “Google 2017” Report, I solicited experiences with Google — positive, negative, neutral, or whatever — for my upcoming “Google 2017” white paper report.

The response level has been very high and has led me to create a shared, public Google Doc to help organize such submissions.

Please visit the Google Experiences Suggestions Page to access that document, through which you may submit suggested text and/or other information. You do not need to be logged into a Google account to do this.

Thanks again very much for your participation in this effort!

–Lauren–

Simple Solutions to “Smart TVs” as CIA Spies

I’m being bombarded with queries about Samsung “Smart TVs” being used as bugs by the CIA, as discussed in the new WikiLeaks data dump.

I’m not in a position to write up anything lengthy about this right now, but there is a simple solution to the entire “smart TV as bug” category of concerns — don’t buy those TVs, and if you have one, don’t connect it to the Internet directly.

Don’t associate it with your Wi-Fi network — don’t plug it into your Ethernet.

Buy a Chromecast or Roku or similar dongle that will provide your Internet programming connectivity via HDMI to that television — these dongles don’t include microphones and are dirt cheap compared to the price of the TV itself.

In general, so-called smart TVs are not a good buy even when they’re not acting as bugs.

Now, seriously paranoid readers might ask “Well, what if the spooks are subverting both my smart TV and my external dongle? Couldn’t they somehow route the audio from the TV microphone back out to the Internet through hacked firmware in the dongles?”

The answer is theoretically yes, but it’s a significantly tougher lift for a number of technical reasons. The solution though even for that scenario is simple — kill the power to the dongle when you’re not using it.

Unplug it from the TV USB jack if you’re powering it that way (I mean, if you’re paranoid, you might consider the possibility that the hacked TV firmware is still supplying power to the dongle even when it’s supposed to be off, and that the dongle has been hacked to not light its power LED in that situation, eh?)

But if you’re powering the dongle from a wall adapter, and you unplug that, you’ve pretty much ended that ballgame.

–Lauren–

Google’s New “YouTube TV” Is a Gift to Donald Trump

As if it wasn’t bad enough that so many high-ranking Google search results were hijacked by criminals monetizing false news stories toward getting Donald Trump elected, it appears that (for the moment at least), Google’s new “YouTube TV” offering is a gift package for serial lying sociopath Donald Trump and his vile supporters.

YouTube TV is Google’s newly announced attempt to push cable “cord cutting” — that is, encouraging people to drop their conventional cable or satellite TV subscriptions, and switch to viewing Internet-delivered streams.

The YouTube TV offering seems fairly conventional at first glance, and Google has tossed in useful stuff like multiple user accounts and free time-shifting/DVR capabilities.

But a glaring omission from their channel lineup makes YouTube TV a massive prize package for Donald Trump and his fascist agenda — FOX “News” is included in the lineup, but CNN is nowhere to be found. Go ahead, try and find it. I sure can’t.

It appears that Google is hoping that viewers will accept MSNBC as a substitute for CNN — but that’s ridiculous in the extreme. Not including CNN is giving FOX “News” an enormous boost, and those right-wing News Corp. bastards have already done enough damage to this country without Google giving FOX and Trump this additional big wet kiss squarely on their rotting lips.

No doubt Google will say that they couldn’t reach a licensing agreement with CNN/Time Warner and golly gee we hope to add them onto the lineup soon.

To hell with that. How long will it be before FOX and Trump are ranting claims that Google chose FOX “News” because Google doesn’t trust CNN? Launching this service including FOX “News” but not including CNN is the height of irresponsibility, especially in today’s political environment.

Shame on you Google. Shame on you.

–Lauren–

Meet the Guys: The Jerks of Computer Science

UPDATE (August 9, 2017):  Here’s My Own Damned “Google Manifesto”

UPDATE (August 7, 2017):  Audio from My Radio Discussion About the Leaked Google “Diversity” Manifesto Controversy

– – –

Originally posted July 16, 2013.

A perennial question in Computer Science has nothing directly to do with code or algorithms, and everything to do with people. To wit: Why don’t more women choose CS as a career path?

As a guy who has spent his entire professional career in CS and related policy arenas, this skewing has been obvious to me pretty much since day one.

It’s not restricted to educational institutions and the workplace, it’s also on display at trade shows, technical conferences, and even on social networking sites of all stripes.

And despite the efforts of major firms to draw more women into this field, efforts that have yielded some relatively limited successes, the overall problem still persists.

All sorts of theories have been postulated for why women tend to avoid CS and the related computer technology fields, ranging from “different nurturing patterns” to inept school guidance counselors.

But I suspect there’s an even more basic reason, one that women tend to detect quickly and decisively.

The men of computer science and the computer industry are misogynous jerks.

Not all of them of course. Likely not even the majority.

But enough to thoroughly poison the well.

This goes far beyond guys crudely hitting on women at conferences, or the continuing presence of humiliating “booth babes” at trade shows.

The depth to which this pervades has been especially on painful display on the Web over the last couple of days, relating to a very important operating system technical discussion list.

Since I don’t want this to be about individuals, we’ll call the person at the focus of this list by the label “Q” — after the supercilious, intelligent, arrogant, omnipotent character from the “Star Trek” universe. Not evil per se — in fact capable of great constructive work — but most folks who come in contact with him are unwilling to risk the wrath of such a powerful entity. Indeed, an interesting character this Q.

Back here in what we assume is the real world, the current controversy was triggered when a female member of that technical discussion list publicly criticized “Q” and what we’ll politely call his “boorish” statements on the list — causing at least one observer to note that it was the first time they’d seen anyone stand up to Q that way in 20 years. This woman — by the way — is the formal representative to the list in question from an extremely important and major firm whose technology is at the heart of most personal computers in use today.

The particular examples she cited were by no means the most illustrative available — aficionados of the list in question realize she was showing admirable diplomatic tact.

But while reactions to her statements in the associated list thread itself can certainly be described as interesting, many of the reactions that have appeared externally in social media can only be described as vomit inducing.

I can’t even repeat many of them here, but just a sampling I’ve seen and/or directly received:

– “Nobody told her she had to work with Linux, get off the list!”
– “What is she, a slave? She doesn’t have to be there!”
– “Q is a god! He’s done so much good he can say or do anything else he wants, he can walk across your burned corpses!”
– “People should be able to say anything they want any way they want. If you can’t take it, go somewhere else.”
– “Bring her over to my house and I’ll show her what bad behavior is really about!”
– “Somebody is always going to be offended by everything, so there’s no point to even trying to be polite.”
– “She’s just having PMS and snapped!”
– “Hey, it’s not so bad on the list, it’s just good ol’ boys playing South Park! We don’t want political correctness here. Tell her to go – – – – herself, or ask me over and I’ll do it for her!”

And a wide variety of other specifically crude, sexist, and toilet humor remarks of all sorts, plus much worse.

It was getting so bad that I had to shut down comments on two discussion threads last night before going to bed to avoid their turning into rancid cesspools in my absence — and I wasn’t the only one who had to take that action.

One might argue that all this isn’t unique to computer science and the broader computer industry, and you’d be correct. This kind of “boys will be boys” sexism pervades our culture and in fact has driven many women into refusing to even identify as female in social media or discussion lists at all.

But the “it’s not really important, and everybody’s doing it anyway!” excuse is utterly bogus.

While we may not be able to change these attitudes in the culture at large, we can at least take steps to clean up our own house, to try to bring a basic level of civility to our own work in these regards.

But first we need to admit that the status quo is indeed unacceptable, and many in our community’s “good ol’ boys club” are currently refusing even to go that far.

The technical and policy issues we’re dealing with are far too crucial to permit them to be distorted by juvenile, sexist, and loutish behavior that discourages maximum practicable inclusion and participation.

And rather than acting as tacit examples of bullying that help feed even worse abuses, leaders in our technical community should be taking the responsibility to be examples in public — if not of exemplary behavior — at least of basic politeness.

If people want to be jerks in their private lives, that’s up to them. But keep your bad behavior and sexist crap out of our work.

And that goes for you, me, Q, and everyone else as well.

–Lauren–

Please Tell Me Your Google Experiences For “Google 2017” Report

Executive Summary: Please tell me your Google Experiences for my upcoming “Google 2017” report, via email to:

google@vortex.com

I believe that it’s obvious to pretty much everyone that we’ve now entered a new era of major Internet-related companies directly and indirectly impacting political processes and other aspects of our lives in ways that — frankly, to say the least — were not widely anticipated by most observers. So understanding where things stand these days with these firms is paramount, in terms of their own operations, and their impacts on their users and the world in general.

For many years the most common category of questions and comments that I receive relate one way or another to Google (while I have consulted to Google in the past, I am not currently doing so). So I’ve now begun work on what I’m tentatively calling “Google 2017” — a report (or “white paper” if you prefer) discussing the perceived overall state of Google (and its parent corporation Alphabet, Inc.) in relation to the sorts of issues that I noted above and other relevant related topics.

As part of this effort, I’d very much appreciate your emailing me your own noteworthy experiences with Google (and Alphabet). Good — bad — exemplary — abysmal — confused — resolved — pending — fantastic — or otherwise rising to the level that you feel could usefully contribute to a better understanding of Google and Alphabet overall.

Whether involving specific Google services (including everything from Search to Gmail to YouTube and beyond), accounts, privacy, security, interactions, legal or copyright issues — essentially anything positive, negative, or neutral that you are free to impart to me, that you believe might be of interest.

I would like to keep this report focused on relatively recent experiences and observations, so events that took place years ago that aren’t any longer particularly relevant are frankly of lesser use to me right now.

Your identity will be considered confidential, and any information that you send to me will also be considered confidential in the details — unless you specifically indicate otherwise. That is, I will use your information toward the effort’s reported aggregate analysis, and any of your specific examples or other data that you provide — that I might include in the report as illustrative examples — will be carefully anonymized, unless you give me permission to do otherwise. If you don’t want me to use your examples at all even anonymized, please let me know and that will be respected of course.

Please send anything meeting the criteria above that you feel comfortable sharing with me to:

google@vortex.com

I’ll keep you informed of my progress. Thanks very much!

Be seeing you.

–Lauren–

Don’t (For Now) Use Google’s New “Perspective” Comment Filtering Tool

I must be brief today, so I’ll keep this relatively short and get into details in another post. Google has announced (with considerable fanfare) public access to their new “Perspective” comment filtering system API, which uses Google’s machine learning/AI system to determine which comments on a site shouldn’t be displayed due to perceived high spam/toxicity scores. It’s a fascinating effort. And if you run a website that supports comments, I urge you not to put this Google service into production, at least for now.

The bottom line is that I view Google’s spam detection systems as currently too prone to false positives — thereby enabling a form of algorithm-driven “censorship” (for lack of a better word in this specific context) — especially by “lazy” sites that might accept Google’s determinations of comment scoring as gospel.

In fact, Google’s track record in this context remains problematic.

You can see this even from the examples that Google provides, where it’s obvious that any given human might easily disagree with Google’s machine-driven comment ranking decisions.

And as someone who deals with significant numbers of comments filtered by Google every day — I have nearly 400K followers on Google+ — I can tell you with considerable confidence that the problem isn’t “spam” comments that are being missed, it’s completely legitimate non-spam, nontoxic comments that are inappropriately marked as spam and hidden by Google.

Every day, I plow through lots of these (Google makes them relatively difficult to find and see), so that I can “resurface” completely reasonable comments from good people who have been marked as toxic spammers by Google spam detection false positives.

This is a bad situation, and widespread use of “Perspective” at this stage of its development would likely spread this problem around the world.

For in fact, much worse than letting a spam or toxic comment through is the AI-based muzzling of a completely innocent comment and commenter, falsely condemned by the machine where a human would not have erred.

“Vanishing” of innocent, legit comments through overaggressive algorithms can lead to misunderstandings, confusion, and a general lack of trust in AI systems — and this kind of trust failure can be dangerous for users and the industry, since AI’s potential for greatness toward improving our world is indeed very real.

I’ll have more to say about this later, but for now, while you should of course feel free to experiment with the Google Perspective API, I urge you not to deploy it to any running production systems at this time.
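
If you do want to kick the tires, here is a rough sketch of what a test query might look like. This is an illustration only, not production code: it assumes the v1alpha1 “comments:analyze” endpoint and the TOXICITY attribute as Google documents them at this writing, plus a placeholder API key that you would replace with your own. Check the current Perspective documentation before trusting any of it.

    # Rough experimental sketch only -- assumes the v1alpha1 Perspective
    # ("Comment Analyzer") endpoint and a valid API key of your own.
    import requests

    API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        # Returns Perspective's summary TOXICITY probability (0.0 to 1.0).
        body = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        resp = requests.post(URL, json=body)
        resp.raise_for_status()
        scores = resp.json()
        return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # The point of my concern: a "lazy" site might simply hide anything
    # scoring above some fixed threshold, vanishing false positives right
    # along with the genuine spam.
    if __name__ == "__main__":
        print(toxicity_score("You might want to double-check those figures."))

The risk isn’t in the query itself; it’s in what a site chooses to do with that single number afterwards.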

Be seeing you.

–Lauren–

Does Google Hate Old People?

Originally posted February 11, 2016. Reposted today after a weekend of struggling to support a variety of older and not-so-old users via Chrome Remote Desktop.

– – –

No. Google doesn’t hate old people. I know Google well enough to be pretty damned sure about that.

Is Google “indifferent” to old people? Does Google simply not appreciate, or somehow devalue, the needs of older users?

Those are much tougher calls.

I’ve written a lot in the past about accessibility and user interfaces. And today I’m feeling pretty frustrated about these topics. So if some sort of noxious green fluid starts to bubble out from your screen, I apologize in advance.

What is old, anyway? Or we can use the currently more popular term “elderly” if you prefer — six of one, half a dozen of the other, really.

There are a bunch of references to “not wanting to get old” in the lyrics of famous rock stars who are now themselves of rather advanced ages. And we hear all the time that “50 is the new 30” or “70 is the new 50” or … whatever.

The bottom line is that we either age or die.

And the popular view of “elderly” people sitting around staring at the walls — and so rather easily ignored — is increasingly a false one. More and more we find active users of computers and Internet services well into their 80s and 90s. In email and social media, many of them are clearly far more intelligent and coherent than large swaths of users a third their age.

That’s not to say these older users don’t have issues to deal with that younger persons don’t. Vision and motor skill problems are common. So is the specter of memory loss (that actually begins by the time we reach age 20, then increases from that point onward for most of us).

Yet an irony is that computers and Internet services can serve as aids in all these areas. I’ve written in the past of mobile phones being saviors as we age, for example by providing an instantly available form of extended memory.

But we also are forced to acknowledge that most Internet services still only serve older persons’ needs seemingly begrudgingly, failing to fully comprehend how changing demographics are pushing an ever larger proportion of their total users into that category — both here in the U.S. and in many other countries.

So it’s painful to see Google dropping the ball in some of these areas (and to be clear, while I have the most experience with the Google aspects of these problems, these are actually industry-wide issues, by no means restricted to Google).

This is difficult to put succinctly. Over time these concerns have intertwined and combined in ways increasingly cumbersome to tease apart with precision. But if you’ve ever tried to provide computer/Internet technical support to an older friend or relative, you’ll probably recognize this picture pretty quickly.

I’m no spring chicken myself. But I remotely provide tech support to a number of persons significantly older — some in their 80s, and more than one well into their 90s.

And while I bitch about poor font contrast and wasted screen real estate, the technical problems of those older users are typically of a far more complex nature.

They have even more trouble with those fonts. They have motor skill issues making the use of common user interfaces difficult or in some cases impossible. Desktop interfaces that seem to be an afterthought of popular “mobile first” interface designs can be especially cumbersome for them. They can forget their passwords and be unable to follow recovery procedures successfully, often creating enormous frustration and even more complications when they try to solve the problems by themselves. The level of technical lingo thrown at them in many such instances — that services seem to assume everyone just knows — only frustrates them more. And so on.

But access to the Net is absolutely crucial for so many of these older users. It’s not just accessing financial and utility sites that pretty much everyone now depends upon, it’s staying active and in touch with friends and relatives and others, especially if they’re not physically nearby and their own mobility is limited.

Keeping that connectivity going for these users can involve a number of compromises that we can all agree are not in keeping with ideal or “pure” security practices, but are realistic necessities in some cases nonetheless.

So it’s often a fact of life that elderly users will use their “trusted support” person as the custodian of their recovery and two-factor addresses, and of their primary login credentials as well.

And to those readers who scream, “No! You must never, ever share your login credentials with anyone!” — I wish you luck supporting a 93-year-old user across the country without those credentials. Perhaps you’re a god with such skills. I’m not.

Because I’ve written about this kind of stuff so frequently, you may by now be suspecting that a particular incident has fired me off today.

You’d be correct. I’ve been arguing publicly with a Google program manager and some others on a Chrome bug thread, regarding the lack of persistent connection capability for Chromebooks and Chromeboxes in the otherwise excellent Chrome Remote Desktop system — a feature that the Windows version of CRD has long possessed.

Painfully, from my perspective the conversation has rapidly degenerated into my arguing against the notion that “it’s better to flush some users down the toilet than violate principles of security purity.”

I prefer to assume that the arrogance suggested by the “security purity” view is one based on ignorance and lack of experience with users in need, rather than any inherent hatred of the elderly.

In fact, getting back to the title of this posting, I’m sure hatred isn’t in play.

But of course whether it’s hatred or ignorance — or something else entirely — doesn’t help these users.

The Chrome OS situation is particularly ironic for me, since these are older users whom I specifically urged to move to Chrome when their Windows systems were failing, while assuring them that Chrome would be a more convenient and stable experience for them.

Unfortunately, these apparently intentional limitations in the Chrome version of CRD — vis-a-vis the Windows version — have been a source of unending frustration for these users, as they often struggle to find, enable, and execute the Chrome version manually every time they need help from me, and then are understandably upset that they have to sit there and refresh the connection manually every 10 minutes to keep it going. They keep asking me why I told them to leave Windows and why I can’t fix these access problems that are so confusing to them. It’s personally embarrassing to me.

Here’s arguably the saddest part of all. If I were the average user who didn’t have a clue of how Google’s internal culture works and of what great people Googlers are, it would be easy to just mumble something like, “What do you expect? All those big companies are the same, they just don’t care.”

But that isn’t the Google I know, and so it’s even more frustrating to me to see these unnecessary problems persist and fester in the Google ecosystem, when I know with certainty that Google has the capability and resources to do so much better in these areas.

And that’s the truth.

–Lauren–

Here’s Where Google Hid the SSL Certificate Information That You May Need

UPDATE (December 2, 2017): Easy Access to SSL Certificate Information Is Returning to Google’s Chrome Browser

– – –

Google has a great security team, so it’s something of a head-scratcher when they misfire. Or should we be wondering if the Chrome user interface crew had enough coffee lately?

Either way, Google Chrome users have been contacting me wondering why they could no longer access the detailed status of Chrome https: connections, or view the organization and other data associated with SSL certificates for those connections.

Until now, in the stable version of Chrome, you simply clicked the little green padlock icon on an https: connection, clicked the “Details” link that appeared, and a panel opened giving you that status, along with an obvious button to click for viewing the actual certificate data such as Organization, issuance and expiration dates, etc.

Suddenly, that “Details” link is no longer present. Seemingly, Google just doesn’t feel that “ordinary” users need to look at that data these days.

I beg to differ. I’ve frequently trained “ordinary” users to check that information when they question the authenticity of an https: connection — after all, crooks can get SSL certificates too, so verifying the certificate issuance-related data often makes sense.

Well, it turns out that you can still get this information from Chrome, but apparently Google now assumes that folks are so clairvoyant that they can figure out how to do this through the process of osmosis — or something.

The full certificate data is available from the “Developer tools” panel under the “Security” label. In fact, that’s where this info has been for quite some time, but since the now missing “Details” link took you directly to that panel, most users probably didn’t even realize that they were deep in the Developer tools section of the browser.

To get the certificate data now, here’s what you need to do. 

First, get into Developer tools. You can do this via Chrome’s upper-right three vertical dots menu, then click “More tools” — then “Developer tools” — or on many systems you can just press the F12 key.

But wait, there’s still more (yeah, Google took a simple click in an intuitive place and replaced it with a bunch of clicks scattered around).

Once the panel opens, look up at its top. If you don’t see the word “Security” already, click on the “>>” to the right of “Console” — then look down the list that appears and click on “Security” — which will open the Security panel with all of the certificate-related goodies. When you’re done there, click the big “X” in the upper right of the panel to return to normal browser operations.
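
Incidentally, for readers who would rather not dig through Developer tools at all, here is a rough little sketch (my own illustration using Python’s standard library, not anything built into Chrome) that pulls the same basic certificate details, namely the subject, issuer, and validity dates, directly from a site at the command line:

    import socket
    import ssl

    def show_certificate(hostname, port=443):
        # Open a validated TLS connection, then read back the server
        # certificate that was presented during the handshake.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # Print the fields a cautious user would want to check: who the
        # certificate was issued to, who issued it, and its validity dates.
        print("Subject:    ", dict(item[0] for item in cert["subject"]))
        print("Issuer:     ", dict(item[0] for item in cert["issuer"]))
        print("Valid from: ", cert["notBefore"])
        print("Valid until:", cert["notAfter"])

    show_certificate("www.google.com")

It’s no substitute for checking what your browser itself actually negotiated for a given page, of course, but it covers the basics that most cautious users care about.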

And don’t feel too bad if you didn’t figure all of this out for yourself. Even Houdini might have had problems with this one.

–Lauren–

The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes

I hate writing blog posts like this. I really do. I’m a big fan of Google. They’ve got many of the most skilled and caring employees in tech. Unfortunately, they’re not immune to being caught up in abysmal industry trends, so I’m forced to write another “Here we go again …” piece. Sigh.

I’ve been using Google Voice since pretty much the day it launched. Over the years since then I’ve come to depend upon it for both my personal and business phone calls, inbound and outbound. Google Voice has been extremely functional, utterly reliable, and a godsend for people like me who must deal with complex mixes of cellular and landline phones and lots of inbound spam calls, and who need this level of call management to help free up the time necessary for making inflammatory Google+ posts. That Google Voice is free for all domestic calls is a bonus, but I’d willingly pay reasonable fees to use it.

The Google Voice (henceforth “GV”) desktop/web interface has been very stable for something like five years now. In one sense that’s a good thing. It works well, it accomplishes its purpose. Excellent.

On the other hand, if you know Google, you know that when one of their products doesn’t seem to be updated much, it might be time to start being afraid. Very afraid. Because Google products that seem “too” stable may be on the path to decimation and death.

Let’s face it, an ongoing problem in the Internet world is that skilled software engineers by and large aren’t enthusiastic about maintaining what are seen to be “old” products. It’s not considered conducive to climbing the promotion ladder at most firms — the “sexy” new stuff is where the bigger bucks are perceived to reside.

So as desktop GV continued along its stable path, many observers began to wonder if Google was preparing to pull its plug. I’ve had those concerns too, though somewhat mitigated by the fact that Google has been integrating aspects of GV into some of their other newer products, which suggested that GV still had significant life ahead.

This was confirmed recently when word started to circulate of a new version (“refresh” is another term used for this) of GV that was soon to roll out to users. Google eventually confirmed this. Indeed, it’s rolling out right now.

And for desktop users at least, it’s a nightmare. A nightmare that in fact I was expecting. I had hoped I’d be wrong. Unfortunately, I was correct.

I probably don’t even really need to describe the details, because you’ve likely seen this happen to other Google products of late (including recently Google Wallet, though the impact of the GV changes is orders of magnitude worse for users who need to interact with GV frequently throughout the day).

Once again, Google is on the march to treat large desktop displays as if they were small smartphone screens.

Legacy GV made excellent use of screen space — making it easy to see all call details, full voicemail transcriptions, and everything else you needed — all in clear and easy to read fonts.

The new GV is another wasted-space, low-contrast slap in the face of desktop users, especially those with less than perfect vision (whether due to naturally aging eyes or any other reason).

Massive amounts of unused white space. Call histories squished into a little smartphone-style column (with no way to increase its size that I could find so far), causing visible voicemail transcriptions to be truncated to just a few words. Plus we’re “treated” to the new Google standard low-contrast “if you don’t have perfect vision we don’t care about you” fonts, which disrupt the entire user interface when you try to zoom them up.

And so on. Need I say more? You already know the drill.

There is one saving grace in the new desktop GV. For the moment, there’s a link that takes you back to legacy GV. In fact, after reverting one of my accounts that way, I didn’t even see an obvious way to get back to the new GV interface. In any case, we can safely assume that the legacy access is only temporary.

Compared to legacy desktop GV that worked great, the new GV is another painful sign that Google just doesn’t care about users who don’t live 100% of the time on smartphones and/or have perfect vision. Yet this maligned demographic is rapidly growing in size.

It’s increasingly difficult to not consider the end results of these changes in Google products to be a form of discrimination. I don’t believe that they’re actually intended as discrimination — but the outcomes are the same irrespective of motives. And frankly, my view is that in the long run this is a very dangerous and potentially self-destructive path for Google to be taking.

Nobody would demand that innovation and product improvements must stop. But we are long past the point where we should have realized that “one size fits all” user interfaces are simply no longer tenable in these environments, unless you’re willing to write off large numbers of users who may not be in your primary target demographic, but who still represent many millions of human beings who depend upon you.

Ignoring the needs of these users is not right. It’s not fair. It’s not ethical.

It’s just not Googley. Or at least, it shouldn’t be.

–Lauren–

User Trust Fail: Google Chrome and the Tech Support Scams

I act as the volunteer support structure for a significant number of nontechnical — but quite active — Internet users. Some of these users are quite elderly, which makes me particularly sensitive to where Internet firms are falling down on the job in this context.

Let’s face it, these firms may pay lip service to accessibility and to serving all segments of their users, but in reality they typically care very little about users who aren’t in their key sales demographics, and who (while often numbering in the millions or more) aren’t considered to be their “primary” users of interest.

We see this problem across a number of aspects (I’ve in the past frequently noted the problems of illegible fonts and poor user interface designs, as my regular readers well know).

But today I’d like to focus on just one, where Google really needs to more aggressively protect their users from some of the most dangerous criminals on the Internet.

I’m referring to the ubiquitous “tech support” scams (often based in India) that terrify users by appearing in their browsers — often the result of a contaminated site link or a mistyped URL — or via a “cold” phone call, falsely claiming that the user’s computer is infected with malware or somehow broken, that you must click HERE for a fix, or that you must immediately call THIS 800 number, and BLAH BLAH BLAH.

The vast majority of these follow a common pattern, usually claiming to be a legitimate tech support firm, or often Microsoft itself.

Once users are pushed into contacting the scammers — who typically focus on Windows computers — the usual pattern is for them to walk the unsuspecting user through the installation of a remote access program, so that the scammer has free rein to suck the user’s credit card and bank accounts dry via a variety of crooked procedures. Their methods are typically tuned especially well to take advantage of elderly, nontechnical users.

It’s not Google’s fault that these criminals exist. However, given Google’s excellent record at detection and blocking of malware, it is beyond puzzling why Google’s Chrome browser is so ineffective at blocking or even warning about these horrific tech support scams when they hit a user’s browser.

These scam pages should not require massive AI power for Google to target.
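
To be clear about what I mean by that, here is a deliberately crude, purely illustrative sketch (mine alone, not anything Google actually uses) showing how even a handful of keyword patterns would flag the boilerplate wording that nearly all of these scam pages share:

    import re

    # A few of the stock phrases and fake "call now" numbers these pages rely on.
    SCAM_PATTERNS = [
        r"your computer (has been|is) (infected|blocked|locked)",
        r"call (toll[- ]free|microsoft|windows) (support|helpline)",
        r"do not (close|restart|shut down) (this page|your computer)",
        r"\b1[-. ]?8(00|44|55|66|77|88)[-. ]?\d{3}[-. ]?\d{4}\b",  # 800-style numbers
    ]

    def scam_score(page_text):
        # Count how many scam-typical patterns appear in the page's visible text.
        text = page_text.lower()
        return sum(bool(re.search(p, text)) for p in SCAM_PATTERNS)

    sample = ("WARNING: Your computer has been infected! "
              "Do not close this page. Call toll-free support at 1-800-555-0199.")
    print(scam_score(sample))  # prints 4, well above any sensible alert threshold

Obviously a real detector would need far more care to avoid false positives, but the point stands: these pages are highly formulaic and eminently detectable.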

And critically, it’s difficult to understand why Chrome still permits most of these crooked pages to completely lock up the user’s browser — often making it impossible for the user to close the related tab or browser through most ordinary means that most users could reasonably be expected to know about.

The simplest cure to offer in these situations (especially when you’re trying to help someone on the other side of the country over the phone) is to tell them to reboot (if the user isn’t already so flustered that they’re having trouble doing that) or to power cycle the computer completely (with the non-zero risk of disk issues that can result from sudden shutdowns). 

Even after that, users need to know that they must refuse Chrome’s “helpful” offer of restoring the old tabs after the reset — otherwise they can easily find themselves locked into the offending page yet again!

Chrome is now the world’s most popular browser, and Google’s Chrome team is top-notch. I am confident that they could relatively quickly solve these problems, if they deemed it a priority to do so.

For the sake of helping to protect their users from support scams — even though these users are often in demographic categories that Google doesn’t seem to really care that much about — I urge Google to take immediate steps to make it much more difficult for the tech support criminals to leverage the excellent Chrome browser for evil purposes.

–Lauren–

The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

IETF’s Stunning Announcement: Emergency Transition to IPv7 Is Necessary!

Frostbite Falls, Minn. (NOTAP) In a brief announcement today that stunned Internet users around the world, the Internet Engineering Task Force proclaimed the need for an “emergency” transition to a yet to be designed “IP version 7” protocol, capable of dealing with numeric values up to “a full gazillion at a minimum.”

IETF spokesman David Seville explained why this drastic move was considered necessary when the ongoing transition from IPv4 to IPv6 — the latter with a vast numbering capability — is still far from complete.

“Frankly, we’re just trying to get ahead of the curve, for once in the technology field,” said Mr. Seville. “With the dramatic rise in the number of hate speech and fake news sites around the world — not only originating in the Soviet Uni … I mean, Russia — we can’t risk running out of numbering resources ever again! Everyone deserves to be able to get these numbers, no matter how vile, racist, and sociopathic they may be. We’re already getting complaints regarding software systems that have overflowed available variable ranges simply trying to keep track of Donald Trump’s lies.”

Asked how the IETF planned to finance their outreach regarding this effort, Seville suggested that they were considering buying major ad network impressions on racist fake news sites like Breitbart, where “the most gullible Internet users tend to hang out. If anyone will believe the nonsense we’re peddling, they will!”

In answer to a question regarding the timing of this proposed transition, Seville noted that the IETF planned to follow the GOP’s healthcare leadership style. “We feel that IPv4 and IPv6 should be immediately repealed, and then we can come up with the IPv7 replacement later.” When asked if this might be disruptive to the communications of Internet users around the world, Mr. Seville chuckled “You’re catching on.”

David Seville can be reached directly for more information at his voice phone number: +7 (495) 697-0349.

– – –

–Lauren–

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

My Mock-Up for Labeling Fake News on Google Search

Here is my mock-up of one way to label fake news on Google Search Results Pages, in the style of the Google malware site warnings. The warning label link would go to a help page explaining the methodology of the labeling.

[Image: mock-up of the proposed fake news warning label on a Google Search results page]

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!