“Google Needs an Ombudsman” Posts from 2009 — Still Relevant Today


Originally posted February 27 and 28, 2009:
Google’s “Failure to Communicate” vs. User Support
Google Ombudsman (Part II)

Greetings. There’s been a lot of buzz around the Net recently about Google Gmail outages, and this has brought back to the surface a longstanding concern about the public’s ability (or lack thereof) to communicate effectively with Google itself about problems and issues with Google services.

I’ll note right here that Google usually does provide a high level of customer support for users of their paid services. And I would assert that there’s nothing wrong with Google providing differing support levels to paying customers vs. users of their many free services.

But without a doubt, far and away, the biggest Google-related issue that people bring to me is a perceived inability to effectively communicate with Google when they have problems with free Google services — which people do now depend on in many ways, of course. These problems can range from minor to quite serious, sometimes with significant ongoing impacts, and the usual complaint is that they get no response from submissions to reporting forms or e-mailed concerns.

On numerous occasions, when people bring particular Google problems to my attention, I have passed along (when I deemed it appropriate) some of these specific problems to my own contacts at Google, and they’ve always been dealt with promptly from that point forward. But this procedure can’t help everyone with such Google-related issues, of course.

I have long advocated (both privately to Google and publicly) that Google establish some sort of public Ombudsman (likely a relatively small team) devoted specifically to interfacing with the public regarding user problems — a role that requires a skillful combination of technical ability, public relations, and “triage” skills. Most large firms that interact continually with the public have teams of this sort in one form or another, often under the label “Ombudsman” (or sometimes “Office of the President”).

The unofficial response I’ve gotten from Google regarding this concept has been an expression of understanding but a definite concern about how such an effort would scale given Google’s user base.

I would never claim that doing this properly is a trivial task — far from it. But given both the horizontal and vertical scope of Google services, and the extent to which vast numbers of persons now depend on these services in their everyday personal and business lives, I would again urge Google to consider moving forward along these lines.


 – – –

Greetings. In Google’s “Failure to Communicate” vs. User Support, I renewed my long-standing call for an Ombudsman “team” or equivalent communications mechanism for Google.

Subsequent reactions suggest that some readers may not be fully familiar with the Ombudsman concept, at least in the way that I use the term.

An Ombudsman is not the same thing as “customer support” per se. I am not advocating a vast new Google customer service apparatus for users of their free services. Ombudsmen (Ombudswomen? Let’s skip the politically correct linguistics for now …) aren’t who you go to when search results are slow or you can’t log in to Gmail for two hours. Those generally purely technical issues are, the vast majority of the time, suitable for handling through the existing online reporting forms and the like. (I may have inadvertently caused some confusion on this point by introducing my previous piece with a mention of Gmail problems — but that was only meant in the sense that those problems triggered broader discussions, not as a specific example of an issue appropriate for escalation to an Ombudsman.)

But there’s a whole different class of largely non-technical (or more accurately, mixed-modality) issues where Google users appear to routinely feel frustrated and impotent to deal with what they feel are very disturbing situations.

Many of these relate to perceived defamations, demeaning falsehoods, systemic attacks, and other similar concerns that some persons feel are present in various Google service data (search results, Google Groups postings, Google-hosted blog postings, YouTube, and so on).

By the time some of these people write to me, they’re apparently in tears over the situations, wondering if they should spend their paltry savings on lawyers, and generally very distraught. Their biggest immediate complaints? They don’t know who to contact at Google, or their attempts at contact via online forms and e-mail have yielded nothing but automatic replies (if that).

And herein resides the crux of the matter. I am a very public advocate of open information, and a strong opponent of censorship. I won’t litter this posting with all of the relevant links. I have, however, expressed concerns about the tendency of false information to reside forever in search results without mechanisms for counterbalancing arguments to be seen. In 2007 I discussed this in Search Engine Dispute Notifications: Request For Comments and subsequent postings. This is an exceedingly complex topic, with no simple solutions.

In general, my experience has been that most of the concerns people bring forth in this regard — when all aspects of the situation are considered fairly — are not suitable for the kinds of relief that the persons involved are seeking. That is, the level of harm claimed often seems insufficient when weighed against free speech and the associated rights of other parties.

However, there are dramatic, and not terribly infrequent, exceptions that appear significantly egregious and serious. And when these folks can’t get a substantive reply from Google (and can’t afford a lawyer to go after the parties who actually posted or otherwise control the information that Google is indexing or hosting), these aggrieved persons tend to be up you-know-what creek.

If you have a DMCA concern, Google will normally react to it promptly. But when the DMCA is not involved, trying to get a real response from Google about the sorts of potentially serious concerns discussed above — unless you have contacts that most people don’t have — can often seem impossible.

Google generally takes the position — a position that I basically support — that since they don’t create most content, the responsibility for the content is with the actual creator, the hosting Web sites, and so on. But Google makes their living by providing global access to those materials, and cannot be reasonably viewed as being wholly separated from associated impacts and concerns.

At the very least, even if requests for deletions, alterations, or other relief are unconvincing or rejected for any number of quite valid reasons, the persons who bring forth these concerns should not be effectively ignored. They deserve to at least get a substantive response, some sort of hearing, more than a form-letter automated reply about why their particular plea is being rejected. This principle remains true irrespective of the ultimate merits or disposition of the particular case.

And this is where the role of a Google Ombudsman could be so important — not only in terms of appropriately responding to these sorts of cases, but also to help head off the possibility of blowback via draconian regulatory or legislative actions that might cut deeply into Google’s (and their competitors’) business models — a nightmare scenario that I for one don’t want to see occur.

But I do fear that unless Google moves assertively toward providing better communications channels with their users for significant issues — beyond form responses and postings in the official Google blogs — there are forces that would just love to see Google seriously damaged, and they will find ways to leverage these sorts of issues toward that end. Evidence of this sort of positioning by some well-heeled Google haters is already visible.

Ombudsmen are all about communication. For any large firm that is constantly dealing with the public, especially one operating at the scope of Google, it’s almost impossible to have too much communication when it comes to important problems and related issues. On the other hand, too little communication, or the sense that concerned persons are being ignored, can be a penny-wise but pound-foolish course, with negative consequences that could have been avoided — even if not easily — with a degree of serious effort.


The YouTube Racists Fight Back!


Somewhat earlier today I received one of those “Hey Lauren, you gotta look at this on YouTube!” emails. Prior to my recent post What Google Needs to Do About Hate Speech, such a message was as likely to point at a particularly cute cat video or a lost episode of some 60s television series as anything else. Since that posting, however, these alerts are far more likely to direct me toward much more controversial materials.

Such was the case today. Because the YouTube racists, antisemites, and their various assorted lowlife minions are at war. They’re at war with YouTube, they’re at war with the Wall Street Journal. They’re ranting and raving and chalking up view counts on their YouTube live streams and uploads today that ordinary YouTube users would be thankful to accumulate over a number of years.

After spending some time this afternoon lifting up rotting logs to peer at the maggots infesting the seamy side of YouTube where these folks reside, here’s what’s apparently going on, as best as I can understand it right now.

The sordid gang of misfits and losers who create and support the worst of YouTube content — everybody from vile PewDiePie supporters to hardcore Nazis — are angry. They’re angry that anyone would dare to threaten the YouTube monetization streams that help support their continuing rivers of hate speech. Any moves by Google or outside entities that appear to disrupt their income stream, they characterize as efforts to “destroy the YouTube platform.”

Today’s ongoing tirade appears to have been triggered by claims that the Wall Street Journal “faked” the juxtaposition of specific major brand ads with racist videos, as part of the ongoing controversies regarding YouTube advertiser controls. It seems that the creators of these videos are claiming that the videos in question were not being monetized during the period under discussion, or otherwise couldn’t have appeared in the manner claimed by the WSJ.

This gets into a maze of twisty little passages very quickly, because when you start digging down into these ranting videos today, you quickly see how they are intertwined with gamer subcultures, right-wing “fake news” claims, pro-Trump propagandists, and other dark cults — as if the outright racism and antisemitism weren’t enough.

And this is where the true irony breaks through like a flashing neon sign. These sickos aren’t at all apologetic for their hate speech videos on YouTube, they’re simply upset when Google isn’t helping to fund them.

I’ve been very clear about this. I strongly feel that these videos should not be on YouTube at all, whether monetized or not.

For example, one of the videos being discussed today in this context involves the song “Alabama Nig—.” If you fill in the dashes and search for the result on YouTube, you’ll get many thousands of hits, all of them racist, none of which should be on YouTube in the first place.

Which all suggests that the arguments about major company ads on YouTube hate speech videos, and more broadly the issues of YouTube hate speech monetization, are indeed really just digging around the edges of the problem.

Hate speech has no place on YouTube. Period. Google’s Terms of Service for YouTube explicitly forbid racial, religious, and other forms of this garbage.

The sooner that Google seriously enforces their own YouTube terms, the sooner that we can start cleaning out this hateful rot. We’ve permitted this disease to grow for years on the Internet thanks to our “anything goes” attitude, contributing to a horrific rise in hate throughout our country, reaching all the way to the current occupant of the Oval Office and his cronies.

This must be the beginning of the end for hate speech on YouTube.


My Brief Radio Discussion of the GOP’s Horrendous Internet Privacy Invasion Law


An important issue that I’ve frequently discussed here and in other venues is the manner in which Internet and other media “filter bubbles” tend to cause us to only expose ourselves to information that we already agree with — whether it’s accurate or not.

That’s one reason why I value my continuing frequent invitations to discuss technology and tech policy topics on the extremely popular late night “Coast to Coast AM” national radio show. Talk radio audiences tend to be very conservative, and the willingness of the show to repeatedly share their air with someone like me (who doesn’t fit the typical talk show mold and who can offer a contrasting point of view) is both notable and praiseworthy.

George Noory is in my opinion the best host on radio — he makes every interview a pleasure for his guests. And while the show has been known primarily over the years for discussions of — shall we say — “speculative” topics, it also has become an important venue for serious scientists and technologists to discuss issues of importance and interest (see: Coast to Coast AM Is No Wack Job).

Near the top of the show last night I chatted with George for a few minutes about the horribly privacy-invasive new GOP legislation that permits ISPs to sell customers’ private information (including web browsing history and much more) without prior consent. This morning I’ve been receiving requests for copies of that interview, so (with the permission of the show for posting short excerpts) it’s provided below.

Here is an audio clip of the interview for download. It’s under four minutes long.

As I told George, I’m angry about this incredibly privacy-invasive legislation. If you are too, I urge you to inform the GOP politicos who pushed this nightmare law — to borrow a phrase from the 1976 film “Network” — that you’re mad as hell and you’re not going to take this anymore!


Google+ and the Notifications Meltdown


I’ve been getting emails recently from correspondents complaining that I have not responded to their comments/postings on Google+. I’ve just figured out why.

The new (Google unified) Google+ desktop notification panel is losing G+ notifications left and right. For a while I thought that all of the extra notifications I was seeing when I occasionally checked on mobile were dupes — but it turns out that most of them are notifications that were never presented to me on desktop, in vast numbers.

Right now I can find (on the essentially unusable G+ desktop standalone notifications page, which requires manually clicking to a new page for each post!) about 30 recent G+ notifications that were never presented to me in the desktop notification panel. I’m not even sure how to deal with them now in a practical manner.

This is unacceptable — you have one job to do, notifications panel, and that’s to accurately show me my damned notifications!

Also, a high percentage of the time when I click on actions in the new desktop notification panel pop-up boxes (e.g. to reply), the panel blows away and I’m thrown to a new G+ page tab.

Does anyone at G bother to stress test this stuff anymore in the context of users with many followers (I have nearly 400K) who get lots of notifications? Apparently not.

Another new Google user interface triumph of form over function!


How YouTube’s User Interface Helps Perpetuate Hate Speech


UPDATE (6 May 2017): The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

Computer User Interface (UI) design is both an art and a science, and can have effects on users that go far beyond the interfaces themselves. As I’ve discussed previously, e.g. in The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes, user interfaces can unintentionally act as a form of discrimination against older users or other users with special needs.

But another user interface question arises in conjunction with the current debate about hate speech on Google’s YouTube (for background, please see What Google Needs to Do About YouTube Hate Speech and How Google’s YouTube Spreads Hate).

Specifically, can user interface design unintentionally help to spread and perpetuate hate speech? The answer may be an extremely disconcerting affirmative.

A key reason why I suspect that this is indeed the case is the large number of YouTube users who have told me that they didn’t even realize that they had the ability to report hate speech to YouTube/Google. And when I’ve suggested that they do so, they often reply that they don’t see any obvious way to make such a report.

Over the years it’s become more and more popular to “hide” various UI elements in menus and/or behind increasingly obscure symbols and icons. And one key problem with this approach is obvious when you think about it: If a user doesn’t even know that an option exists, can we really expect them to play “UI scavenger hunt” in an attempt to find such an option? Even more to the point, what if it’s an option that you really need to see in order to even realize that the possibility exists — for example, of reporting a YouTube hate speech video or channel?

While YouTube suffers from this problem today, that wasn’t always the case. Here’s an old YouTube “watch page” desktop UI from years ago:

An Old YouTube User Interface

Not only is there a flag icon present on the main interface (rather than having the option buried in a “More” menu and/or under generic vertical dots or horizontal lines), but the word “Flag” is even present on the main interface to serve as a direct signal to users that flagging videos is indeed an available option!

On the current YouTube desktop UI, you have to know to go digging under a “More” menu to find a similar “Report” option. And if you didn’t know that a Report option even existed, why would you necessarily go searching around for it in the first place? The only other YouTube page location where a user might consider reporting a hate speech video is through the small generic “Feedback” link at the very bottom of the watch page — and that can be way, way down there if the video has a lot of comments.

To be effective against hate speech, a flagging/reporting option needs to be present in an obvious location on the main UI, where users will see it and know that it exists. If it’s buried or hidden in any manner, vast numbers of users won’t even realize that they have the power to report hate speech videos to Google at all. (I’ve discussed the disappointing degree to which Google actually enforces the hate speech prohibitions in their Terms of Service in the posts linked earlier in this text.)

You don’t need to be a UI expert to suspect one reason why Google over time has de-emphasized obvious flag/report links on the main interface, instead relegating them to a generic “More” menu. The easier the option is to see, the more people will tend to use it, both appropriately and inappropriately — and really dealing with those abuse reports in a serious manner can be expensive in terms of code and employees.

But that’s no longer an acceptable excuse — if it ever was. Major advertisers are leaving Google in droves, no longer willing to have their ads appear next to hate speech videos that shouldn’t even be monetized, and in many cases shouldn’t be available on YouTube at all under the existing YouTube/Google Terms of Service.

For the sake of its users and of the company itself, Google must get a handle on this situation as quickly as possible. Making sure that users are actually encouraged to report hate speech and other inappropriate videos, and that Google treats those reports appropriately and with a no-nonsense application of their own Terms of Service, are absolutely paramount.