Google’s Achilles’ Heel

A day rarely passes without somebody sending me a note asking about some Google-related issue. These are usually very specific cases — people requesting help with some particular Google product, or often asking about account-related issues. Sometimes I can offer advice or other assistance, sometimes I can’t. Occasionally in the process I get pulled into deeper philosophical discussions regarding Google.

That’s what happened a few days ago when I was asked the straightforward question: “What is Google’s biggest problem?”

My correspondent apparently was expecting me to reply with a comment about some class of technical issues, or perhaps something about a security or privacy matter. So he was quite surprised when I immediately suggested that Google’s biggest problem has nothing per se to do with any of those areas at all.

Google’s technology is superb. Their privacy and security regimes are first-rate and world class. The teams that keep all those systems going are excellent, and I’ve never met a Googler that I didn’t like (well … hardly ever). It’s widely known that I take issue with various aspects of Google’s user support structure and user interface designs, but these are subject to improvement in relatively straightforward ways.

No, Google’s biggest problem isn’t in any of these areas.

Ironically, while Google has grown and improved in so many ways since its founding some 18 years ago, the big problem today remains essentially the same as it was at the beginning.

To use the vernacular, Google’s public relations — their external communications — can seriously suck.

That is not to suggest that the individuals working Google PR aren’t great people. The problem with Google PR is — in my opinion — a structural, cultural dilemma, of the sort that can be extremely difficult for any firm to significantly alter.

This is a dangerous state of affairs, both for Google and its users. Effective external communications ultimately impact virtually every aspect of how individuals, politicians, and governments view Google services and Google itself more broadly. In an increasingly toxic political environment around the world, Google’s institutional tendency toward minimal communications in so many contexts creates an ideal growth medium for Google adversaries and haters to fill the perceived information vacuum with conspiracy theories and false propaganda.

For example, I recently posted “Quick Tutorial: Deleting Your Data Using Google’s ‘My Activity’” — which ended up appearing in a variety of high-readership venues. Immediately I started seeing comments and receiving emails questioning how I could possibly know that Google was telling the truth about data actually being deleted, in many cases accompanied by a long tirade of imagined grievances against Google. “How can you trust Google?” they ask.

As it happens, I do trust Google, and thanks to my period of consulting for them several years ago, I know how these procedures actually operate and I know that Google is being accurate and truthful. But beyond that general statement, all I can say is “Trust me on this!”

And therein lies the heart of the dilemma. Only Google can speak for Google, and Google’s public preference for generalities and vagueness on many policy and technical matters is all too often much deeper than necessary prudence and concerns about “Streisand Effect” blowbacks would reasonably dictate.

Google’s external communications problem is indeed their “Achilles’ Heel” — a crucial quandary that, if left unchanged, will increasingly create the opportunity for damage to Google and its users, particularly at this time when misinformation, government censorship, and other political firestorms are burning widening paths around the globe.

Institutionally entrenched communications patterns cannot reasonably be changed overnight, and a great deal of business information is both fully appropriate and necessary to keep confidential.

But in the case of Google, even a bit more transparency in external communications could do wonders, by permitting the outside world to better understand and appreciate the hard work and diligence that makes Google so worthy of trust — and by leaving the Google haters and their lying propaganda in the dust.

–Lauren–

The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

A little over a month ago, in How YouTube’s User Interface Helps Perpetuate Hate Speech, I discussed the highly problematic aspects of Google’s YouTube “hiding” the increasingly important “report a video” option (“Report”) under the main desktop UI (user interface) “More” menu.

I have just noticed that the Report option is now back on the main YouTube watch page top-level interface, where I had recommended it should appropriately be. However, it’s not back in that key position in all situations.

In fact, as far as I can determine right now, it is — at least across the universe of my various YouTube accounts — only present on the one account that is currently enabled for the YouTube Red subscription service (a great value that I do highly recommend).

At the moment, on all of my other accounts — and when not logged in at all (e.g., via an Incognito tab on Chrome) — the Report option is still hidden under the More menu.

Since Google is known to run various user interface experiments that may not impact all users, and since I don’t have a definitive history of the UI for my various YouTube accounts (and I tend to frequently switch between accounts for various situations), pinning down exactly what’s happening here — and when it happened — is not trivial with my existing data.

So I’m asking for your input. If you have a YouTube Red account and do NOT see the Report option on the top-level main YouTube watch page interface (see image below) when using that account, please let me know. Similarly, if you have a NON-Red account — that is, an ordinary non-subscription YouTube account (or you’re not logged in) — and DO see the Report option on the top-level watch page interface, I’d appreciate knowing about that also. 

In either of these cases, please email a note to yt@vortex.com telling me which of these two situations applies to you, and from what country you are accessing YouTube. Your responses will be confidential.

If Google is moving toward placing the Report option more generally on the top-level interface — especially if this will also ultimately apply to non-YouTube Red subscribers — that would be very welcome news indeed.

Thanks for your help with this.

–Lauren–

YouTube’s Dangerous and Sickening Cesspool of “Prank” and “Dare” Videos

Before we delve into a particularly sordid layer of YouTube and its implications for individuals, society at large, and Google itself, I’ll make my standard confession. Overall, I’m an enormous fan of YouTube. I consider it to be one of the wonders of the 21st century, a seemingly limitless wellspring of entertainment, education, nostalgia, and all manner of other positive traits that I would massively miss if YouTube were to vanish from the face of the Earth. I know quite a few of the folks who keep YouTube running at Google, and they’re all great people.

That said, we’re increasingly finding ourselves faced with the uncomfortable reality that Google has seemingly dragged its collective feet when it comes to making sure that their own YouTube Terms of Service are equitably and appropriately enforced.

I’ve talked about an array of aspects relating to this problem over the years — including Content ID and copyright issues; YouTube channel suspensions, closures, and appeal procedures; and a long additional list that I won’t get into here again right now, other than to note that at Google/YouTube scale, none of this stuff is trivial to deal with properly, to say the least.

Recently the spotlight has been on YouTube’s hate speech problems, which I’ve discussed in What Google Needs to Do About YouTube Hate Speech and in a variety of other posts. This issue in particular has been in the news relating to the 2016 election, and due to a boycott of YouTube by advertisers concerned about their ads appearing alongside vile hate speech videos that (by any reasonable interpretation of the YouTube Terms of Service) shouldn’t be permitted on the platform in the first place.

But now I’m going to lift up another damp rock at YouTube and shine some light underneath — and it’s not pretty under there, either.

The issue in focus today is YouTube’s vast cornucopia of so-called “prank,” “dare,” and “challenge” (PDC) videos, which range from the completely innocuous and in good fun, to an enormous array of videos portraying vile, dangerous, harmful, and often illegal activities.

You may never have experienced this particular YouTube subculture. YouTube’s generally excellent recommendation engine tends to display new videos that are similar to the videos that you’ve already viewed, so unless you’ve looked for them, you could be completely forgiven for not realizing that the entire PDC YouTube world even existed. But once you find them, YouTube will make sure that you’re offered a bountiful supply of new ones on a continuing basis.

This category of YouTube videos was flung into the mainstream news over the last few days, with a pair of egregious (but by no means isolated) examples.

In one case, a couple lost custody of young children due to an extensive series of horrific, abusive “prank” videos targeting those children — videos that they had been publishing on YouTube over a long period. They’re now arguing that the abuse was “faked” — that the children agreed to do the videos, and so on.

But those claims don’t change the outcome of the equation — not in the least. First, young children can’t give meaningful, independent consent in such situations.

And here’s a key point that applies across the entire continuum of these YouTube videos — it usually doesn’t matter whether an abusive prank is faked or not. The negative impact on viewers is the same either way. Even if there is a claim that a vile “prank” was faked, how are viewers to independently judge the veracity of such a statement in many cases?

An obvious example category includes the YouTube “shock collar” prank/challenge videos. What, you didn’t know about those? Just do a YouTube search for:

shock collar

and be amazed. These are at the relative low end of the spectrum — you’re not terribly likely to be seriously injured by a shock collar, but there are indeed some nightmarish exceptions to that generalization.

So in this specific category you’ll find every imaginable combination of people “pranking” each other, challenging each other, and otherwise behaving like stupid morons with electricity in contact with their bodies.

Are all of these videos legit? Who the hell knows? I’d wager that some are faked but that most are real — but again, as I noted above, whether or not such videos are faked isn’t the real issue. Potential copycats trying to outdo them won’t know or care.

Even if we consider the shock collar videos to be on the lower end of the relative scale under discussion, it quickly becomes obvious why such videos escalate into truly horrendous activities. Many of these YouTube channel operators openly compete with each other (or at least, claim to be competing — they could be splitting their combined monetization revenue between themselves for all we can tell from the outside) in an ever-accelerating race to the bottom, with ever more vile and dangerous stunts.

While one can argue that we’re often just looking at stupid people voluntarily doing stupid things to each other, many of these videos still clearly violate Google’s Terms of Service, and it appears, anecdotally at least, that the larger your subscriber count, the less likely it is that your videos will be subjected to a rigorous interpretation of those terms.

And then we have another example that’s currently in the news — the YouTube channel operator who thought it would be a funny “prank” to remove stop signs from intersections, and then record the cars speeding through. Not much more needs to be said about this, other than the fact that he was ultimately arrested and charged with a felony. Now he’s using his YouTube channel to try to drum up funds for his lawyers.

One might consider the possibility that, since he was arrested, that video might serve as an example of what others shouldn’t do. But a survey of “arrested at the end of doing something illegal” videos and their aftermaths suggests that the opposite result usually occurs — other YouTube channel operators are instead inspired to try to replicate (or, better yet from their standpoint, exceed) those illegal acts — without getting caught (“Ha ha! You got arrested, but we didn’t!”).

As in the case of YouTube hate speech, the key here is for Google to seriously and equitably apply their own Terms of Service, admittedly a tough (but doable!) job at the massive scale that Google and YouTube operate.

Failing to act proactively and effectively in this area is a risk too terrible to take. Non-USA governments are already moving to impose potentially draconian restrictions and penalties relating to YouTube videos. Even inside the USA, government crackdowns are possible since First Amendment protections are not absolute, especially if the existing Terms of Service are seen to be largely paper tigers.

These problems are by no means limited to YouTube/Google. But they’ve been festering below the surface at YouTube for years, and the public attention that they’re now receiving means that the status quo is no longer tenable.

Especially for the sake of the YouTube that I really do love so much, I fervently hope that Google starts addressing these matters with more urgency and effectiveness, rather than waiting for governments to begin disastrously dictating the rules.

–Lauren–