Netflix Blocking, Google, Android, and Donald Trump

Netflix has now confirmed that they have begun blocking Android phones that are rooted, or that even merely have unlocked bootloaders, from downloading the Netflix app from the Google Play Store. While the app can still be sideloaded and still runs, we can reasonably assume that this reprieve is only temporary.

Let’s be crystal clear about what’s happening here. Google is moving their Android security framework in directions that will encourage popular app creators to broadly refuse installation on rooted/bootloader-unlocked phones.

This will inevitably put all users at greater risk by making it impossible in a practical sense for most concerned users to modify their phones for protection against malware, spyware, and government intrusions.

Despite the valiant efforts of Google toward making the Android environment a safe one, we are living in a time where a sociopathic fascist controls the federal government. We cannot tolerate total control of our phones being in the hands of any individual firms, even benign ones like Google.

I’ll have more to say about this. Much more.

–Lauren–

WARNING: Antivirus sites may be helping to SPREAD the current global malware ransomware (WannaCry) attack!

It has been reported that a researcher discovered that the spread of the current worldwide ransomware attack could be halted after registering the domain:

iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com

and built a sinkhole website that the malware could check. Reportedly the malware does not continue spreading if it can reach this site. HOWEVER, various antivirus websites/services are now reportedly adding that domain to their “bad domain” lists! If sites infected with this malware are unable to reach that domain due to their firewalls incorporating rules from antivirus sites that include a block for that domain, the malware will likely continue spreading across their vulnerable computers (which must also still be patched to avoid infection by similar exploits). According to the current reports that I’m receiving, your systems MUST be able to access the domain above if this malware “kill switch” is to be effective!
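To make the point concrete, the firewall logic that sites need here is an explicit local exception that overrides anything imported from a third-party blocklist. Here’s a minimal sketch of that decision logic in Python — the function and rule names are purely illustrative, not any real firewall’s API:

```python
# Sketch: a firewall rule evaluator where an explicit local allowlist
# entry always overrides entries imported from "bad domain" feeds.
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def should_block(domain, blocklist, allowlist):
    """Return True if outbound traffic to 'domain' should be blocked.

    Allowlist entries take precedence, so a domain that appears on an
    imported blocklist can still be reached if explicitly allowed.
    """
    domain = domain.lower().rstrip(".")
    if domain in allowlist:
        return False
    return domain in blocklist

# An imported antivirus feed that (mistakenly, in this case) lists the
# WannaCry kill-switch domain as "bad":
imported_blocklist = {KILL_SWITCH, "evil.example.com"}

# The local exception that keeps the kill switch reachable:
local_allowlist = {KILL_SWITCH}

print(should_block("evil.example.com", imported_blocklist, local_allowlist))  # True
print(should_block(KILL_SWITCH, imported_blocklist, local_allowlist))         # False
```

The specifics of any real firewall differ, of course; the point is simply that imported blocklists need a local override for this one domain.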

–Lauren–

Announcing the “Google Issues” Mailing List

UPDATE (12 May 2017): Readers have been asking me about this new list’s scope. To be clear, it is not an “announcement-only” list. Reader participation is very much encouraged, including Google-related questions. Thanks again!

– – –

Nobody can accuse me of starting too many Internet mailing lists. My existing lists (PRIVACY Forum, PFIR, and NNSquad) have been running continuously on the order of 26, 19, and 11 years respectively. Remarkably, I routinely get notes from subscribers who have been on these lists since their creation and claim to have been reading all of my associated messages — apparently without suffering any obvious brain damage to date.

Even relatively new readers will know by now that postings relating to Google have long been a very frequent component of these lists, and of my blog (which itself is around 14 years old).

The volume of Google-related postings seems likely only to increase. So with hopefully only relatively minor risk to the spacetime continuum, I have created a new mailing list to deal exclusively with all manner of Google-centric issues (and associated Alphabet, Inc. topics as well).

The subscription page (and archive information) for this new moderated mailing list is at:

https://vortex.com/google-issues

While a variety of postings specific to Google will continue to appear in my other mailing lists as well, this new list is my intended venue for additional wide-ranging discussions and other postings related to Google and Alphabet that I believe will be of ongoing interest — much of which will not appear in my other lists.

Google of course has no role in the operation of my lists or blog, and while I have consulted to them in the past I am not currently doing so — all of my opinions expressed in my lists and other venues are mine alone.

I’m looking forward to seeing you over on the Google Issues mailing list!

Thanks very much.

–Lauren–

Google’s Achilles’ Heel

A day rarely passes when somebody doesn’t send me a note asking about some Google-related issue. These are usually very specific cases — people requesting help for some particular Google product or often about account-related issues. Sometimes I can offer advice or other assistance, sometimes I can’t. Occasionally in the process I get pulled into deeper philosophical discussions regarding Google.

That’s what happened a few days ago when I was asked the straightforward question: “What is Google’s biggest problem?”

My correspondent apparently was expecting me to reply with a comment about some class of technical issues, or perhaps something about a security or privacy matter. So he was quite surprised when I immediately suggested that Google’s biggest problem has nothing per se to do with any of those areas at all.

Google’s technology is superb. Their privacy and security regimes are first-rate and world class. The teams that keep all those systems going are excellent, and I’ve never met a Googler that I didn’t like (well … hardly ever). It’s widely known that I take issue with various aspects of Google’s user support structure and user interface designs, but these are subject to improvement in relatively straightforward ways.

No, Google’s biggest problem isn’t in any of these areas.

Ironically, while Google has grown and improved in so many ways since its founding some 18 years ago, the big problem today remains essentially the same as it was at the beginning.

To use the vernacular, Google’s public relations — their external communications — can seriously suck.

That is not to suggest that the individuals working Google PR aren’t great people. The problem with Google PR is — in my opinion — a structural, cultural dilemma, of the sort that can be extremely difficult for any firm to significantly alter.

This is a dangerous state of affairs, both for Google and its users. Effective external communications ultimately impact virtually every aspect of how individuals, politicians, and governments view Google services and Google itself more broadly. In an increasingly toxic political environment around the world, Google’s institutional tendency — toward minimal communications in so many contexts — creates an ideal growth medium for Google adversaries and haters to fill the perceived information vacuum with conspiracy theories and false propaganda.

For example, I recently posted Quick Tutorial: Deleting Your Data Using Google’s “My Activity” — which ended up appearing in a variety of high readership venues. Immediately I started seeing comments and receiving emails questioning how I could possibly know that Google was telling the truth about data actually being deleted, in many cases accompanied by a long tirade of imagined grievances against Google. “How can you trust Google?” they ask.

As it happens I do trust Google, and thanks to my period of consulting to them several years ago, I know how these procedures actually operate and I know that Google is being accurate and truthful. But beyond that general statement all I can say is “Trust me on this!”

And therein lies the heart of the dilemma. Only Google can speak for Google, and Google’s public preference for generalities and vagueness on many policy and technical matters is all too often much deeper than necessary prudence and concerns about “Streisand Effect” blowbacks would reasonably dictate.

Google’s external communications problem is indeed their “Achilles’ Heel” — a crucial quandary that if left unchanged will increasingly create the opportunity for damage to Google and its users, particularly at this time when misinformation, government censorship, and other political firestorms are burning widening paths around the globe.

Institutionally entrenched communications patterns cannot reasonably be changed overnight, and a great deal of business information is both fully appropriate and necessary to keep confidential.

But in the case of Google, even a bit more transparency in external communications could do wonders, by permitting the outside world to better understand and appreciate the hard work and diligence that makes Google so worthy of trust — and by leaving the Google haters and their lying propaganda in the dust.

–Lauren–

YouTube’s Dangerous and Sickening Cesspool of “Prank” and “Dare” Videos

UPDATE (December 17, 2017): A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly

– – –

Before we delve into a particularly sordid layer of YouTube and its implications to individuals, society at large, and Google itself, I’ll make my standard confession. Overall, I’m an enormous fan of YouTube. I consider it to be one of the wonders of the 21st century, a seemingly limitless wellspring of entertainment, education, nostalgia, and all manner of other positive traits that I would massively miss if YouTube were to vanish from the face of the Earth. I know quite a few of the folks who keep YouTube running at Google, and they’re all great people.

That said, we’re increasingly finding ourselves faced with the uncomfortable reality that Google has seemingly dragged its collective feet when it comes to making sure that their own YouTube Terms of Service are equitably and appropriately enforced.

I’ve talked about an array of aspects relating to this problem over the years — including Content ID and copyright issues; YouTube channel suspensions, closures, and appeal procedures; and a long additional list that I won’t get into here again right now, other than to note that at Google/YouTube scale, none of this stuff is trivial to deal with properly, to say the least.

Recently the spotlight has been on YouTube’s hate speech problems, which I’ve discussed in What Google Needs to Do About YouTube Hate Speech and in a variety of other posts. This issue in particular has been in the news relating to the 2016 election, and due to a boycott of YouTube by advertisers concerned about their ads appearing alongside vile hate speech videos that (by any reasonable interpretation of the YouTube Terms of Service) shouldn’t be permitted on the platform in the first place.

But now I’m going to lift up another damp rock at YouTube and shine some light underneath — and it’s not pretty under there, either.

The issue in focus today is YouTube’s vast cornucopia of so-called “prank” – “dare” – “challenge” (PDC) videos, which range from completely innocuous and in good fun, to an enormous array of videos portraying vile, dangerous, harmful, and often illegal activities.

You may never have experienced this particular YouTube subculture. YouTube’s generally excellent recommendation engine tends to display new videos that are similar to the videos that you’ve already viewed, so unless you’ve looked for them, you could be completely forgiven for not even realizing that the entire PDC YouTube world even existed. But once you find them, YouTube will make sure that you’re offered a bountiful supply of new ones on a continuing basis.

This category of YouTube videos was flung into the mainstream news over the last few days, with a pair of egregious (but by no means isolated) examples.

In one case, a couple lost custody of young children due to an extensive series of horrific, abusive “prank” videos targeting those children, which the couple had been publishing on YouTube over a long period. They’re now arguing that the abuse was “faked” — that the children agreed to do the videos, and so on.

But those claims don’t change the outcome of the equation — not in the least. First, young children can’t give meaningful, independent consent in such situations.

And here’s a key point that applies across the entire continuum of these YouTube videos — it usually doesn’t matter whether an abusive prank is faked or not. The negative impact on viewers is the same either way. Even if there is a claim that a vile “prank” was faked, how are viewers to independently judge the veracity of such a statement in many cases?

An obvious example category includes the YouTube “shock collar” prank/challenge videos. What, you didn’t know about those? Just do a YouTube search for:

shock collar

and be amazed. These are at the relative low end of the spectrum — you’re not terribly likely to be seriously injured by a shock collar, but there are indeed some nightmarish exceptions to that generalization.

So in this specific category you’ll find every imaginable combination of people “pranking” each other, challenging each other, and otherwise behaving like stupid morons with electricity in contact with their bodies.

Are all of these videos legit? Who the hell knows? I’d wager that some are faked but that most are real — but again as I noted above, whether such videos are faked or not isn’t the real issue. Potential copycats trying to outdo them won’t know or care.

Even if we consider the shock collar videos to be on the lower end of the relative scale under discussion, it quickly becomes obvious why such videos escalate into truly horrendous activities. Many of these YouTube channel operators openly compete with each other (or at least, claim to be competing — they could be splitting their combined monetization revenue between themselves for all we can tell from the outside) in an ever accelerating race to the bottom, with ever more vile and dangerous stunts.

While one can argue that we’re often just looking at stupid people voluntarily doing stupid things to each other, many of these videos still clearly violate Google’s Terms of Service, and it appears, anecdotally at least, that the larger your subscriber count the less likely that your videos will be subjected to a rigorous interpretation of those terms.

And then we have another example that’s currently in the news — the YouTube channel operator who thought it would be a funny “prank” to remove stop signs from intersections, and then record the cars speeding through. Not much more needs to be said about this, other than the fact that he was ultimately arrested and charged with a felony. Now he’s using his YouTube channel to try to drum up funds for his lawyers.

One might consider the possibility that since he was arrested, that video might serve as an example of what others shouldn’t do. But a survey of “arrested at the end of doing something illegal” videos and their aftermaths suggests that the opposite result usually occurs — other YouTube channel operators are instead inspired to try to replicate (or better yet from their standpoints, exceed) those illegal acts — without getting caught (“Ha ha! You got arrested, but we didn’t!”).

As in the case of YouTube hate speech, the key here is for Google to seriously and equitably apply their own Terms of Service, admittedly a tough (but doable!) job at the massive scale that Google and YouTube operate.

Failing to act proactively and effectively in this area is a risk too terrible to take. Non-USA governments are already moving to impose potentially draconian restrictions and penalties relating to YouTube videos. Even inside the USA, government crackdowns are possible since First Amendment protections are not absolute, especially if the existing Terms of Service are seen to be largely paper tigers.

These problems are by no means isolated only to YouTube/Google. But they’ve been festering below the surface at YouTube for years, and the public attention that they’re now receiving means that the status quo is no longer tenable.

Especially for the sake of the YouTube that I really do love so much, I fervently hope that Google starts addressing these matters with more urgency and effectiveness, rather than waiting for governments to begin disastrously dictating the rules.

–Lauren–

Quick Tutorial: Deleting Your Data Using Google’s “My Activity”

UPDATE (May 1, 2019): A Major New Privacy-Positive Move by Google

– – –

Since posting The Google Page That Google Haters Don’t Want You to Know About last week, I’ve received a bunch of messages from readers asking for help using Google’s “My Activity” page to control, inspect, and/or delete their data on Google.

The My Activity portal is quite comprehensive and can be used in many different ways, but to get you started I’ll briefly outline how to use My Activity to delete activity data.

Some words of warning, however. You cannot revoke deletions once they’ve been made, and deleting your data on Google can negatively affect how well those services will perform for you going forward, since you’ll be moving back toward “generic” interactions, rather than customized ones. So do think carefully before performing broad deletions! (I know that I’d be lost without my YouTube watch history, for example …)

OK, let’s get started.

First, go to:

https://google.com/myactivity

If you’re not logged into a Google account, do so now. Once you’re logged in, you can use the standard account switcher (clicking on the picture or letter at the page upper right) to change accounts.

What you should now see is a reverse chronological list of your activity when logged into that Google account, which you can scroll down starting with Today and working backwards.

If you click on the three vertical dots (henceforth, “the dots”) at a Google service type entry (e.g. Search), you can choose to expand the detailed entries for that level for that date, or delete those entries. If you click on the dots on the Today bar itself, you can choose to delete ALL of the activity entries for Today.

The real power of the My Activity interface comes into play when you click in the upper Search/Filter “by date & product” area.

After you’ve done this, you can activate the search bar by clicking in the bar and typing something, or (my personal preference) by unchecking “All products” further down.

Now you can search for activity entries filtered by product type and/or date as you’ve specified. To avoid over-deletion, I strongly recommend not selecting many product types at the same time! (When you click on specific products, the “All products” entry will automatically be unchecked.)

Once you’ve selected at least one specific product type, the Search bar will activate and the “perform this search” magnifying glass icon at the right of the bar will turn dark blue.

You can type queries into the bar to find specific entries, or you can just click on the magnifying glass without a query to list all entries for the selected Google products. If you haven’t changed the “Filter by date” settings at the top to narrow down the dates, the search will cover the default “All time” activity list for those products.

The scrollable page that results from such queries is similar in structure to what you saw for the earlier page that started with Today, and you can interact with it to get more details or delete entries in the same ways.

But look again at the top Search bar now. If you click on the dots at the right of that white Search bar that appears after a query, you’ll see that you now have a “Delete results” option.

You wanted power over your data on Google? Well, you’ve got it. Because if you click “Delete results” it will remove EVERY activity result from that query (or from an empty query entry that lists all results for the specified products). That can mean deleting every activity result from all of the selected products, going back to the relative dawn of time.

My Activity gives you extraordinary power over what sorts of activity data will be collected for your Google accounts, and as we’ve seen the ability to delete data using a variety of parameters and searches. It’s quite a technological work of art.

But again, be careful before invoking these powers. Remember, you can’t undo My Activity deletions. Or to use an old film analogy that many of you might recognize, if you’re going to use powerful incantations like “Klaatu barada nikto” — make damned sure that you pronounce them correctly!

–Lauren–

More Regarding a Terrible Decision by the Internet Archive

Yesterday, in A Terrible Decision by the Internet Archive May Lead to Widespread Blocking, I discussed in detail why the Internet Archive’s decision to ignore Robots Exclusion Standard (RES) directives (in robots.txt files on websites) is terrible for the Internet community and users. I had expected a deluge of hate email in response. But I’ve received no negative reactions at all — rather a range of useful questions and comments — perhaps emphasizing the fact that the importance of the RES is widely recognized.

As I did yesterday, I’ll emphasize again here that the Archive has done a lot of good over many years, that it’s been an extremely valuable resource in more ways than I have time to list right now. Nor am I asserting that the Archive itself has evil motives for its decision. However, I strongly feel that their decision allies them with the dark players of the Net, and gives such scumbags comfort and encouragement.

One polite public message that I received was apparently authored by Internet Archive founder Brewster Kahle (since the message came in via my blog, I have not been able to immediately authenticate it, but the IP address seemed reasonable). He noted that the Archive accepts requests via email to have pages excluded.

This is of course useful, but entirely inadequate.

Most obviously, this technique fails miserably at scale. The whole point of the RES is to provide a publicly inspectable, unified and comprehensively defined method to inform other sites (individually, en masse, or in various combinations) of your site access determinations.

The “send an email note to this address” technique just can’t fly at Internet scale, even if we assume that those emails will ever actually be read at any given site. (Remember when “postmaster@” addresses would reliably reach human beings? Yeah, a long, long time ago.)

There’s also been some fascinating discussion regarding the existing legal status of the RES. While it apparently hasn’t been specifically tested in a legal sense here in the USA at least, judges have still been recognizing the importance of RES in various court decisions.

In 2006, Google was sued (“Field vs. Google” — Nevada) for copyright infringement for spidering and caching a website. The court found for Google, noting that the site included a robots.txt file that permitted such access by Google.

The case of Century 21 vs. Zoocasa (2011 — British Columbia) is also illuminating. In this case, the judge found against Zoocasa, noting that they had disregarded robots.txt directives that prohibited their copying content from the Century 21 site.

So it appears that even today, ignoring RES robots.txt files could mean skating on very thin ice from a legal standpoint.

The best course all around would be for the Internet Archive to reverse their decision, and pledge to honor RES directives, as honorable players in the Internet ecosystem are expected to do. It would be a painful shame if the wonderful legacy of the Internet Archive were to be so seriously tarnished going forward by a single (but very serious) bad judgment call.

–Lauren–

A Terrible Decision by the Internet Archive May Lead to Widespread Blocking

UPDATE (23 April 2017):  More Regarding a Terrible Decision by the Internet Archive

– – –

We can stipulate at the outset that the venerable Internet Archive and its associated systems like the Wayback Machine have done a lot of good for many years — for example by providing chronological archives of websites that have chosen to participate in their efforts. But now, it appears that the Internet Archive has joined the dark side of the Internet, by announcing that they will no longer honor the access control requests of any websites.

For any given site, the decision to participate or not with the web scanning systems at the Internet Archive (or associated with any other “spidering” system) is indicated by use of the well established and very broadly affirmed “Robots Exclusion Standard” (RES) — a methodology that uses files named “robots.txt” to inform visiting scanning systems which parts of a given website should or should not be subject to spidering and/or archiving by automated scanners.

RES operates on the honor system. It requests that spidering systems follow its directives, which may be simple or detailed, depending on the situation — with those detailed directives defined comprehensively in the standard itself.
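For readers who haven’t looked at one recently, a robots.txt file is quite simple in form, and Python’s standard library can interpret it the same way a well-behaved spider would. The paths below are hypothetical; “ia_archiver” is the agent token historically associated with Wayback Machine exclusions, used here purely as an example:

```python
# Parse a sample robots.txt with Python's standard library, evaluating it
# the way a well-behaved spider would before fetching any pages.
import urllib.robotparser

sample_robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: ia_archiver
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(sample_robots_txt.splitlines())

# Spiders in general may fetch most pages, but not anything under /private/:
print(rp.can_fetch("SomeBot", "https://example.com/index.html"))  # True
print(rp.can_fetch("SomeBot", "https://example.com/private/x"))   # False

# The agent singled out by name is excluded from the entire site:
print(rp.can_fetch("ia_archiver", "https://example.com/index.html"))  # False
```

The whole mechanism, as noted, depends on the spider actually performing a check like this and honoring the result.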

While RES generally has no force of law, it has enormous legal implications. The existence of RES — that is, a recognized means for public sites to indicate access preferences — has been important for many years to help hold off efforts in various quarters to charge search engines and/or other classes of users for access that is free to everyone else. The straightforward argument that sites already have a way — via the RES — to indicate their access preferences has held a lot of rabid lawyers at bay.

And there are lots of completely legitimate reasons for sites to use RES to control spidering access, especially for (but by no means restricted to) sites with limited resources. These include technical issues (such as load considerations relating to resource-intensive databases and a range of other related situations), legal issues such as court orders, and a long list of other technical and policy concerns that most of us rarely think about, but that can be of existential importance to many sites.

Since adherence to the RES has usually been considered to be voluntary, an argument can be made (and we can pretty safely assume that the Archive’s reasoning falls into this category one way or another) that since “bad” players might choose to ignore the standard, this puts “good” players who abide by the standard at a disadvantage.

But this is a traditional, bogus argument that we hear whenever previously ethical entities feel the urge to start behaving unethically: “Hell, if the bad guys are breaking the law with impunity, why can’t we as well? After all, our motives are much better than theirs!”

Therein are the storied paths of “good intentions” that lead to hell, when the floodgates of such twisted illogic open wide, as a flood of other players decide that they must emulate the Internet Archive’s dismal reasoning to remain competitive.

There’s much more.

While RES is typically viewed as not having legal force today, that could be changed, perhaps with relative ease in many circumstances. There are no obvious First Amendment considerations in play, so it would seem quite feasible to roll “Adherence to properly published RES directives” into existing cybercrime-related site access authorization definitions.

Nor are individual sites entirely helpless against the Internet Archive’s apparent embracing of the dark side in this regard.

Unless the Archive intends to try to go completely into a “ghost” mode, their spidering agents will still be detectable at the http/https protocol levels, and could be blocked (most easily in their entirety) with relatively simple web server configuration directives. If the Archive attempted to cloak their agent names, individual sites could block the Archive by referencing the Archive’s known source IP addresses instead.
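The decision logic sites would apply is straightforward, whether expressed as a couple of lines of web server configuration or in application code. Here’s a sketch in Python — the agent token and network range are placeholders for illustration, not any spider’s actual identifiers:

```python
# Sketch: refuse requests by User-Agent substring or by source network.
# The agent token and CIDR range below are placeholders; a real deployment
# would substitute the spider's actual identifiers.
import ipaddress

BLOCKED_AGENT_TOKENS = {"examplearchivebot"}
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # TEST-NET-3

def should_block_request(user_agent, source_ip):
    """Return True if this request should be refused outright."""
    ua = (user_agent or "").lower()
    if any(token in ua for token in BLOCKED_AGENT_TOKENS):
        return True
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(should_block_request("Mozilla/5.0 ExampleArchiveBot/2.1", "198.51.100.7"))  # True
print(should_block_request("Mozilla/5.0 (regular browser)", "203.0.113.42"))      # True
print(should_block_request("Mozilla/5.0 (regular browser)", "198.51.100.7"))      # False
```

In practice most sites would express this in their web server’s own directives rather than in code, but the effect is the same: agent-name blocks are trivially evaded by cloaking, which is why the fallback is blocking by source address.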

It doesn’t take a lot of imagination to see how all of this could quickly turn into an escalating nightmare of “Whac-A-Mole” and expanding blocks, many of which would likely negatively impact unrelated sites as collateral damage.

Even before the Internet Archive’s decision, this class of access and archiving issues had been smoldering for quite some time. Perhaps the Internet Archive’s pouring of rocket fuel onto those embers may ultimately lead to a legally enforced Robots Exclusion Standard — with both the positive and negative ramifications that would then be involved. There are likely to be other associated legal battles as well.

But in the shorter term at least, the Internet Archive’s decision is likely to leave a lot of innocent sites and innocent users quite badly burned.

–Lauren–

The Google Page That Google Haters Don’t Want You to Know About

UPDATE (May 1, 2019): A Major New Privacy-Positive Move by Google

UPDATE (April 24, 2017):  Quick Tutorial: Deleting Your Data Using Google’s “My Activity”

– – –

There’s a page at Google that dedicated Google Haters don’t like to talk about. In fact, they’d prefer that you didn’t even know that it exists, because it seriously undermines the foundation of their hateful anti-Google fantasies.

A core principle of Google hatred is the set of false memes concerning Google and user data collection. This is frequently encapsulated in a fanciful “You are the product!” slogan, despite the fact that (unlike the dominant ISPs and many other large firms) Google never sells user data to third parties.

But the haters hate the idea that data is collected at all, despite the fact that such data is crucial for Google services to function at the quality levels that we have come to expect from Google.

I was thinking about this again today when I started hearing from users reacting to Google’s announcement of multiple-user support for Google Home, expressing concerns about the collection of more individualized voice data (without which — I would note — you couldn’t differentiate between different users in the first place).

We can stipulate that Google collects a lot of data to make all of this stuff work. But here’s the kicker that the haters don’t want you to think about — Google also gives you enormous control over that data, to a staggering degree that most Google users don’t fully realize.

The Golden Ticket gateway to this goodness is at:

google.com/myactivity

There’s a lot to explore there — be sure to click on both the three vertical dots near the top of the page and on the three horizontal bars near the upper left to see the full range of options available.

This page is a portal to an incredible resource. Not only does it give you the opportunity to see in detail the data that Google has associated with you across the universe of Google products, but also the ability to delete that data (selectively or in its totality), and to determine how much of your data will be collected going forward for the various Google services.

On top of that, there are links over to other data related systems that you can control, such as Takeout for downloading your data from Google, comprehensive ad preferences settings (which you can use to adjust or even fully disable ad personalization), and an array of other goodies, all supported by excellent help pages — a lot of thought and work went into this.

I’m a pragmatist by nature. I worry about organizations that don’t give us control over the data they collect about us — like the government, like those giant ISPs and lots of other firms. And typically, these kinds of entities collect this data even though they don’t actually need it to provide the kinds of services that we want. All too often, they just do it because they can.

On the other hand, I have no problems with Google collecting the kinds of data that provide their advanced services, so long as I can choose when that data is collected, and I can inspect and delete it on demand.

The google.com/myactivity portal provides those abilities and a lot more.

This does imply taking some responsibility for managing your own data. Google gives you the tools to do so — you have nobody but yourself to blame if you refuse to avail yourself of those excellent tools.

Or to put it another way, if you want to use and benefit from 21st century technological magic, you really do need to be willing to learn at least a little bit about how to use the shiny wand that the wizard handed over to you.

Abracadabra!

–Lauren–

Prosecute Burger King for Their Illegal Google Home Attacks in Their Ads

Someone — or more likely a bunch of someones — at Burger King and their advertising agency need to be arrested, tried, and spend some time in shackles and prison cells. They’ve likely been violating state and federal cybercrime laws with their obnoxious ad campaign purposely designed to trigger Google Home devices without the permission of those devices’ owners.

Not only has Burger King admitted that this was their purpose, but they’ve also been gloating about changing their ads to avoid blocks that Google reportedly put in place to try to protect Google Home device owners from being subjected to Burger King’s criminal intrusions.

For example, the federal CFAA (Computer Fraud and Abuse Act) broadly prohibits anyone from accessing a computer without authorization. There’s no doubt that Google Home and its associated Google-based systems are computers, and I know that I didn’t give Burger King permission to access and use my Google Home or my associated Google account. Nor did millions of other users. And it’s obvious that Google didn’t give that permission either. Yet the morons at Burger King and their affiliated advertising asses — in their search for social “buzz” regarding their nauseating fast food products — felt no compunction about literally hijacking the Google Home systems of potentially millions of people, interrupting other activities, and ideally (that is, ideally from their sick standpoint) interfering with people’s home environments on a massive scale.

This isn’t a case of a stray “Hey Google” triggering the devices. This was a targeted, specific attack on users, which Burger King then modified to bypass changes that Google apparently put in place when word of those ads circulated earlier.

Burger King has instantly become the “poster child” for mass, criminal abuse of these devices. And with their lack of consideration for the sanctity of people’s homes, we might assume that they’re already making jokes about trying to find ways to bill burgers to your credit card without your permission as well. For other dark forces watching these events, this idea could be far more than a joke.

While there are some humorous aspects to this situation — like the anti-Burger King changes made on Wikipedia in response to news of these upcoming ads — the overall situation really isn’t funny at all.

In fact, it was a direct and deliberate violation of law. It was accessing and using computers without permission. Whether or not anyone associated with this illicit stunt actually gets prosecuted is a different matter, but I urge the appropriate authorities to seriously explore this possibility, both for the action itself and for the precedent it created for future attacks.

And of course, don’t buy anything from those jerks at Burger King. Ever.

–Lauren–

You Can Make the New Google+ Work Better — If You’re Borg!

Recently, in Google+ and the Notifications Meltdown, I noted the abysmal user experience represented by the new Google+ unified desktop notifications panel — especially for users like me with many G+ followers and high numbers of notifications.

Since then, one observer mentioned to me that opening and closing the notifications panel seemed to load more notifications. I had noticed this myself earlier, but the technique appeared to be unreliable with erratic results, and with large numbers of notifications still being “orphaned” on the useless standalone G+ notifications page.

After a bunch more time wasted on digging into this, I now seem to have a methodology that will (for now at least … maybe) reliably permit users to see all G+ notifications on the desktop notifications panel, in a manner that makes interacting with them far less of a hassle than the standalone notifications page does.

There’s just one catch. You pretty much have to be Borg-like in your precision to make this work. You can just call me “One of One” for the remainder of this post.

Keeping in mind that this is a “How-to” guide, not a “What the hell is going on?” guide, let’s begin your assimilation.

The new notifications panel will typically display up to around 10 G+ notification “tiles” when it’s opened by clicking on the red G+ notification circle. If you interact in any way with any specific tile, G+ now usually considers it as “read” and you frequently can’t see it again unless you go to the even more painful standalone notifications page.

Here’s my full recommended procedure. Wander from this path at your own risk.

Open the panel on your desktop by clicking the red circle with the notifications count inside. Click on the bottom-most tile. That notification will open. Interact with it as you might desire — add comments, delete spam, etc.

Now, assuming that there’s more than one notification, click the up-arrow at the top of the panel to proceed upward to the next notification. You can also go back downward with the down-arrow, but do NOT at this time touch the left-arrow at the top of the panel — you do not want to return to those tiles yet.

Continue clicking upward through the notifications using that up-arrow — the notifications will open as you proceed. This can be done quite quickly if you don’t need to add comments of your own or otherwise manage the thread — e.g., you can plow rapidly through +1 notifications.

When you reach the last (that is, the top) notification on the current panel, the up-arrow will no longer be available to click.

NOW you can use the left arrow at the top of the panel to return to the notification tiles view. When you’re back on that view, be sure that you under NO circumstances click the “X” on any of those tiles, and do NOT click on the “hamburger” icon (three horizontal lines) that removes all of the tiles. If you interact with either of those icons, whether at this stage or before working your way up through the notifications, you stand a high probability of creating “orphan” notifications that will collect forever on the standalone notifications page rather than ever being presented by the panel!

So now you’re sitting on the tile view. Click on an empty area of the G+ window OUTSIDE the panel. The panel should close.

Assuming that there are more notifications pending, click again on the red circle. The panel will reopen, and if you’ve been a good Borg you’ll see the panel repopulate with a new batch of notifications.

This exact process can be repeated (again, for the time being at least) until all of your notifications have been dealt with. If you’ve done this all precisely right, you’ll likely end up with zero unread notifications on the standalone notifications page.
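The click sequence above amounts to a simple batch-draining loop: load a batch of tiles, process them bottom-to-top, return to the tile view, close and reopen the panel, repeat until nothing is pending. Here’s a minimal Python simulation of that logic — the ten-tile batch size, the function name, and the data model are my own assumptions for illustration, not anything from Google’s actual code:

```python
# Hypothetical simulation of the G+ notifications-panel procedure described
# above. The batch size and names are assumptions, not Google's internals.

from collections import deque

PANEL_BATCH = 10  # the panel typically shows up to ~10 tiles per load (assumed)

def drain_notifications(pending):
    """Process every pending notification batch by batch, bottom-to-top,
    mirroring the click sequence in the text. Returns the order in which
    notifications were opened; nothing is ever orphaned."""
    queue = deque(pending)
    read_order = []
    while queue:
        # Reopening the panel (clicking the red circle) loads the next batch.
        batch = [queue.popleft() for _ in range(min(PANEL_BATCH, len(queue)))]
        # Start at the bottom-most tile, then use the up-arrow to walk upward.
        for note in reversed(batch):
            read_order.append(note)  # notification opened and handled
        # Left-arrow back to the tile view, then close the panel;
        # the next loop iteration is the next click on the red circle.
    return read_order

# Example: 23 pending notifications are all read, none left behind.
order = drain_notifications(list(range(23)))
assert sorted(order) == list(range(23))
```

The key property the procedure (and this sketch) preserves is that every pending notification eventually appears in some batch — which is exactly what clicking “X” or the hamburger icon mid-process appears to break.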

That’s all there is to it! A user interface technique that any well-trained Borg can master in no time at all! But at least it’s making my G+ notifications management relatively manageable again.

Yep, resistance IS futile.

–Lauren–

Collecting Examples of YouTube Hate Speech Videos and Channels

I am collecting examples of hate speech videos on YouTube, and of YouTube channels that contain hate speech. Please use the form at:

https://vortex.com/yt-speech

to report examples of specific YouTube hate speech videos and/or the specific YouTube channels that have uploaded those videos. For YouTube channels that are predominantly filled with hate speech videos, the channel URL alone will suffice (rather than individual video URLs) and is of particular interest.

For the purposes of this study, “hate speech” is defined to be materials that a reasonable observer would feel are in violation of Google’s YouTube Community Standards Terms of Use here:

https://support.google.com/youtube/answer/2801939

For now, please only report materials that are in English, and that can be accessed publicly. All inputs on this form may be released publicly after verification as part of this project, with the exception of your (optional) name and email address, which will be kept private and will not be released or used for any purposes beyond this study.

Thank you for participating in this study to better understand the nature and scope of hate speech on YouTube.

–Lauren–

“Google Needs an Ombudsman” Posts from 2009 — Still Relevant Today

Originally posted February 27 and 28, 2009:
Google’s “Failure to Communicate” vs. User Support
and
Google Ombudsman (Part II)

Greetings. There’s been a lot of buzz around the Net recently about Google Gmail outages, and this has brought back to the surface a longstanding concern about the public’s ability (or lack thereof) to communicate effectively with Google itself about problems and issues with Google services.

I’ll note right here that Google usually does provide a high level of customer support for users of their paid services. And I would assert that there’s nothing wrong with Google providing differing support levels to paying customers vs. users of their many free services.

But without a doubt, far and away, the biggest Google-related issue that people bring to me is a perceived inability to effectively communicate with Google when they have problems with free Google services — which people do now depend on in many ways, of course. These problems can range from minor to quite serious, sometimes with significant ongoing impacts, and the usual complaint is that they get no response from submissions to reporting forms or e-mailed concerns.

On numerous occasions, when people bring particular Google problems to my attention, I have passed along (when I deemed it appropriate) some of these specific problems to my own contacts at Google, and they’ve always been dealt with promptly from that point forward. But this procedure can’t help everyone with such Google-related issues, of course.

I have long advocated (both privately to Google and publicly) that Google establish some sort of public Ombudsman (likely a relatively small team) devoted specifically to help interface with the public regarding user problems — a role that requires a skillful combination of technical ability, public relations, and “triage” skills. Most large firms that interact continually with the public have teams of this sort in one form or another, often under the label “Ombudsman” (or sometimes “Office of the President”).

The unofficial response I’ve gotten from Google regarding this concept has been an expression of understanding but a definite concern about how such an effort would scale given Google’s user base.

I would never claim that doing this properly is a trivial task — far from it. But given both the horizontal and vertical scope of Google services, and the extent to which vast numbers of persons now depend on these services in their everyday personal and business lives, I would again urge Google to consider moving forward along these lines.

–Lauren–

 – – –

Greetings. In Google’s “Failure to Communicate” vs. User Support, I renewed my long-standing call for an Ombudsman “team” or equivalent communications mechanism for Google.

Subsequent reactions suggest that some readers may not be fully familiar with the Ombudsman concept, at least in the way that I use the term.

An Ombudsman is not the same thing as “customer support” per se. I am not advocating a vast new Google customer service apparatus for users of their free services. Ombudsmen (Ombudswomen? Let’s skip the politically correct linguistics for now …) aren’t who you go to when search results are slow or you can’t log in to Gmail for two hours. These sorts of generally purely technical issues are, the vast majority of the time, suitable for handling within the normal context of existing online reporting forms and the like. (I inadvertently may have caused some confusion on this point by introducing my previous piece with a mention of Gmail problems — but that was only meant in the sense that those problems triggered broader discussions, not as a specific example of an issue appropriate for escalation to an Ombudsman.)

But there’s a whole different class of largely non-technical (or more accurately, mixed-modality) issues where Google users appear to routinely feel frustrated and impotent to deal with what they feel are very disturbing situations.

Many of these relate to perceived defamations, demeaning falsehoods, systemic attacks, and other similar concerns that some persons feel are present in various Google service data (search results, Google Groups postings, Google-hosted blog postings, YouTube, and so on).

By the time some of these people write to me, they’re apparently in tears over the situations, wondering if they should spend their paltry savings on lawyers, and generally very distraught. Their biggest immediate complaints? They don’t know who to contact at Google, or their attempts at contact via online forms and e-mail have yielded nothing but automatic replies (if that).

And herein resides the crux of the matter. I am a very public advocate of open information, and a strong opponent of censorship. I won’t litter this posting with all of the relevant links. I have however expressed concerns about the tendency of false information to reside forever in search results without mechanisms for counterbalancing arguments to be seen. In 2007 I discussed this in Search Engine Dispute Notifications: Request For Comments and subsequent postings. This is an exceedingly complex topic, with no simple solutions.

In general, my experience has been that many or most of the concerns that people bring forth in these regards are, all aspects of the situation considered fairly, not necessarily suitable for the kinds of relief that the persons involved are seeking. That is, the level of harm claimed often seems insufficient, vs. free speech and the associated rights of other parties.

However, there are dramatic, and not terribly infrequent, exceptions that appear significantly egregious and serious. And when these folks can’t get a substantive reply from Google (and can’t afford a lawyer to go after the parties who actually have posted or otherwise control the information that Google is indexing or hosting), these aggrieved persons tend to be up you-know-what creek.

If you have a DMCA concern, Google will normally react to it promptly. But when the DMCA is not involved, trying to get a real response from Google about the sorts of potentially serious concerns discussed above — unless you have contacts that most people don’t have — can often seem impossible.

Google generally takes the position — a position that I basically support — that since they don’t create most content, the responsibility for the content is with the actual creator, the hosting Web sites, and so on. But Google makes their living by providing global access to those materials, and cannot be reasonably viewed as being wholly separated from associated impacts and concerns.

At the very least, even if requests for deletions, alterations, or other relief are unconvincing or rejected for any number of quite valid reasons, the persons who bring forth these concerns should not be effectively ignored. They deserve to at least get a substantive response, some sort of hearing, more than a form-letter automated reply about why their particular plea is being rejected. This principle remains true irrespective of the ultimate merits or disposition of the particular case.

And this is where the role of a Google Ombudsman could be so important — not only in terms of appropriately responding to these sorts of cases, but also to help head off the possibility of blowback via draconian regulatory or legislative actions that might cut deeply into Google’s (and their competitors’) business models — a nightmare scenario that I for one don’t want to see occur.

But I do fear that unless Google moves assertively toward providing better communications channels with their users for significant issues — beyond form responses and postings in the official Google blogs — there are forces that would just love to see Google seriously damaged, and they will find ways to leverage these sorts of issues toward that end. Evidence of this sort of positioning by some well-heeled Google haters is already visible.

Ombudsmen are all about communication. For any large firm that is constantly dealing with the public, especially one operating on the scope of Google, it’s almost impossible to have too much communication when it comes to important problems and related issues. On the other hand, too little communication, or the sense that concerned persons are being ignored, can be a penny-wise but pound-foolish course, with negative consequences that could have been avoided — even if not easily — with a degree of serious effort.

–Lauren–

The YouTube Racists Fight Back!

Somewhat earlier today I received one of those “Hey Lauren, you gotta look at this on YouTube!” emails. Prior to my recently writing What Google Needs to Do About Hate Speech, such a message was as likely to point at a particularly cute cat video or a lost episode of some 60s television series as anything else. Since that posting, however, these alerts are far more likely to direct me toward much more controversial materials.

Such was the case today. Because the YouTube racists, antisemites, and their various assorted lowlife minions are at war. They’re at war with YouTube, they’re at war with the Wall Street Journal. They’re ranting and raving and chalking up view counts on their YouTube live streams and uploads today that ordinary YouTube users would be thankful to accumulate over a number of years.

After spending some time this afternoon lifting up rotting logs to peer at the maggots infesting the seamy side of YouTube where these folks reside, here’s what’s apparently going on, as best as I can understand it right now.

The sordid gang of misfits and losers who create and support the worst of YouTube content — everybody from vile PewDiePie supporters to hardcore Nazis — is angry. They’re angry that anyone would dare to threaten the YouTube monetization streams that help support their continuing rivers of hate speech. Any moves by Google or outside entities that appear to disrupt their income stream, they characterize as efforts to “destroy the YouTube platform.”

Today’s ongoing tirade appears to have been triggered by claims that the Wall Street Journal “faked” the juxtaposition of specific major brand ads with racist videos, as part of the ongoing controversies regarding YouTube advertiser controls. It seems that the creators of these videos are claiming that the videos in question were not being monetized during the period under discussion, or otherwise couldn’t have appeared in the manner claimed by the WSJ.

This gets into a maze of twisty little passages very quickly, because when you start digging down into these ranting videos today, you quickly see how they are intertwined with gamer subcultures, right-wing “fake news” claims, pro-Trump propagandists, and other dark cults — as if the outright racism and antisemitism weren’t enough.

And this is where the true irony breaks through like a flashing neon sign. These sickos aren’t at all apologetic for their hate speech videos on YouTube, they’re simply upset when Google isn’t helping to fund them.

I’ve been very clear about this. I strongly feel that these videos should not be on YouTube at all, whether monetized or not.

For example, one of the videos being discussed today in this context involves the song “Alabama Nig—.” If you fill in the dashes and search for the result on YouTube, you’ll get many thousands of hits, all of them racist, none of which should be on YouTube in the first place.

Which all suggests that the arguments about major company ads on YouTube hate speech videos, and more broadly the issues of YouTube hate speech monetization, are indeed really just digging around the edges of the problem.

Hate speech has no place on YouTube. Period. Google’s Terms of Service for YouTube explicitly forbid racially based, religiously based, and other forms of this garbage.

The sooner that Google seriously enforces their own YouTube terms, the sooner that we can start cleaning out this hateful rot. We’ve permitted this disease to grow for years on the Internet thanks to our “anything goes” attitude, contributing to a horrific rise in hate throughout our country, reaching all the way to the current occupant of the Oval Office and his cronies.

This must be the beginning of the end for hate speech on YouTube.

–Lauren–

My Brief Radio Discussion of the GOP’s Horrendous Internet Privacy Invasion Law

An important issue that I’ve frequently discussed here and in other venues is the manner in which Internet and other media “filter bubbles” tend to cause us to only expose ourselves to information that we already agree with — whether it’s accurate or not.

That’s one reason why I value my continuing frequent invitations to discuss technology and tech policy topics on the extremely popular late night “Coast to Coast AM” national radio show. Talk radio audiences tend to be very conservative, and the willingness of the show to repeatedly share their air with someone like me (who doesn’t fit the typical talk show mold and who can offer a contrasting point of view) is both notable and praiseworthy.

George Noory is in my opinion the best host on radio — he makes every interview a pleasure for his guests. And while the show has been known primarily over the years for discussions of — shall we say — “speculative” topics, it also has become an important venue for serious scientists and technologists to discuss issues of importance and interest (see: Coast to Coast AM Is No Wack Job).

Near the top of the show last night I chatted with George for a few minutes about the horribly privacy-invasive new GOP legislation that permits ISPs to sell customers’ private information (including web browsing history and much more) without prior consent. This morning I’ve been receiving requests for copies of that interview, so (with the permission of the show for posting short excerpts) it’s provided below.

Here is an audio clip of the interview for download. It’s under four minutes long.

As I told George, I’m angry about this incredibly privacy-invasive legislation. If you are too, I urge you to inform the GOP politicos who pushed this nightmare law — to borrow a phrase from the 1976 film “Network” — that you’re mad as hell and you’re not going to take this anymore!

–Lauren–

Google+ and the Notifications Meltdown

I’ve been getting emails recently from correspondents complaining that I have not responded to their comments/postings on Google+. I’ve just figured out why.

The new (Google unified) Google+ desktop notification panel is losing G+ notifications left and right. For a while I thought that all of the extra notifications I was seeing when I checked on mobile occasionally were dupes — but it turns out that most of them are notifications that were never presented to me on desktop, in vast numbers.

Right now I can find (on the essentially unusable G+ desktop standalone notifications page, which requires manually clicking to a new page for each post!) about 30 recent G+ notifications that were never presented to me in the desktop notification panel. I’m not even sure how to deal with them now in a practical manner.

This is unacceptable — you have one job to do, notifications panel, and that’s to accurately show me my damned notifications!

Also, a high percentage of the time when I click on actions in the new desktop notification panel pop-up boxes (e.g. to reply), the panel blows away and I’m thrown to a new G+ page tab.

Does anyone at G bother to stress test this stuff anymore in the context of users with many followers (I have nearly 400K) who get lots of notifications? Apparently not.

Another new Google user interface triumph of form over function!

–Lauren–

How YouTube’s User Interface Helps Perpetuate Hate Speech

UPDATE (6 May 2017): The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

Computer User Interface (UI) design is both an art and a science, and can have effects on users that go far beyond the interfaces themselves. As I’ve discussed previously, e.g. in The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes, user interfaces can unintentionally act as a form of discrimination against older users or other users with special needs.

But another user interface question arises in conjunction with the current debate about hate speech on Google’s YouTube (for background, please see What Google Needs to Do About YouTube Hate Speech and How Google’s YouTube Spreads Hate).

Specifically, can user interface design unintentionally help to spread and perpetuate hate speech? The answer may be an extremely disconcerting affirmative.

A key reason why I suspect that this is indeed the case, is the large numbers of YouTube users who have told me that they didn’t even realize that they had the ability to report hate speech to YouTube/Google. And when I’ve suggested that they do so, they often reply that they don’t see any obvious way to make such a report.

Over the years it’s become more and more popular to “hide” various UI elements in menus and/or behind increasingly obscure symbols and icons. And one key problem with this approach is obvious when you think about it: If a user doesn’t even know that an option exists, can we really expect them to play “UI scavenger hunt” in an attempt to find such an option? Even more to the point, what if it’s an option that you really need to see in order to even realize that the possibility exists — for example, of reporting a YouTube hate speech video or channel?

While YouTube suffers from this problem today, that wasn’t always the case. Here’s an old YouTube “watch page” desktop UI from years ago:

An Old YouTube User Interface

Not only is there a flag icon present on the main interface (rather than the option being buried in a “More” menu and/or under generic vertical dots or horizontal lines), but the word “Flag” is even present on the main interface to serve as a direct signal to users that flagging videos is indeed an available option!

On the current YouTube desktop UI, you have to know to go digging under a “More” menu to find a similar “Report” option. And if you didn’t know that a Report option even existed, why would you necessarily go searching around for it in the first place? The only other YouTube page location where a user might consider reporting a hate speech video is through the small generic “Feedback” link at the very bottom of the watch page — and that can be way, way down there if the video has a lot of comments.

To be effective against hate speech, a flagging/reporting option needs to be present in an obvious location on the main UI, where users will see it and know that it exists. If it’s buried or hidden in any manner, vast numbers of users won’t even realize that they have the power to report hate speech videos to Google at all. (I’ve discussed the disappointing degree to which Google actually enforces the hate speech prohibitions in their Terms of Service in the posts linked earlier in this text.)

You don’t need to be a UI expert to suspect one reason why Google over time has de-emphasized obvious flag/report links on the main interface, instead relegating them to a generic “More” menu. The easier the option is to see, the more people will tend to use it, both appropriately and inappropriately — and really dealing with those abuse reports in a serious manner can be expensive in terms of code and employees.

But that’s no longer an acceptable excuse — if it ever was. Google is losing major advertisers in droves, who are no longer willing to have their ads appear next to hate speech videos that shouldn’t even be monetized, and in many cases shouldn’t even be available on YouTube at all under the existing YouTube/Google Terms of Service.

For the sake of its users and of the company itself, Google must get a handle on this situation as quickly as possible. Making sure that users are actually encouraged to report hate speech and other inappropriate videos, and that Google treats those reports appropriately and with a no-nonsense application of their own Terms of Service, are absolutely paramount.

–Lauren–

What Google Needs to Do About YouTube Hate Speech

In the four days since I wrote How Google’s YouTube Spreads Hate, where I discussed both how much I enjoyed and respected YouTube, and how unacceptable their handling of hate speech has become, a boycott by advertisers of YouTube and Google ad networks has been spreading rapidly, with some of the biggest advertisers on the planet pulling their ads over concerns about being associated with videos containing hate speech, extremist, or related content.

It’s turned into a big news story around the globe, and has certainly gotten Google’s attention.

Google has announced some changes and apparently more are in the pipeline, so far relating mostly to making it easier for advertisers to avoid having their ads appear with those sorts of content.

But let’s be very clear about this. Most of that content, much of which is on long-established YouTube channels sometimes with vast numbers of views, shouldn’t be permitted to monetize at all. And in many cases, shouldn’t be permitted on YouTube at all (by the way, it’s a common ploy for YT uploaders to ask for support via third-party sites as a mechanism to evade YT monetization disablement).

The YouTube page regarding hate speech is utterly explicit:

We encourage free speech and try to defend your right to express unpopular points of view, but we don’t permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

race or ethnic origin
religion
disability
gender
age
veteran status
sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their ethnicity.

Seems pretty clear. But in fact, YouTube is awash with racist, antisemitic, and a vast array of other videos that without question violate these terms, many on established, cross-linked YouTube channels containing nothing but such materials.

How easy is it to stumble into such garbage?

Well, for me here in the USA, the top organic (non-ad) YouTube search result for “blacks” is a video showing a car being wrecked with the title: “How Savage Are Blacks In America & Why Is Everyone Afraid To Discuss It?” — including the description “ban niggaz not guns” — and also featuring a plea to donate to a racist external site.

This video has been on YouTube for over a year and has accumulated over 1.5 million views. Hardly hiding.

While it can certainly be legitimately argued that there are many gray areas when it comes to speech, on YouTube there are seemingly endless lists of videos that are trivially located and clearly racist, antisemitic, or in violation of YouTube hate speech terms in other ways.

And YouTube helps you find even more of them! On the right-hand suggestion panel right now for the video I mentioned above, there’s a whole list of additional racist videos, including titles like: “Why Are So Many Of U Broke, Black, B!tches Begging If They Are So Strong & Independent?” — and much worse.

Google’s proper course is clear. They must strongly enforce their own Terms of Service. It’s not enough to provide control over ads, or even ending those ads entirely. Videos and channels that are in obvious violation of the YT TOS must be removed.

We have crossed the Rubicon in terms of the Internet’s impact on society, and laissez-faire attitudes toward hate speech content are now intolerable. The world is becoming saturated in escalating hate speech and related attacks, and even tacit acceptance of these horrors — whether spread on YouTube or by the Trump White House — must be roundly and soundly condemned.

Google is a great company with great people. Now they need to grasp the nettle and do the right thing.

–Lauren–

How Google’s YouTube Spreads Hate

I am one of YouTube’s biggest fans. Seriously. It’s painful for me to imagine a world now without YouTube, without the ability to both purposely find and serendipitously discover all manner of contemporary and historical video gems. I subscribe to YouTube Red because I want to help support great YT creators (it’s an excellent value, by the way).

YouTube is perhaps the quintessential example of a nexus where virtually the entire gamut of Internet policy issues meet and mix — content creation, copyrights, fair use, government censorship, and a vast number more are in play.

The scale and technology of YouTube are nothing short of staggering, and the work required to keep it all running, in terms of both infrastructure and evolving policies, is immense. When I was consulting to Google several years ago, I saw much of this firsthand, as well as having the opportunity to meet many of the excellent people behind the scenes.

Does YouTube have problems? Of course. It would be impossible for an operation of such scope to exist without problems. What we really care about in the long run is how those problems are dealt with.

There is a continual tension between entities claiming copyrights on material and YouTube uploaders. I’ve discussed this in considerable detail in the past, so I won’t get into it again here, other than to note that it’s very easy for relatively minor claimed violations (whether actually accurate or not) to result in ordinary YouTube users having their YouTube accounts forcibly closed, without effective recourse in many cases. And while YouTube has indeed improved their appeal mechanisms in this regard over time, they still have a long way to go in terms of overall fairness.

But a far more serious problem area with YouTube has been in the news repeatedly lately — the extent to which hate speech has permeated the YouTube ecosystem, even though hate speech on YouTube is explicitly banned by Google in the terms of use on this YouTube help page.

Before proceeding, let’s set down some hopefully useful parameters to help explain what I’m talking about here.

One issue needs to be clarified at the outset: The First Amendment to the United States Constitution does not require that YouTube or any other business provide a platform for the dissemination, monetization, or spread of any particular form of speech. The First Amendment applies only to governmental restrictions on speech, which are the true meaning of the term censorship. This is why concepts such as the horrific “Right To Be Forgotten” are utterly unacceptable, as they impose governmentally enforced third-party censorship onto search results.

It’s also often suggested that it’s impossible to really identify hate speech because — some observers argue — everyone’s idea of hate speech is different. Yet from the standpoint of civilized society, we can see that this argument is largely a subterfuge.

For while there are indeed gray areas of speech where even attempting to assign such a label would be foolhardy, there are also areas of discourse where not assigning the hate speech label would require inane and utterly unjustifiable contortions of reality.

Videos from terrorist groups explicitly promoting violence are an obvious example. These are universally viewed as hate speech by all civilized people, and to their credit the major platforms like YouTube, Facebook, et al. have been increasingly leveraging advanced technology to block them, even at the enormous “whack-a-mole” scales at which they’re uploaded.

But now we move on to other varieties of hate speech that have contaminated YouTube and other platforms. And while they’re not usually as explicitly violent as terrorist videos, they’re likely even more destructive to society in the long run, with their pervasive nature now even penetrating to the depths of the White House.

Before the rise of video and social media platforms on the Internet, we all knew that vile racists and antisemites existed, but without effective means to organize they tended to be restricted to their caves in Idaho or their Klan clubhouses in the Deep South. With only mimeograph and copy machines available to perpetuate their postal-distributed raving-infested newsletters, their influence was mercifully limited.

The Internet changed all that, by creating wholly new communications channels that permitted these depraved personalities to coordinate and disseminate in ways that are orders of magnitude more effective, and so vastly increasing the dangers that they represent to decent human beings.

Books could be written about the entire scope of this contamination, but this post is about YouTube’s role, so let’s return to that now.

In recent weeks the global media spotlight has repeatedly shined on Google’s direct financial involvement with established hate speech channels on YouTube.

First came the PewDiePie controversy. As YouTube’s most-subscribed star, his continuing dabbling in antisemitic videos — which he insists are just “jokes” even as his Hitler-worship continues — exposed YouTube’s intertwining with such behavior to an extent that Google found itself in a significant public relations mess. This forced Google to take some limited enforcement actions against his YouTube channel. Yet the channel is still up on YouTube. And still monetizing.

Google is in something of a bind here. Having created this jerk, who now represents a significant income stream to himself and the company, it would be difficult to publicly admit that his style of hate is still exceedingly dangerous, as it helps to normalize such sickening concepts. This is true even if we accept for the sake of the argument that he actually means it in a purely “joking” way (I don’t personally believe that this is actually the case, however). For historical precedent, one need only look at how the antisemitic “jokes” in 1930s Germany became a springboard to global horror.

But let’s face it, Google really doesn’t want to give up that income stream by completely demonetizing PewDiePie or removing his channels outright, nor do they want to trigger his army of obscene and juvenile moronic trolls and a possible backlash against YouTube or Google more broadly.

Yet from an ethical standpoint these are precisely the sorts of actions that Google should be taking, since — as I mentioned above — “ordinary” YouTube users routinely can lose their monetization privileges — or be thrown off of YouTube completely — for even relatively minor accused violations of the YouTube or Google Terms of Service.

There’s worse, of course. If we term PewDiePie’s trash as relatively “soft” hate speech, we then must look to the even more serious hate speech that also consumes significant portions of YouTube.

I’m not going to give any of these fiends any “link juice” by naming them here. But it’s trivial to find nearly limitless arrays of horrible hate speech videos on YouTube under the names of both major and minor figures in the historical and contemporary racist/antisemitic/alt-right movements.

A truly disturbing aspect is that once you find your way into this depraved area of YouTube, you discover that many of these videos are fully monetized, meaning that Google is actually helping to fund this evil — and is profiting from it.

Perhaps equally awful, if you hit one of these videos’ watch pages, YouTube’s highly capable suggestion engine will offer you a continuous recommended stream of similar hate videos over on the right-hand side of the page — even helpfully surfacing additional hate speech channels for your enjoyment. I assume that if you watched enough of these, the suggestion panels on the YouTube home page would also feature these videos for you.

Google’s involvement with such YouTube channels became significant news over the last couple of weeks, as major entities in the United Kingdom angrily pulled their advertising after finding it featured on the channels of these depraved hatemongers. Google quickly announced that they’d provide advertisers with more controls to help avoid this in the future, but this implicitly suggests that Google doesn’t plan actions against the channels themselves, and Google’s “we don’t always get it right” excuse is wearing very, very thin given the seriousness of the situation.

Even if we were, completely inappropriately, to consider such hate speech to fall under the umbrella of acceptable speech, what we see on YouTube today in this context is not merely the provision of a “simple” platform for hate speech — it’s the provision of financial resources to hate speech organizations, and direct help in spreading their messages of hate.

I explicitly assume that this has not been Google’s intention per se. Google has tried to take a “hands off” attitude toward “judging” YouTube videos as much as possible. But the massive rise in hate-based speech and attacks around the world, reaching (at least tacitly) to the highest levels of the U.S. federal government under the Trump administration, is a clear and decisive signal that this is no longer a viable course for an ethical and great company like Google.

It’s time for Google to extricate YouTube from its role as a partner in hate. That this won’t come without significant pain and costs is a given.

But it’s absolutely the correct path for Google to take — and we expect no less from Google.

–Lauren–

Google and Older Users

Alphabet/Google needs at least one employee dedicated to vetting their products on a continuing basis for usability by older users — an important and rapidly growing demographic of users who are increasingly dependent on Google services in their daily lives.

I’m not talking here about accessibility in general; I’m talking about someone whose job is specifically to make sure that Google’s services don’t leave older users behind due to user interface and/or other associated issues. Otherwise, Google is essentially behaving in a discriminatory manner, and the last thing that I or they should want to see is the government stepping in (via the ADA or other routes) to mandate changes.

–Lauren–