My Brief Radio Discussion of the GOP’s Horrendous Internet Privacy Invasion Law

An important issue that I’ve frequently discussed here and in other venues is the manner in which Internet and other media “filter bubbles” tend to cause us to only expose ourselves to information that we already agree with — whether it’s accurate or not.

That’s one reason why I value my continuing frequent invitations to discuss technology and tech policy topics on the extremely popular late night “Coast to Coast AM” national radio show. Talk radio audiences tend to be very conservative, and the willingness of the show to repeatedly share their air with someone like me (who doesn’t fit the typical talk show mold and who can offer a contrasting point of view) is both notable and praiseworthy.

George Noory is in my opinion the best host on radio — he makes every interview a pleasure for his guests. And while the show has been known primarily over the years for discussions of — shall we say — “speculative” topics, it also has become an important venue for serious scientists and technologists to discuss issues of importance and interest (see: Coast to Coast AM Is No Wack Job).

Near the top of the show last night I chatted with George for a few minutes about the horribly privacy-invasive new GOP legislation that permits ISPs to sell customers’ private information (including web browsing history and much more) without prior consent. This morning I’ve been receiving requests for copies of that interview, so (with the permission of the show for posting short excerpts) it’s provided below.

Here is an audio clip of the interview for download. It’s under four minutes long. Or you can play it here:

[Embedded audio player]

As I told George, I’m angry about this incredibly privacy-invasive legislation. If you are too, I urge you to inform the GOP politicos who pushed this nightmare law — to borrow a phrase from the 1976 film “Network” — that you’re mad as hell and you’re not going to take this anymore!

–Lauren–

Google+ and the Notifications Meltdown

I’ve been getting emails recently from correspondents complaining that I have not responded to their comments/postings on Google+. I’ve just figured out why.

The new (Google-unified) Google+ desktop notification panel is losing G+ notifications left and right. For a while I thought that all of the extra notifications I was seeing when I occasionally checked on mobile were dupes — but it turns out that most of them are notifications that were never presented to me on desktop, in vast numbers.

Right now I can find (on the essentially unusable G+ desktop standalone notifications page, which requires manually clicking to a new page for each post!) about 30 recent G+ notifications that were never presented to me in the desktop notification panel. I’m not even sure how to deal with them now in a practical manner.

This is unacceptable — you have one job to do, notifications panel, and that’s to accurately show me my damned notifications!

Also, a high percentage of the time when I click on actions in the new desktop notification panel pop-up boxes (e.g. to reply), the panel blows away and I’m thrown to a new G+ page tab.

Does anyone at G bother to stress test this stuff any more in the context of users with many followers (I have nearly 400K) who get lots of notifications? Apparently not.

Another new Google user interface triumph of form over function!

–Lauren–

How YouTube’s User Interface Helps Perpetuate Hate Speech

UPDATE (6 May 2017): The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

Computer User Interface (UI) design is both an art and a science, and can have effects on users that go far beyond the interfaces themselves. As I’ve discussed previously, e.g. in The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes, user interfaces can unintentionally act as a form of discrimination against older users or other users with special needs.

But another user interface question arises in conjunction with the current debate about hate speech on Google’s YouTube (for background, please see What Google Needs to Do About YouTube Hate Speech and How Google’s YouTube Spreads Hate).

Specifically, can user interface design unintentionally help to spread and perpetuate hate speech? The answer may be an extremely disconcerting affirmative.

A key reason why I suspect that this is indeed the case is the large number of YouTube users who have told me that they didn’t even realize that they had the ability to report hate speech to YouTube/Google. And when I’ve suggested that they do so, they often reply that they don’t see any obvious way to make such a report.

Over the years it’s become more and more popular to “hide” various UI elements in menus and/or behind increasingly obscure symbols and icons. And one key problem with this approach is obvious when you think about it: If a user doesn’t even know that an option exists, can we really expect them to play “UI scavenger hunt” in an attempt to find such an option? Even more to the point, what if it’s an option that you really need to see in order to even realize that the possibility exists — for example, of reporting a YouTube hate speech video or channel?

While YouTube suffers from this problem today, that wasn’t always the case. Here’s an old YouTube “watch page” desktop UI from years ago:

An Old YouTube User Interface

Not only is there a flag icon present on the main interface (rather than having the option buried in a “More” menu and/or under generic vertical dots or horizontal lines), but the word “Flag” is even present on the main interface to serve as a direct signal to users that flagging videos is indeed an available option!

On the current YouTube desktop UI, you have to know to go digging under a “More” menu to find a similar “Report” option. And if you didn’t know that a Report option even existed, why would you necessarily go searching around for it in the first place? The only other YouTube page location where a user might consider reporting a hate speech video is through the small generic “Feedback” link at the very bottom of the watch page — and that can be way, way down there if the video has a lot of comments.

To be effective against hate speech, a flagging/reporting option needs to be present in an obvious location on the main UI, where users will see it and know that it exists. If it’s buried or hidden in any manner, vast numbers of users won’t even realize that they have the power to report hate speech videos to Google at all (I’ve discussed the disappointing degree to which Google actually enforces their hate speech prohibitions in their Terms of Service in the posts linked earlier in this text).

You don’t need to be a UI expert to suspect one reason why Google over time has de-emphasized obvious flag/report links on the main interface, instead relegating them to a generic “More” menu. The easier the option is to see, the more people will tend to use it, both appropriately and inappropriately — and really dealing with those abuse reports in a serious manner can be expensive in terms of code and employees.

But that’s no longer an acceptable excuse — if it ever was. Google is losing major advertisers in droves, who are no longer willing to have their ads appear next to hate speech videos that shouldn’t even be monetized, and in many cases shouldn’t even be available on YouTube at all under the existing YouTube/Google Terms of Service.

For the sake of its users and of the company itself, Google must get a handle on this situation as quickly as possible. Making sure that users are actually encouraged to report hate speech and other inappropriate videos, and that Google treats those reports appropriately and with a no-nonsense application of their own Terms of Service, is absolutely paramount.

–Lauren–

What Google Needs to Do About YouTube Hate Speech

In the four days since I wrote How Google’s YouTube Spreads Hate, where I discussed both how much I enjoy and respect YouTube, and how unacceptable their handling of hate speech has become, a boycott of YouTube and Google ad networks by advertisers has been spreading rapidly, with some of the biggest advertisers on the planet pulling their ads over concerns about being associated with videos containing hate speech, extremist material, or related content.

It’s turned into a big news story around the globe, and has certainly gotten Google’s attention.

Google has announced some changes, and apparently more are in the pipeline, so far relating mostly to making it easier for advertisers to avoid having their ads appear alongside such content.

But let’s be very clear about this. Most of that content, much of which is on long-established YouTube channels sometimes with vast numbers of views, shouldn’t be permitted to monetize at all. And in many cases, shouldn’t be permitted on YouTube at all (by the way, it’s a common ploy for YT uploaders to ask for support via third-party sites as a mechanism to evade YT monetization disablement).

The YouTube page regarding hate speech is utterly explicit:

We encourage free speech and try to defend your right to express unpopular points of view, but we don’t permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

race or ethnic origin
religion
disability
gender
age
veteran status
sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their ethnicity.

Seems pretty clear. But in fact, YouTube is awash with racist, antisemitic, and a vast array of other videos that without question violate these terms, many on established, cross-linked YouTube channels containing nothing but such materials.

How easy is it to stumble into such garbage?

Well, for me here in the USA, the top organic (non-ad) YouTube search result for “blacks” is a video showing a car being wrecked with the title: “How Savage Are Blacks In America & Why Is Everyone Afraid To Discuss It?” — including the description “ban niggaz not guns” — and also featuring a plea to donate to a racist external site.

This video has been on YouTube for over a year and has accumulated over 1.5 million views. Hardly hiding.

While it can certainly be legitimately argued that there are many gray areas when it comes to speech, on YouTube there are seemingly endless lists of videos that are trivially located and clearly racist, antisemitic, or in violation of YouTube hate speech terms in other ways.

And YouTube helps you find even more of them! On the right-hand suggestion panel right now for the video I mentioned above, there’s a whole list of additional racist videos, including titles like: “Why Are So Many Of U Broke, Black, B!tches Begging If They Are So Strong & Independent?” — and much worse.

Google’s proper course is clear. They must strongly enforce their own Terms of Service. It’s not enough to provide control over ads, or even ending those ads entirely. Videos and channels that are in obvious violation of the YT TOS must be removed.

We have crossed the Rubicon in terms of the Internet’s impact on society, and laissez-faire attitudes toward hate speech content are now intolerable. The world is becoming saturated in escalating hate speech and related attacks, and even tacit acceptance of these horrors — whether spread on YouTube or by the Trump White House — must be roundly and soundly condemned.

Google is a great company with great people. Now they need to grasp the nettle and do the right thing.

–Lauren–

How Google’s YouTube Spreads Hate

I am one of YouTube’s biggest fans. Seriously. It’s painful for me to imagine a world now without YouTube, without the ability to both purposely find and serendipitously discover all manner of contemporary and historical video gems. I subscribe to YouTube Red because I want to help support great YT creators (it’s an excellent value, by the way).

YouTube is perhaps the quintessential example of a nexus where virtually the entire gamut of Internet policy issues meet and mix — content creation, copyrights, fair use, government censorship, and a vast number more are in play.

The scale and technology of YouTube are nothing short of staggering, and the work required to keep it all running, in terms of both infrastructure and evolving policies, is immense. When I was consulting to Google several years ago, I saw much of this firsthand, as well as having the opportunity to meet many of the excellent people behind the scenes.

Does YouTube have problems? Of course. It would be impossible for an operation of such scope to exist without problems. What we really care about in the long run is how those problems are dealt with.

There is a continual tension between entities claiming copyrights on material and YouTube uploaders. I’ve discussed this in considerable detail in the past, so I won’t get into it again here, other than to note that it’s very easy for relatively minor claimed violations (whether actually accurate or not) to result in ordinary YouTube users having their YouTube accounts forcibly closed, without effective recourse in many cases. And while YouTube has indeed improved their appeal mechanisms in this regard over time, they still have a long way to go in terms of overall fairness.

But a far more serious problem area with YouTube has been in the news repeatedly lately — the extent to which hate speech has permeated the YouTube ecosystem, even though hate speech on YouTube is explicitly banned by Google in the terms of use on this YouTube help page.

Before proceeding, let’s set down some hopefully useful parameters to help explain what I’m talking about here.

There’s one issue that we need to clarify at the outset: The First Amendment to the United States Constitution does not require that YouTube or any other business provide a platform for the dissemination, monetization, or spread of any particular form of speech. The First Amendment applies only to governmental restrictions on speech, which is the true meaning of the term censorship. This is why concepts such as the horrific “Right To Be Forgotten” are utterly unacceptable, as they impose governmentally enforced third-party censorship onto search results.

It’s also often suggested that it’s impossible to really identify hate speech because — some observers argue — everyone’s idea of hate speech is different. Yet from the standpoint of civilized society, we can see that this argument is largely a subterfuge.

For while there are indeed gray areas of speech where even attempting to assign such a label would be foolhardy, there are also areas of discourse where not assigning the hate speech label would require inane and utterly unjustifiable contortions of reality.

Videos from terrorist groups explicitly promoting violence are an obvious example. These are universally viewed as hate speech by all civilized people, and to their credit the major platforms like YouTube, Facebook, et al. have been increasingly leveraging advanced technology to block them, even at the enormous “whack-a-mole” scales at which they’re uploaded.

But now we move on to other varieties of hate speech that have contaminated YouTube and other platforms. And while they’re not usually as explicitly violent as terrorist videos, they’re likely even more destructive to society in the long run, with their pervasive nature now even penetrating to the depths of the White House.

Before the rise of video and social media platforms on the Internet, we all knew that vile racists and antisemites existed, but without effective means to organize they tended to be restricted to their caves in Idaho or their Klan clubhouses in the Deep South. With only mimeograph and copy machines available to perpetuate their postal-distributed raving-infested newsletters, their influence was mercifully limited.

The Internet changed all that, by creating wholly new communications channels that permitted these depraved personalities to coordinate and disseminate in ways that are orders of magnitude more effective, and so vastly increasing the dangers that they represent to decent human beings.

Books could be written about the entire scope of this contamination, but this post is about YouTube’s role, so let’s return to that now.

In recent weeks the global media spotlight has repeatedly shined on Google’s direct financial involvement with established hate speech channels on YouTube.

First came the PewDiePie controversy. As YouTube’s most-subscribed star, his continuing dabbling in antisemitic videos — which he insists are just “jokes” even as his Hitler-worship continues — exposed YouTube’s intertwining with such behavior to an extent that Google found itself in a significant public relations mess. This forced Google to take some limited enforcement actions against his YouTube channel. Yet the channel is still up on YouTube. And still monetizing.

Google is in something of a bind here. Having created this jerk, who now represents a significant income stream both to himself and to the company, Google would find it difficult to publicly admit that his style of hate is still exceedingly dangerous, as it helps to normalize such sickening concepts. This is true even if we accept for the sake of argument that he actually means it in a purely “joking” way (I don’t personally believe that this is actually the case, however). For historical precedent, one need only look at how the antisemitic “jokes” in 1930s Germany became a springboard to global horror.

But let’s face it, Google really doesn’t want to give up that income stream by completely demonetizing PewDiePie or removing his channels completely, nor do they want to trigger his army of obscene and juvenile moronic trolls and a possible backlash against YouTube or Google more broadly.

Yet from an ethical standpoint these are precisely the sorts of actions that Google should be taking, since — as I mentioned above — “ordinary” YouTube users routinely can lose their monetization privileges — or be thrown off of YouTube completely — for even relatively minor accused violations of the YouTube or Google Terms of Service.

There’s worse of course. If we term PewDiePie’s trash as relatively “soft” hate speech, we then must look to the even more serious hate speech that also consumes significant portions of YouTube.

I’m not going to give any of these fiends any “link juice” by naming them here. But it’s trivial to find nearly limitless arrays of horrible hate speech videos on YouTube under the names of both major and minor figures in the historical and contemporary racist/antisemitic/alt-right movements.

A truly disturbing aspect is that once you find your way into this depraved area of YouTube, you discover that many of these videos are fully monetized, meaning that Google is actually helping to fund this evil — and is profiting from it.

Perhaps equally awful, if you hit one of these videos’ watch pages, YouTube’s highly capable suggestion engine will offer you a continuous recommended stream of similar hate videos over on the right-hand side of the page — even helpfully surfacing additional hate speech channels for your enjoyment. I assume that if you watched enough of these, the suggestion panels on the YouTube home page would also feature these videos for you.

Google’s involvement with such YouTube channels became significant news over the last couple of weeks, as major entities in the United Kingdom angrily pulled their advertising after finding it featured on the channels of these depraved hatemongers. Google quickly announced that they’d provide advertisers with more controls to help avoid this in the future, but this implicitly suggests that Google doesn’t plan actions against the channels themselves, and Google’s “we don’t always get it right” excuse is wearing very, very thin given the seriousness of the situation.

Even if we completely inappropriately consider such hate speech to be under the umbrella of acceptable speech, what we see on YouTube today in this context is not merely providing a “simple” platform for hate speech — it’s providing financial resources for hate speech organizations, and directly helping to spread their messages of hate.

I explicitly assume that this has not been Google’s intention per se. Google has tried to take a “hands off” attitude toward “judging” YouTube videos as much as possible. But the massive rise in hate-based speech and attacks around the world, extending (at least tacitly) to the highest levels of the U.S. federal government under the Trump administration, is a clear and decisive signal that this is no longer a viable course for an ethical and great company like Google.

It’s time for Google to extricate YouTube from its role as a partner in hate. That this won’t come without significant pain and costs is a given.

But it’s absolutely the correct path for Google to take — and we expect no less from Google.

–Lauren–

Google and Older Users

Alphabet/Google needs at least one employee dedicated to vetting their products on a continuing basis for usability by older users — an important and rapidly growing demographic of users who are increasingly dependent on Google services in their daily lives.

I’m not talking here about accessibility in general, I’m talking about someone whose job is specifically to make sure that Google’s services don’t leave older users behind due to user interface and/or other associated issues. Otherwise, Google is essentially behaving in a discriminatory manner, and the last thing that I or they should want to see is the government stepping in (via the ADA or other routes) to mandate changes.

–Lauren–

“Google Experiences” Submission Page Now Available

Recently in Please Tell Me Your Google Experiences For “Google 2017” Report, I solicited experiences with Google — positive, negative, neutral, or whatever — for my upcoming “Google 2017” white paper report.

The response level has been very high and has led me to create a shared, public Google Doc to help organize such submissions.

Please visit the Google Experiences Suggestions Page to access that document, through which you may submit suggested text and/or other information. You do not need to be logged into a Google account to do this.

Thanks again very much for your participation in this effort!

–Lauren–

Simple Solutions to “Smart TVs” as CIA Spies

I’m being bombarded with queries about Samsung “Smart TVs” being used as bugs by the CIA, as discussed in the new WikiLeaks data dump.

I’m not in a position to write up anything lengthy about this right now, but there is a simple solution to the entire “smart TV as bug” category of concerns — don’t buy those TVs, and if you have one, don’t connect it to the Internet directly.

Don’t associate it with your Wi-Fi network — don’t plug it into your Ethernet.

Buy a Chromecast or Roku or similar dongle that will provide your Internet programming connectivity via HDMI to that television — these dongles don’t include microphones and are dirt cheap compared to the price of the TV itself.

In general, so-called smart TVs are not a good buy even when they’re not acting as bugs.

Now, seriously paranoid readers might ask “Well, what if the spooks are subverting both my smart TV and my external dongle? Couldn’t they somehow route the audio from the TV microphone back out to the Internet through hacked firmware in the dongles?”

The answer is theoretically yes, but it’s a significantly tougher lift for a number of technical reasons. The solution though even for that scenario is simple — kill the power to the dongle when you’re not using it.

Unplug it from the TV USB jack if you’re powering it that way. (I mean, if you’re paranoid, you might consider the possibility that the hacked TV firmware is still supplying power to the dongle even when it’s supposed to be off, and that the dongle has been hacked to not light its power LED in that situation, eh?)

But if you’re powering the dongle from a wall adapter, and you unplug that, you’ve pretty much ended that ballgame.

–Lauren–