More Regarding a Terrible Decision by the Internet Archive

Yesterday, in A Terrible Decision by the Internet Archive May Lead to Widespread Blocking, I discussed in detail why the Internet Archive’s decision to ignore Robots Exclusion Standard (RES) directives (in robots.txt files on websites) is terrible for the Internet community and users. I had expected a deluge of hate email in response. But I’ve received no negative reactions at all — rather, a range of useful questions and comments — perhaps emphasizing that the importance of the RES is widely recognized.

As I did yesterday, I’ll emphasize again here that the Archive has done a lot of good over many years, that it’s been an extremely valuable resource in more ways than I have time to list right now. Nor am I asserting that the Archive itself has evil motives for its decision. However, I strongly feel that their decision allies them with the dark players of the Net, and gives such scumbags comfort and encouragement.

One polite public message that I received was apparently authored by Internet Archive founder Brewster Kahle (since the message came in via my blog, I have not been able to immediately authenticate it, but the IP address seemed reasonable). He noted that the Archive accepts requests via email to have pages excluded.

This is of course useful, but entirely inadequate.

Most obviously, this technique fails miserably at scale. The whole point of the RES is to provide a publicly inspectable, unified and comprehensively defined method to inform other sites (individually, en masse, or in various combinations) of your site access determinations.
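
For illustration, here is a minimal sketch of the kind of robots.txt file I’m describing; the crawler name is purely a placeholder, and real directives would of course be tailored to each site’s needs:

# Ask one hypothetical archiving crawler to stay out entirely,
# and ask all other crawlers to skip a resource-intensive area.
User-agent: example-archive-bot
Disallow: /

User-agent: *
Disallow: /search-db/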

The “send an email note to this address” technique just can’t fly at Internet scale, even assuming that those emails will ever actually be read at any given site. (Remember when “postmaster@” addresses would reliably reach human beings? Yeah, a long, long time ago.)

There’s also been some fascinating discussion regarding the existing legal status of the RES. While it apparently hasn’t been specifically tested in court here in the USA, judges have nonetheless recognized the importance of the RES in various decisions.

The 2006 “Field vs. Google” decision (Nevada) involved a copyright infringement suit against Google for spidering and caching a website. The court found for Google, noting that the site included a robots.txt file that permitted such access by Google.

The case of Century 21 vs. Zoocasa (2011 — British Columbia) is also illuminating. In this case, the judge found against Zoocasa, noting that they had disregarded robots.txt directives that prohibited their copying content from the Century 21 site.

So it appears that even today, ignoring RES robots.txt files could mean skating on very thin ice from a legal standpoint.

The best course all around would be for the Internet Archive to reverse their decision, and pledge to honor RES directives, as honorable players in the Internet ecosystem are expected to do. It would be a painful shame if the wonderful legacy of the Internet Archive were to be so seriously tarnished going forward by a single (but very serious) bad judgment call.

–Lauren–

A Terrible Decision by the Internet Archive May Lead to Widespread Blocking

UPDATE (23 April 2017):  More Regarding a Terrible Decision by the Internet Archive

– – –

We can stipulate at the outset that the venerable Internet Archive and its associated systems like the Wayback Machine have done a lot of good for many years — for example by providing chronological archives of websites that have chosen to participate in their efforts. But now, it appears that the Internet Archive has joined the dark side of the Internet, by announcing that they will no longer honor the access control requests of any websites.

For any given site, the decision to participate or not with the web scanning systems at the Internet Archive (or associated with any other “spidering” system) is indicated by use of the well established and very broadly affirmed “Robots Exclusion Standard” (RES) — a methodology that uses files named “robots.txt” to inform visiting scanning systems which parts of a given website should or should not be subject to spidering and/or archiving by automated scanners.

RES operates on the honor system. It requests that spidering systems follow its directives, which may be simple or detailed, depending on the situation — with those detailed directives defined comprehensively in the standard itself.
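
As a rough sketch of how a compliant spider consumes these directives, Python’s standard library even includes a Robots Exclusion parser; the URLs and user-agent string below are placeholders, not references to any actual crawler:

from urllib import robotparser

# Fetch and parse the target site's robots.txt (placeholder URL).
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# An honorable crawler checks before requesting each page.
page = "https://www.example.com/search-db/report.html"
if rp.can_fetch("ExampleCrawler/1.0", page):
    print("robots.txt permits fetching this page")
else:
    print("robots.txt asks us not to fetch this page")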

While RES generally has no force of law, it has enormous legal implications. The existence of RES — that is, a recognized means for public sites to indicate access preferences — has been important for many years to help hold off efforts in various quarters to charge search engines and/or other classes of users for access that is free to everyone else. The straightforward argument that sites already have a way — via the RES — to indicate their access preferences has held a lot of rabid lawyers at bay.

And there are lots of completely legitimate reasons for sites to use RES to control spidering access, especially for (but by no means restricted to) sites with limited resources. These include technical issues (such as load considerations relating to resource-intensive databases and a range of other related situations), legal issues such as court orders, and a long list of other technical and policy concerns that most of us rarely think about, but that can be of existential importance to many sites.

Since adherence to the RES has usually been considered voluntary, an argument can be made (and we can pretty safely assume that the Archive’s reasoning falls into this category one way or another) that because “bad” players might choose to ignore the standard, “good” players who abide by it are put at a disadvantage.

But this is a traditional, bogus argument that we hear whenever previously ethical entities feel the urge to start behaving unethically: “Hell, if the bad guys are breaking the law with impunity, why can’t we as well? After all, our motives are much better than theirs!”

Therein are the storied paths of “good intentions” that lead to hell, when the floodgates of such twisted illogic open wide, as a flood of other players decide that they must emulate the Internet Archive’s dismal reasoning to remain competitive.

There’s much more.

While RES is typically viewed as not having legal force today, that could be changed, perhaps with relative ease in many circumstances. There are no obvious First Amendment considerations in play, so it would seem quite feasible to roll “Adherence to properly published RES directives” into existing cybercrime-related site access authorization definitions.

Nor are individual sites entirely helpless against the Internet Archive’s apparent embracing of the dark side in this regard.

Unless the Archive intends to try to go completely into a “ghost” mode, their spidering agents will still be detectable at the http/https protocol levels, and could be blocked (most easily in their entirety) with relatively simple web server configuration directives. If the Archive attempted to cloak their agent names, individual sites could block the Archive by referencing the Archive’s known source IP addresses instead.
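
To make the mechanics concrete, here is a toy sketch of that kind of blocking, written as a Python WSGI filter; in practice a site would more likely use a couple of web server configuration lines, and the agent substring and IP prefix shown are purely illustrative assumptions rather than the Archive’s actual identifiers:

from wsgiref.simple_server import make_server

# Illustrative values only -- real agent strings and address ranges
# would have to be determined from a site's own server logs.
BLOCKED_AGENT_SUBSTRINGS = ("example-archive-bot",)
BLOCKED_IP_PREFIXES = ("203.0.113.",)  # documentation (TEST-NET-3) range

def block_unwanted_spiders(app):
    """Wrap a WSGI app, returning 403 for blocklisted requests."""
    def middleware(environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "").lower()
        addr = environ.get("REMOTE_ADDR", "")
        if (any(s in agent for s in BLOCKED_AGENT_SUBSTRINGS)
                or addr.startswith(BLOCKED_IP_PREFIXES)):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Access denied.\n"]
        return app(environ, start_response)
    return middleware

def site(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Normal site content.\n"]

if __name__ == "__main__":
    make_server("", 8080, block_unwanted_spiders(site)).serve_forever()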

It doesn’t take a lot of imagination to see how all of this could quickly turn into an escalating nightmare of “Whac-A-Mole” and expanding blocks, many of which would likely negatively impact unrelated sites as collateral damage.

Even before the Internet Archive’s decision, this class of access and archiving issues had been smoldering for quite some time. Perhaps the Internet Archive’s pouring of rocket fuel onto those embers may ultimately lead to a legally enforced Robots Exclusion Standard — with both the positive and negative ramifications that would then be involved. There are likely to be other associated legal battles as well.

But in the shorter term at least, the Internet Archive’s decision is likely to leave a lot of innocent sites and innocent users quite badly burned.

–Lauren–

The Google Page That Google Haters Don’t Want You to Know About

UPDATE (May 1, 2019): A Major New Privacy-Positive Move by Google

UPDATE (April 24, 2017):  Quick Tutorial: Deleting Your Data Using Google’s “My Activity”

– – –

There’s a page at Google that dedicated Google Haters don’t like to talk about. In fact, they’d prefer that you didn’t even know that it exists, because it seriously undermines the foundation of their hateful anti-Google fantasies.

A core principle of Google hatred is the set of false memes concerning Google and user data collection. This is frequently encapsulated in a fanciful “You are the product!” slogan, despite the fact that (unlike the dominant ISPs and many other large firms) Google never sells user data to third parties.

But the haters hate the idea that data is collected at all, despite the fact that such data is crucial for Google services to function at the quality levels that we have come to expect from Google.

I was thinking about this again today when I started hearing from users reacting to Google’s announcement of multiple user support for Google Home, who were expressing concerns about collection of more individualized voice data (without which — I would note — you couldn’t differentiate between different users).

We can stipulate that Google collects a lot of data to make all of this stuff work. But here’s the kicker that the haters don’t want you to think about — Google also gives you enormous control over that data, to a staggering degree that most Google users don’t fully realize.

The Golden Ticket gateway to this goodness is at:

google.com/myactivity

There’s a lot to explore there — be sure to click on both the three vertical dots near the top and on the three horizontal bars near the upper left to see the full range of options available.

This page is a portal to an incredible resource. Not only does it give you the opportunity to see in detail the data that Google has associated with you across the universe of Google products, but also the ability to delete that data (selectively or in its totality), and to determine how much of your data will be collected going forward for the various Google services.

On top of that, there are links over to other data related systems that you can control, such as Takeout for downloading your data from Google, comprehensive ad preferences settings (which you can use to adjust or even fully disable ad personalization), and an array of other goodies, all supported by excellent help pages — a lot of thought and work went into this.

I’m a pragmatist by nature. I worry about organizations that don’t give us control over the data they collect about us — like the government, like those giant ISPs and lots of other firms. And typically, these kinds of entities collect this data even though they don’t actually need it to provide the kinds of services that we want. All too often, they just do it because they can.

On the other hand, I have no problems with Google collecting the kinds of data that provide their advanced services, so long as I can choose when that data is collected, and I can inspect and delete it on demand.

The google.com/myactivity portal provides those abilities and a lot more.

This does imply taking some responsibility for managing your own data. Google gives you the tools to do so — you have nobody but yourself to blame if you refuse to avail yourself of those excellent tools.

Or to put it another way, if you want to use and benefit from 21st century technological magic, you really do need to be willing to learn at least a little bit about how to use the shiny wand that the wizard handed over to you.

Abracadabra!

–Lauren–

Prosecute Burger King for Their Illegal Google Home Attacks in Their Ads

Someone — or more likely a bunch of someones — at Burger King and their advertising agency need to be arrested, tried, and spend some time in shackles and prison cells. They’ve likely been violating state and federal cybercrime laws with their obnoxious ad campaign purposely designed to trigger Google Home devices without the permission of those devices’ owners.

Not only has Burger King admitted that this was their purpose, they’ve been gloating about changing their ads to avoid blocks that Google reportedly put in place to try to protect Google Home device owners from being subjected to Burger King’s criminal intrusions.

For example, the federal CFAA (Computer Fraud and Abuse Act) broadly prohibits anyone from accessing a computer without authorization. There’s no doubt that Google Home and its associated Google-based systems are computers, and I know that I didn’t give Burger King permission to access and use my Google Home or my associated Google account. Nor did millions of other users. And it’s obvious that Google didn’t give that permission either. Yet the morons at Burger King and their affiliated advertising asses — in their search for social “buzz” regarding their nauseating fast food products — felt no compunction about literally hijacking the Google Home systems of potentially millions of people, interrupting other activities, and ideally (that is, ideally from their sick standpoint) interfering with people’s home environments on a massive scale.

This isn’t a case of a stray “Hey Google” triggering the devices. This was a targeted, specific attack on users, which Burger King then modified to bypass changes that Google apparently put in place when word of those ads circulated earlier.

Burger King has instantly become the “poster child” for mass, criminal abuse of these devices.  And with their lack of consideration for the sanctity of people’s homes, we might assume that they’re already making jokes about trying to find ways to bill burgers to your credit card without your permission as well. For other dark forces watching these events, this idea could be far more than a joke.

While there are some humorous aspects to this situation — like the anti-Burger King changes made on Wikipedia in response to news of these upcoming ads — the overall situation really isn’t funny at all.

In fact, it was a direct and deliberate violation of law. It was accessing and using computers without permission. Whether or not anyone associated with this illicit stunt actually gets prosecuted is a different matter, but I urge the appropriate authorities to seriously explore this possibility, both for the action itself and for the precedent it creates for future attacks.

And of course, don’t buy anything from those jerks at Burger King. Ever.

–Lauren–

You Can Make the New Google+ Work Better — If You’re Borg!

Recently, in Google+ and the Notifications Meltdown, I noted the abysmal user experience represented by the new Google+ unified desktop notifications panel — especially for users like me with many G+ followers and high numbers of notifications.

Since then, one observer mentioned to me that opening and closing the notifications panel seemed to load more notifications. I had noticed this myself earlier, but the technique appeared to be unreliable with erratic results, and with large numbers of notifications still being “orphaned” on the useless standalone G+ notifications page.

After a bunch more time wasted on digging into this, I now seem to have a methodology that will (for now at least … maybe) reliably permit users to see all G+ notifications on the desktop notifications panel, and to interact with them with far less hassle than the standalone notifications page involves.

There’s just one catch. You pretty much have to be Borg-like in your precision to make this work. You can just call me “One of One” for the remainder of this post.

Keeping in mind that this is a “How-to” guide, not a “What the hell is going on?” guide, let’s begin your assimilation.

The new notifications panel will typically display up to around 10 G+ notification “tiles” when it’s opened by clicking on the red G+ notification circle. If you interact in any way with any specific tile, G+ now usually considers it as “read” and you frequently can’t see it again unless you go to the even more painful standalone notifications page.

Here’s my full recommended procedure. Wander from this path at your own risk.

Open the panel on your desktop by clicking the red circle with the notifications count inside. Click on the bottom-most tile. That notification will open. Interact with it as you might desire — add comments, delete spam, etc.

Now, assuming that there’s more than one notification, click the up-arrow at the top of the panel to proceed upward to the next notification. You can also go back downward with the down-arrow, but do NOT at this time touch the left-arrow at the top of the panel — you do not want to return to those tiles yet.

Continue clicking upward through the notifications using that up-arrow — the notifications will open as you proceed. This can be done quite quickly if you don’t need to add comments of your own or otherwise manage the thread — e.g., you can plow rapidly through +1 notifications.

When you reach the last (that is, the top) notification on the current panel, the up-arrow will no longer be available to click.

NOW you can use the left arrow at the top of the panel to return to the notification tiles view. When you’re back on that view, be sure that you under NO circumstances click the “X” on any of those tiles, and do NOT click on the “hamburger” icon (three horizontal lines) that removes all of the tiles. If you interact with either of those icons, whether at this stage or before working your way up through the notifications, you stand a high probability of creating “orphan” notifications that will collect forever on the standalone notifications page rather than ever being presented by the panel!

So now you’re sitting on the tile view. Click on an empty area of the G+ window OUTSIDE the panel. The panel should close.

Assuming that there are more notifications pending, click again on the red circle. The panel will reopen, and if you’ve been a good Borg you’ll see the panel repopulate with a new batch of notifications.

This exact process can be repeated (again, for the time being at least) until all of your notifications have been dealt with. If you’ve done this all precisely right, you’ll likely end up with zero unread notifications on the standalone notifications page.

That’s all there is to it! A user interface technique that any well-trained Borg can master in no time at all! But at least it’s making my G+ notifications management relatively manageable again.

Yep, resistance IS futile.

–Lauren–

Collecting Examples of YouTube Hate Speech Videos and Channels

I am collecting examples of hate speech videos on YouTube, and of YouTube channels that contain hate speech. Please use the form at:

https://vortex.com/yt-speech

to report examples of specific YouTube hate speech videos and/or the specific YouTube channels that have uploaded those videos. For YouTube channels that are predominantly filled with hate speech videos, the channel URL alone will suffice (rather than individual video URLs) and is of particular interest.

For the purposes of this study, “hate speech” is defined to be materials that a reasonable observer would feel are in violation of Google’s YouTube Community Standards Terms of Use here:

https://support.google.com/youtube/answer/2801939

For now, please only report materials that are in English, and that can be accessed publicly. All inputs on this form may be released publicly after verification as part of this project, with the exception of your (optional) name and email address, which will be kept private and will not be released or used for any purposes beyond this study.

Thank you for participating in this study to better understand the nature and scope of hate speech on YouTube.

–Lauren–

“Google Needs an Ombudsman” Posts from 2009 — Still Relevant Today

Originally posted February 27 and 28, 2009:
Google’s “Failure to Communicate” vs. User Support
and
Google Ombudsman (Part II)

Greetings. There’s been a lot of buzz around the Net recently about Google Gmail outages, and this has brought back to the surface a longstanding concern about the public’s ability (or lack thereof) to communicate effectively with Google itself about problems and issues with Google services.

I’ll note right here that Google usually does provide a high level of customer support for users of their paid services. And I would assert that there’s nothing wrong with Google providing differing support levels to paying customers vs. users of their many free services.

But without a doubt, far and away, the biggest Google-related issue that people bring to me is a perceived inability to effectively communicate with Google when they have problems with free Google services — which people do now depend on in many ways, of course. These problems can range from minor to quite serious, sometimes with significant ongoing impacts, and the usual complaint is that they get no response from submissions to reporting forms or e-mailed concerns.

On numerous occasions, when people bring particular Google problems to my attention, I have passed along (when I deemed it appropriate) some of these specific problems to my own contacts at Google, and they’ve always been dealt with promptly from that point forward. But this procedure can’t help everyone with such Google-related issues, of course.

I have long advocated (both privately to Google and publicly) that Google establish some sort of public Ombudsman (likely a relatively small team) devoted specifically to help interface with the public regarding user problems — a role that requires a skillful combination of technical ability, public relations, and “triage” skills. Most large firms that interact continually with the public have teams of this sort in one form or another, often under the label “Ombudsman” (or sometimes “Office of the President”).

The unofficial response I’ve gotten from Google regarding this concept has been an expression of understanding but a definite concern about how such an effort would scale given Google’s user base.

I would never claim that doing this properly is a trivial task — far from it. But given both the horizontal and vertical scope of Google services, and the extent to which vast numbers of persons now depend on these services in their everyday personal and business lives, I would again urge Google to consider moving forward along these lines.

–Lauren–

 – – –

Greetings. In Google’s “Failure to Communicate” vs. User Support, I renewed my long-standing call for an Ombudsman “team” or equivalent communications mechanism for Google.

Subsequent reactions suggest that some readers may not be fully familiar with the Ombudsman concept, at least in the way that I use the term.

An Ombudsman is not the same thing as “customer support” per se. I am not advocating a vast new Google customer service apparatus for users of their free services. Ombudsmen (Ombudswomen? Let’s skip the politically correct linguistics for now …) aren’t who you go to when search results are slow or you can’t log in to Gmail for two hours. These sorts of generally purely technical issues are the vast majority of the time suitable for handling within the normal context of existing online reporting forms and the like. (I inadvertently may have caused some confusion on this point by introducing my previous piece with a mention of Gmail problems — but that was only meant in the sense that those problems triggered broader discussions, not a specific example of an issue appropriate for escalation to an Ombudsman.)

But there’s a whole different class of largely non-technical (or more accurately, mixed-modality) issues where Google users appear to routinely feel frustrated and impotent to deal with what they feel are very disturbing situations.

Many of these relate to perceived defamations, demeaning falsehoods, systemic attacks, and other similar concerns that some persons feel are present in various Google service data (search results, Google Groups postings, Google-hosted blog postings, YouTube, and so on).

By the time some of these people write to me, they’re apparently in tears over the situations, wondering if they should spend their paltry savings on lawyers, and generally very distraught. Their biggest immediate complaints? They don’t know who to contact at Google, or their attempts at contact via online forms and e-mail have yielded nothing but automatic replies (if that).

And herein resides the crux of the matter. I am a very public advocate of open information, and a strong opponent of censorship. I won’t litter this posting with all of the relevant links. I have however expressed concerns about the tendency of false information to reside forever in search results without mechanisms for counterbalancing arguments to be seen. In 2007 I discussed this in Search Engine Dispute Notifications: Request For Comments and subsequent postings. This is an exceedingly complex topic, with no simple solutions.

In general, my experience has been that many or most of the concerns that people bring forth in these regards are, all aspects of the situation considered fairly, not necessarily suitable for the kinds of relief that the persons involved are seeking. That is, the level of harm claimed often seems insufficient when weighed against free speech and the associated rights of other parties.

However, there are dramatic, and not terribly infrequent exceptions that appear significantly egregious and serious. And when these folks can’t get a substantive reply from Google (and can’t afford a lawyer to go after the parties who actually have posted or otherwise control the information that Google is indexing or hosting) these aggrieved persons tend to be up you-know-what creek.

If you have a DMCA concern, Google will normally react to it promptly. But when the DMCA is not involved, trying to get a real response from Google about the sorts of potentially serious concerns discussed above — unless you have contacts that most people don’t have — can often seem impossible.

Google generally takes the position — a position that I basically support — that since they don’t create most content, the responsibility for the content is with the actual creator, the hosting Web sites, and so on. But Google makes their living by providing global access to those materials, and cannot be reasonably viewed as being wholly separated from associated impacts and concerns.

At the very least, even if requests for deletions, alterations, or other relief are unconvincing or rejected for any number of quite valid reasons, the persons who bring forth these concerns should not be effectively ignored. They deserve to at least get a substantive response, some sort of hearing, more than a form-letter automated reply about why their particular plea is being rejected. This principle remains true irrespective of the ultimate merits or disposition of the particular case.

And this is where the role of a Google Ombudsman could be so important — not only in terms of appropriately responding to these sorts of cases, but also to help head off the possibility of blowback via draconian regulatory or legislative actions that might cut deeply into Google’s (and their competitors’) business models — a nightmare scenario that I for one don’t want to see occur.

But I do fear that unless Google moves assertively toward providing better communications channels with their users for significant issues — beyond form responses and postings in the official Google blogs — there are forces that would just love to see Google seriously damaged who will find ways to leverage these sorts of issues toward that end — evidence of this sort of positioning by some well-heeled Google haters is already visible.

Ombudsmen are all about communication. For any large firm that is constantly dealing with the public, especially one operating on the scope of Google, it’s almost impossible to have too much communication when it comes to important problems and related issues. On the other hand, too little communication, or the sense that concerned persons are being ignored, can be a penny-wise but pound-foolish course, with negative consequences that could have been — even if not easily avoided — at least avoided with a degree of serious effort.

–Lauren–

The YouTube Racists Fight Back!

Somewhat earlier today I received one of those “Hey Lauren, you gotta look at this on YouTube!” emails. Prior to my recently writing What Google Needs to Do About Hate Speech, such a message was as likely to point at a particularly cute cat video or a lost episode of some 60s television series as anything else. Since that posting, however, these alerts are far more likely to direct me toward much more controversial materials.

Such was the case today. Because the YouTube racists, antisemites, and their various assorted lowlife minions are at war. They’re at war with YouTube, they’re at war with the Wall Street Journal. They’re ranting and raving and chalking up view counts on their YouTube live streams and uploads today that ordinary YouTube users would be thankful to accumulate over a number of years.

After spending some time this afternoon lifting up rotting logs to peer at the maggots infesting the seamy side of YouTube where these folks reside, here’s what’s apparently going on, as best as I can understand it right now.

The sordid gang of misfits and losers who create and support the worst of YouTube content — everybody from vile PewDiePie supporters to hardcore Nazis — is angry. They’re angry that anyone would dare to threaten the YouTube monetization streams that help support their continuing rivers of hate speech. Any moves by Google or outside entities that appear to disrupt their income stream, they characterize as efforts to “destroy the YouTube platform.”

Today’s ongoing tirade appears to have been triggered by claims that the Wall Street Journal “faked” the juxtaposition of specific major brand ads with racist videos, as part of the ongoing controversies regarding YouTube advertiser controls. It seems that the creators of these videos are claiming that the videos in question were not being monetized during the period under discussion, or otherwise couldn’t have appeared in the manner claimed by the WSJ.

This gets into a maze of twisty little passages very quickly, because when you start digging down into these ranting videos today, you quickly see how they are intertwined with gamer subcultures, right-wing “fake news” claims, pro-Trump propagandists, and other dark cults — as if the outright racism and antisemitism weren’t enough.

And this is where the true irony breaks through like a flashing neon sign. These sickos aren’t at all apologetic for their hate speech videos on YouTube, they’re simply upset when Google isn’t helping to fund them.

I’ve been very clear about this. I strongly feel that these videos should not be on YouTube at all, whether monetized or not.

For example, one of the videos being discussed today in this context involves the song “Alabama Nig—.” If you fill in the dashes and search for the result on YouTube, you’ll get many thousands of hits, all of them racist, none of which should be on YouTube in the first place.

Which all suggests that the arguments about major company ads on YouTube hate speech videos, and more broadly the issues of YouTube hate speech monetization, are indeed really just digging around the edges of the problem.

Hate speech has no place on YouTube. Period. Google’s Terms of Service for YouTube explicitly forbid racial, religious, and other forms of this garbage.

The sooner that Google seriously enforces their own YouTube terms, the sooner that we can start cleaning out this hateful rot. We’ve permitted this disease to grow for years on the Internet thanks to our “anything goes” attitude, contributing to a horrific rise in hate throughout our country, reaching all the way to the current occupant of the Oval Office and his cronies.

This must be the beginning of the end for hate speech on YouTube.

–Lauren–

My Brief Radio Discussion of the GOP’s Horrendous Internet Privacy Invasion Law

An important issue that I’ve frequently discussed here and in other venues is the manner in which Internet and other media “filter bubbles” tend to cause us to only expose ourselves to information that we already agree with — whether it’s accurate or not.

That’s one reason why I value my continuing frequent invitations to discuss technology and tech policy topics on the extremely popular late night “Coast to Coast AM” national radio show. Talk radio audiences tend to be very conservative, and the willingness of the show to repeatedly share their air with someone like me (who doesn’t fit the typical talk show mold and who can offer a contrasting point of view) is both notable and praiseworthy.

George Noory is in my opinion the best host on radio — he makes every interview a pleasure for his guests. And while the show has been known primarily over the years for discussions of — shall we say — “speculative” topics, it also has become an important venue for serious scientists and technologists to discuss issues of importance and interest (see: Coast to Coast AM Is No Wack Job).

Near the top of the show last night I chatted with George for a few minutes about the horribly privacy-invasive new GOP legislation that permits ISPs to sell customers’ private information (including web browsing history and much more) without prior consent. This morning I’ve been receiving requests for copies of that interview, so (with the permission of the show for posting short excerpts) it’s provided below.

Here is an audio clip of the interview for download. It’s under four minutes long.

As I told George, I’m angry about this incredibly privacy-invasive legislation. If you are too, I urge you to inform the GOP politicos who pushed this nightmare law — to borrow a phrase from the 1976 film “Network” — that you’re mad as hell and you’re not going to take this anymore!

–Lauren–

Google+ and the Notifications Meltdown

I’ve been getting emails recently from correspondents complaining that I have not responded to their comments/postings on Google+. I’ve just figured out why.

The new (Google unified) Google+ desktop notification panel is losing G+ notifications left and right. For a while I thought that all of the extra notifications I was seeing when I checked on mobile occasionally were dupes — but it turns out that most of them are notifications that were never presented to me on desktop, in vast numbers.

Right now I can find (on the essentially unusable G+ desktop standalone notifications page, which requires manually clicking to a new page for each post!) about 30 recent G+ notifications that were never presented to me in the desktop notification panel. I’m not even sure how to deal with them now in a practical manner.

This is unacceptable — you have one job to do, notifications panel, and that’s to accurately show me my damned notifications!

Also, a high percentage of the time when I click on actions in the new desktop notification panel pop-up boxes (e.g. to reply), the panel blows away and I’m thrown to a new G+ page tab.

Does anyone at G bother to stress test this stuff any more in the context of users with many followers (I have nearly 400K) who get lots of notifications? Apparently not.

Another new Google user interface triumph of form over function!

–Lauren–

How YouTube’s User Interface Helps Perpetuate Hate Speech

UPDATE (6 May 2017): The “Report” Option Returns (at Least on YouTube Red)

UPDATE (18 June 2017): Looks like the top level “Report” option has vanished again.

– – –

Computer User Interface (UI) design is both an art and a science, and can have effects on users that go far beyond the interfaces themselves. As I’ve discussed previously, e.g. in The New Google Voice Is Another Slap in the Face of Google’s Users — and Their Eyes, user interfaces can unintentionally act as a form of discrimination against older users or other users with special needs.

But another user interface question arises in conjunction with the current debate about hate speech on Google’s YouTube (for background, please see What Google Needs to Do About YouTube Hate Speech and How Google’s YouTube Spreads Hate).

Specifically, can user interface design unintentionally help to spread and perpetuate hate speech? The answer may be an extremely disconcerting affirmative.

A key reason why I suspect that this is indeed the case, is the large numbers of YouTube users who have told me that they didn’t even realize that they had the ability to report hate speech to YouTube/Google. And when I’ve suggested that they do so, they often reply that they don’t see any obvious way to make such a report.

Over the years it’s become more and more popular to “hide” various UI elements in menus and/or behind increasingly obscure symbols and icons. And one key problem with this approach is obvious when you think about it: If a user doesn’t even know that an option exists, can we really expect them to play “UI scavenger hunt” in an attempt to find such an option? Even more to the point, what if it’s an option that you really need to see in order to even realize that the possibility exists — for example, of reporting a YouTube hate speech video or channel?

While YouTube suffers from this problem today, that wasn’t always the case. Here’s an old YouTube “watch page” desktop UI from years ago:

[Image: An Old YouTube User Interface]

Not only is there a flag icon present on the main interface (rather than having the option buried in a “More” menu and/or under generic vertical dots or horizontal lines), but the word “Flag” is even present on the main interface to serve as a direct signal to users that flagging videos is indeed an available option!

On the current YouTube desktop UI, you have to know to go digging under a “More” menu to find a similar “Report” option. And if you didn’t know that a Report option even existed, why would you necessarily go searching around for it in the first place? The only other YouTube page location where a user might consider reporting a hate speech video is through the small generic “Feedback” link at the very bottom of the watch page — and that can be way, way down there if the video has a lot of comments.

To be effective against hate speech, a flagging/reporting option needs to be present in an obvious location on the main UI, where users will see it and know that it exists. If it’s buried or hidden in any manner, vast numbers of users won’t even realize that they have the power to report hate speech videos to Google at all (I’ve discussed the disappointing degree to which Google actually enforces the hate speech prohibitions in their Terms of Service in the posts linked earlier in this text).

You don’t need to be a UI expert to suspect one reason why Google over time has de-emphasized obvious flag/report links on the main interface, instead relegating them to a generic “More” menu. The easier the option is to see, the more people will tend to use it, both appropriately and inappropriately — and really dealing with those abuse reports in a serious manner can be expensive in terms of code and employees.

But that’s no longer an acceptable excuse — if it ever was. Google is losing major advertisers in droves, who are no longer willing to have their ads appear next to hate speech videos that shouldn’t even be monetized, and in many cases shouldn’t even be available on YouTube at all under the existing YouTube/Google Terms of Service.

For the sake of its users and of the company itself, Google must get a handle on this situation as quickly as possible. Making sure that users are actually encouraged to report hate speech and other inappropriate videos, and that Google treats those reports appropriately and with a no-nonsense application of their own Terms of Service, are absolutely paramount.

–Lauren–

What Google Needs to Do About YouTube Hate Speech

In the four days since I wrote How Google’s YouTube Spreads Hate, where I discussed both how much I enjoyed and respected YouTube, and how unacceptable their handling of hate speech has become, a boycott by advertisers of YouTube and Google ad networks has been spreading rapidly, with some of the biggest advertisers on the planet pulling their ads over concerns about being associated with videos containing hate speech, extremist, or related content.

It’s turned into a big news story around the globe, and has certainly gotten Google’s attention.

Google has announced some changes and apparently more are in the pipeline, so far relating mostly to making it easier for advertisers to avoid having their ads appear with those sorts of content.

But let’s be very clear about this. Most of that content, much of which is on long-established YouTube channels sometimes with vast numbers of views, shouldn’t be permitted to monetize at all. And in many cases, shouldn’t be permitted on YouTube at all (by the way, it’s a common ploy for YT uploaders to ask for support via third-party sites as a mechanism to evade YT monetization disablement).

The YouTube page regarding hate speech is utterly explicit:

We encourage free speech and try to defend your right to express unpopular points of view, but we don’t permit hate speech.

Hate speech refers to content that promotes violence or hatred against individuals or groups based on certain attributes, such as:

race or ethnic origin
religion
disability
gender
age
veteran status
sexual orientation/gender identity

There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but not okay to post malicious hateful comments about a group of people solely based on their ethnicity.

Seems pretty clear. But in fact, YouTube is awash with racist, antisemitic, and a vast array of other videos that without question violate these terms, many on established, cross-linked YouTube channels containing nothing but such materials.

How easy is it to stumble into such garbage?

Well, for me here in the USA, the top organic (non-ad) YouTube search result for “blacks” is a video showing a car being wrecked with the title: “How Savage Are Blacks In America & Why Is Everyone Afraid To Discuss It?” — including the description “ban niggaz not guns” — and also featuring a plea to donate to a racist external site.

This video has been on YouTube for over a year and has accumulated over 1.5 million views. Hardly hiding.

While it can certainly be legitimately argued that there are many gray areas when it comes to speech, on YouTube there are seemingly endless lists of videos that are trivially located and clearly racist, antisemitic, or in violation of YouTube hate speech terms in other ways.

And YouTube helps you find even more of them! On the right-hand suggestion panel right now for the video I mentioned above, there’s a whole list of additional racist videos, including titles like: “Why Are So Many Of U Broke, Black, B!tches Begging If They Are So Strong & Independent?” — and much worse.

Google’s proper course is clear. They must strongly enforce their own Terms of Service. It’s not enough to provide control over ads, or even ending those ads entirely. Videos and channels that are in obvious violation of the YT TOS must be removed.

We have crossed the Rubicon in terms of the Internet’s impact on society, and laissez-faire attitudes toward hate speech content are now intolerable. The world is becoming saturated in escalating hate speech and related attacks, and even tacit acceptance of these horrors — whether spread on YouTube or by the Trump White House — must be roundly and soundly condemned.

Google is a great company with great people. Now they need to grasp the nettle and do the right thing.

–Lauren–

How Google’s YouTube Spreads Hate

I am one of YouTube’s biggest fans. Seriously. It’s painful for me to imagine a world now without YouTube, without the ability to both purposely find and serendipitously discover all manner of contemporary and historical video gems. I subscribe to YouTube Red because I want to help support great YT creators (it’s an excellent value, by the way).

YouTube is perhaps the quintessential example of a nexus where virtually the entire gamut of Internet policy issues meet and mix — content creation, copyrights, fair use, government censorship, and a vast number more are in play.

The scale and technology of YouTube are nothing short of staggering, and the work required to keep it all running — in terms of both infrastructure and evolving policies — is immense. When I was consulting to Google several years ago, I saw much of this firsthand, as well as having the opportunity to meet many of the excellent people behind the scenes.

Does YouTube have problems? Of course. It would be impossible for an operation of such scope to exist without problems. What we really care about in the long run is how those problems are dealt with.

There is a continual tension between entities claiming copyrights on material and YouTube uploaders. I’ve discussed this in considerable detail in the past, so I won’t get into it again here, other than to note that it’s very easy for relatively minor claimed violations (whether actually accurate or not) to result in ordinary YouTube users having their YouTube accounts forcibly closed, without effective recourse in many cases. And while YouTube has indeed improved their appeal mechanisms in this regard over time, they still have a long way to go in terms of overall fairness.

But a far more serious problem area with YouTube has been in the news repeatedly lately — the extent to which hate speech has permeated the YouTube ecosystem, even though hate speech on YouTube is explicitly banned by Google in the terms of use on this YouTube help page.

Before proceeding, let’s set down some hopefully useful parameters to help explain what I’m talking about here.

One issue that we need to clarify at the outset: The First Amendment to the United States Constitution does not require that YouTube or any other business provide a platform for the dissemination, monetization, or spread of any particular form of speech. The First Amendment applies only to governmental restrictions on speech, which is what the term censorship actually means. This is why concepts such as the horrific “Right To Be Forgotten” are utterly unacceptable, as they impose governmentally enforced third-party censorship onto search results.

It’s also often suggested that it’s impossible to really identify hate speech because — some observers argue — everyone’s idea of hate speech is different. Yet from the standpoint of civilized society, we can see that this argument is largely a subterfuge.

For while there are indeed gray areas of speech where even attempting to assign such a label would be foolhardy, there are also areas of discourse where not assigning the hate speech label would require inane and utterly unjustifiable contortions of reality.

Videos from terrorist groups explicitly promoting violence are an obvious example. These are universally viewed as hate speech by all civilized people, and to their credit the major platforms like YouTube, Facebook, et al. have been increasingly leveraging advanced technology to block them, even at the enormous “whack-a-mole” scales at which they’re uploaded.

But now we move on to other varieties of hate speech that have contaminated YouTube and other platforms. And while they’re not usually as explicitly violent as terrorist videos, they’re likely even more destructive to society in the long run, with their pervasive nature now even penetrating to the depths of the White House.

Before the rise of video and social media platforms on the Internet, we all knew that vile racists and antisemites existed, but without effective means to organize they tended to be restricted to their caves in Idaho or their Klan clubhouses in the Deep South. With only mimeograph and copy machines available to perpetuate their postal-distributed raving-infested newsletters, their influence was mercifully limited.

The Internet changed all that, by creating wholly new communications channels that permitted these depraved personalities to coordinate and disseminate in ways that are orders of magnitude more effective, and so vastly increasing the dangers that they represent to decent human beings.

Books could be written about the entire scope of this contamination, but this post is about YouTube’s role, so let’s return to that now.

In recent weeks the global media spotlight has repeatedly shined on Google’s direct financial involvement with established hate speech channels on YouTube.

First came the PewDiePie controversy. As YouTube’s most-subscribed star, his continuing dabbling in antisemitic videos — which he insists are just “jokes” even as his Hitler-worship continues — exposed YouTube’s intertwining with such behavior to an extent that Google found itself in a significant public relations mess. This forced Google to take some limited enforcement actions against his YouTube channel. Yet the channel is still up on YouTube. And still monetizing.

Google is in something of a bind here. Having created this jerk, who now represents a significant income stream to himself and the company, it would be difficult to publicly admit that his style of hate is still exceedingly dangerous, as it helps to normalize such sickening concepts. This is true even if we accept for the sake of the argument that he actually means it in a purely “joking” way (I don’t personally believe that this is actually the case, however). For historical precedent, one need only look at how the antisemitic “jokes” in 1930s Germany became a springboard to global horror.

But let’s face it, Google really doesn’t want to give up that income stream by completely demonetizing PewDiePie or removing his channels completely, nor do they want to trigger his army of obscene and juvenile moronic trolls and a possible backlash against YouTube or Google more broadly.

Yet from an ethical standpoint these are precisely the sorts of actions that Google should be taking, since — as I mentioned above — “ordinary” YouTube users routinely can lose their monetization privileges — or be thrown off of YouTube completely — for even relatively minor accused violations of the YouTube or Google Terms of Service.

There’s worse of course. If we term PewDiePie’s trash as relatively “soft” hate speech, we then must look to the even more serious hate speech that also consumes significant portions of YouTube.

I’m not going to give any of these fiends any “link juice” by naming them here. But it’s trivial to find nearly limitless arrays of horrible hate speech videos on YouTube under the names of both major and minor figures in the historical and contemporary racist/antisemitic/alt-right movements.

A truly disturbing aspect is that once you find your way into this depraved area of YouTube, you discover that many of these videos are fully monetized, meaning that Google is actually helping to fund this evil — and is profiting from it.

Perhaps equally awful, if you hit one of these videos’ watch pages, YouTube’s highly capable suggestion engine will offer you a continuous recommended stream of similar hate videos over on the right-hand side of the page — even helpfully surfacing additional hate speech channels for your enjoyment. I assume that if you watched enough of these, the suggestion panels on the YouTube home page would also feature these videos for you.

Google’s involvement with such YouTube channels became significant news over the last couple of weeks, as major entities in the United Kingdom angrily pulled their advertising after finding it featured on the channels of these depraved hatemongers. Google quickly announced that they’d provide advertisers with more controls to help avoid this in the future, but this implicitly suggests that Google doesn’t plan actions against the channels themselves, and Google’s “we don’t always get it right” excuse is wearing very, very thin given the seriousness of the situation.

Even if we completely inappropriately consider such hate speech to be under the umbrella of acceptable speech, what we see on YouTube today in this context is not merely providing a “simple” platform for hate speech — it’s providing financial resources for hate speech organizations, and directly helping to spread their messages of hate.

I explicitly assume that this has not been Google’s intention per se. Google has tried to take a “hands off” attitude toward “judging” YouTube videos as much as possible. But the massive rise in hate-based speech and attacks around the world, reaching (at least tacitly) to the highest levels of the U.S. federal government under the Trump administration, is a clear and decisive signal that this is no longer a viable course for an ethical and great company like Google.

It’s time for Google to extricate YouTube from its role as a partner in hate. That this won’t come without significant pain and costs is a given.

But it’s absolutely the correct path for Google to take — and we expect no less from Google.

–Lauren–

Google and Older Users

Alphabet/Google needs at least one employee dedicated to vetting their products on a continuing basis for usability by older users — an important and rapidly growing demographic of users who are increasingly dependent on Google services in their daily lives.

I’m not talking here about accessibility in general, I’m talking about someone whose job is specifically to make sure that Google’s services don’t leave older users behind due to user interface and/or other associated issues. Otherwise, Google is essentially behaving in a discriminatory manner, and the last thing that I or they should want to see is the government stepping in (via the ADA or other routes) to mandate changes.

–Lauren–

“Google Experiences” Submission Page Now Available

Recently in Please Tell Me Your Google Experiences For “Google 2017” Report, I solicited experiences with Google — positive, negative, neutral, or whatever — for my upcoming “Google 2017” white paper report.

The response level has been very high and has led me to create a shared, public Google Doc to help organize such submissions.

Please visit the Google Experiences Suggestions Page to access that document, through which you may submit suggested text and/or other information. You do not need to be logged into a Google account to do this.

Thanks again very much for your participation in this effort!

–Lauren–

Simple Solutions to “Smart TVs” as CIA Spies

I’m being bombarded with queries about Samsung “Smart TVs” being used as bugs by the CIA, as discussed in the new WikiLeaks data dump.

I’m not in a position to write up anything lengthy about this right now, but there is a simple solution to the entire “smart TV as bug” category of concerns — don’t buy those TVs, and if you have one, don’t connect it to the Internet directly.

Don’t associate it with your Wi-Fi network — don’t plug it into your Ethernet.

Buy a Chromecast or Roku or similar dongle that will provide your Internet programming connectivity via HDMI to that television — these dongles don’t include microphones and are dirt cheap compared to the price of the TV itself.

In general, so-called smart TVs are not a good buy even when they’re not acting as bugs.

Now, seriously paranoid readers might ask “Well, what if the spooks are subverting both my smart TV and my external dongle? Couldn’t they somehow route the audio from the TV microphone back out to the Internet through hacked firmware in the dongles?”

The answer is theoretically yes, but it’s a significantly tougher lift for a number of technical reasons. The solution though even for that scenario is simple — kill the power to the dongle when you’re not using it.

Unplug it from the TV USB jack if you’re powering it that way (I mean, if you’re paranoid, you might consider the possibility that the hacked TV firmware is still supplying power to the dongle even when it’s supposed to be off, and that the dongle has been hacked to not light its power LED in that situation, eh?)

But if you’re powering the dongle from a wall adapter, and you unplug that, you’ve pretty much ended that ballgame.

–Lauren–

Google’s New “YouTube TV” Is a Gift to Donald Trump

As if it wasn’t bad enough that so many high-ranking Google search results were hijacked by criminals monetizing false news stories toward getting Donald Trump elected, it appears that (for the moment at least), Google’s new “YouTube TV” offering is a gift package for serial lying sociopath Donald Trump and his vile supporters.

YouTube TV is Google’s newly announced attempt to push cable “cord cutting” — that is, encouraging people to drop their conventional cable or satellite TV subscriptions and switch to viewing Internet-delivered streams.

The YouTube TV offering seems fairly conventional at first glance, and Google has tossed in useful stuff like multiple users and free time-shifting/DVR capabilities.

But a glaring omission from their channel lineup makes YouTube TV a massive prize package for Donald Trump and his fascist agenda — FOX “News” is included, but CNN is nowhere to be found. Go ahead, try to find it. I sure can’t.

It appears that Google is hoping that viewers will accept MSNBC as a substitute for CNN — but that’s ridiculous in the extreme. Not including CNN is giving FOX “News” an enormous boost, and those right-wing News Corp. bastards have already done enough damage to this country without Google giving FOX and Trump this additional big wet kiss squarely on their rotting lips.

No doubt Google will say that they couldn’t reach a licensing agreement with CNN/Time Warner, and that golly gee, they hope to add it to the lineup soon.

To hell with that. How long will it be before FOX and Trump start claiming that Google chose FOX “News” because Google doesn’t trust CNN? Launching this service with FOX “News” but without CNN is the height of irresponsibility, especially in today’s political environment.

Shame on you Google. Shame on you.

–Lauren–

Meet the Guys: The Jerks of Computer Science

UPDATE (August 9, 2017):  Here’s My Own Damned “Google Manifesto”

UPDATE (August 7, 2017):  Audio from My Radio Discussion About the Leaked Google “Diversity” Manifesto Controversy

– – –

Originally posted July 16, 2013.

A perennial question in Computer Science has nothing directly to do with code or algorithms, and everything to do with people. To wit: Why don’t more women choose CS as a career path?

As a guy who has spent his entire professional career in CS and related policy arenas, this skewing has been obvious to me pretty much since day one.

It’s not restricted to educational institutions and the workplace; it’s also on display at trade shows, technical conferences, and even on social networking sites of all stripes.

And despite the efforts of major firms to draw more women into this field (efforts that so far have had only limited success), the overall problem still persists.

All sorts of theories have been postulated for why women tend to avoid CS and the related computer technology fields, ranging from “different nurturing patterns” to inept school guidance counselors.

But I suspect there’s an even more basic reason, that women tend to detect quickly and decisively.

The men of computer science and the computer industry are misogynous jerks.

Not all of them of course. Likely not even the majority.

But enough to thoroughly poison the well.

This goes far beyond guys crudely hitting on women at conferences, or the continuing presence of humiliating “booth babes” at trade shows.

The depth to which this pervades has been especially on painful display on the Web over the last couple of days, relating to a very important operating system technical discussion list.

Since I don’t want this to be about individuals, we’ll call the person at the focus of this list by the label “Q” — after the supercilious, intelligent, arrogant, omnipotent character from the “Star Trek” universe. Not evil per se — in fact capable of great constructive work — but most folks who come in contact with him are unwilling to risk the wrath of such a powerful entity. Indeed, an interesting character this Q.

Back here in what we assume is the real world, the current controversy was triggered when a female member of that technical discussion list publicly criticized “Q” and what we’ll politely call his “boorish” statements on the list — causing at least one observer to note that it was the first time they’d seen anyone stand up to Q that way in 20 years. This woman — by the way — is the formal representative to the list in question from an extremely important and major firm whose technology is at the heart of most personal computers in use today.

The particular examples she cited were by no means the most illustrative available — aficionados of the list in question realize she was showing admirable diplomatic tact.

But while reactions to her statements in the associated list thread itself can certainly be described as interesting, many of the reactions that have appeared externally in social media can only be described as vomit inducing.

I can’t even repeat many of them here, but just a sampling I’ve seen and/or directly received:

– “Nobody told her she had to work with Linux, get off the list!”
– “What is she, a slave? She doesn’t have to be there!”
– “Q is a god! He’s done so much good he can say or do anything else he wants, he can walk across your burned corpses!”
– “People should be able to say anything they want any way they want. If you can’t take it, go somewhere else.”
– “Bring her over to my house and I’ll show her what bad behavior is really about!”
– “Somebody is always going to be offended by everything, so there’s no point to even trying to be polite.”
– “She’s just having PMS and snapped!”
– “Hey, it’s not so bad on the list, it’s just good ol’ boys playing South Park! We don’t want political correctness here. Tell her to go – – – – herself, or ask me over and I’ll do it for her!”

And a wide variety of other specifically crude, sexist, and toilet humor remarks of all sorts, plus much worse.

It was getting so bad that I had to shut down comments on two discussion threads last night before going to bed to avoid their turning into rancid cesspools in my absence — and I wasn’t the only one who had to take that action.

You might argue that all this isn’t unique to computer science and the broader computer industry, and you’d be correct. This kind of “boys will be boys” sexism pervades our culture, and in fact has driven many women to refuse to even identify as female in social media or discussion lists at all.

But the “it’s not really important, and everybody’s doing it anyway!” excuse is utterly bogus.

While we may not be able to change these attitudes in the culture at large, we can at least take steps to clean up our own house, and try to bring a basic level of civility to our own work in this regard.

But first we need to admit that the status quo is indeed unacceptable, and many in our community’s “good ol’ boys club” are currently refusing even to go that far.

The technical and policy issues we’re dealing with are far too crucial to permit them to be distorted by juvenile, sexist, and loutish behavior that discourages maximum practicable inclusion and participation.

And rather than acting as tacit examples of bullying that help feed even worse abuses, leaders in our technical community should be taking the responsibility to be examples in public — if not of exemplary behavior — at least of basic politeness.

If people want to be jerks in their private lives, that’s up to them. But keep your bad behavior and sexist crap out of our work.

And that goes for you, me, Q, and everyone else as well.

–Lauren–

Please Tell Me Your Google Experiences For “Google 2017” Report

Executive Summary: Please tell me your Google Experiences for my upcoming “Google 2017” report, via email to:

google@vortex.com

I believe that it’s obvious to pretty much everyone that we’ve now entered a new era of major Internet-related companies directly and indirectly impacting political processes and other aspects of our lives in ways that — frankly, to say the least — were not widely anticipated by most observers. So understanding where things stand these days with these firms is paramount, in terms of their own operations, and their impacts on their users and the world in general.

For many years, the most common category of questions and comments that I receive has related one way or another to Google (while I have consulted to Google in the past, I am not currently doing so). So I’ve now begun work on what I’m tentatively calling “Google 2017” — a report (or “white paper” if you prefer) discussing the perceived overall state of Google (and its parent corporation Alphabet, Inc.) in relation to the sorts of issues that I noted above and other relevant related topics.

As part of this effort, I’d very much appreciate your emailing me your own noteworthy experiences with Google (and Alphabet). Good — bad — exemplary — abysmal — confused — resolved — pending — fantastic — or otherwise rising to the level that you feel could usefully contribute to a better understanding of Google and Alphabet overall.

These might involve specific Google services (everything from Search to Gmail to YouTube and beyond), accounts, privacy, security, interactions, or legal and copyright issues — essentially anything positive, negative, or neutral that you are free to impart to me and that you believe might be of interest.

I would like to keep this report focused on relatively recent experiences and observations, so events that took place years ago and are no longer particularly relevant are frankly of less use to me right now.

Your identity will be considered confidential, and any information that you send to me will also be considered confidential in its details — unless you specifically indicate otherwise. That is, I will use your information toward the report’s aggregate analysis, and any specific examples or other data that you provide — and that I might include in the report as illustrations — will be carefully anonymized unless you give me permission to do otherwise. If you don’t want me to use your examples at all, even anonymized, please let me know and that will of course be respected.

Please send anything meeting the criteria above that you feel comfortable sharing with me to:

google@vortex.com

I’ll keep you informed of my progress. Thanks very much!

Be seeing you.

–Lauren–

Don’t (For Now) Use Google’s New “Perspective” Comment Filtering Tool

I must be brief today, so I’ll keep this short and get into the details in another post. Google has announced (with considerable fanfare) public access to the API for their new “Perspective” comment filtering system, which uses Google’s machine learning/AI technology to determine which comments on a site shouldn’t be displayed due to high perceived spam/toxicity scores. It’s a fascinating effort. But if you run a website that supports comments, I urge you not to put this Google service into production, at least for now.

The bottom line is that I view Google’s spam detection systems as currently too prone to false positives — thereby enabling a form of algorithm-driven “censorship” (for lack of a better word in this specific context) — especially by “lazy” sites that might accept Google’s determinations of comment scoring as gospel.

In fact, Google’s track record in this context remains problematic.

You can see this even from the examples that Google provides, where it’s obvious that any given human might easily disagree with Google’s machine-driven comment ranking decisions.

And as someone who deals with significant numbers of comments filtered by Google every day — I have nearly 400K followers on Google+ — I can tell you with considerable confidence that the problem isn’t “spam” comments being missed; it’s completely legitimate non-spam, non-toxic comments that are inappropriately marked as spam and hidden by Google.

Every day, I plow through lots of these (Google makes them relatively difficult to find and see), so that I can “resurface” completely reasonable comments from good people who have been marked as toxic spammers by Google spam detection false positives.

This is a bad situation, and widespread use of “Perspective” at this stage of its development would likely spread this problem around the world.

In fact, much worse than letting a spam or toxic comment through is the AI-based muzzling of a completely innocent comment and commenter, falsely condemned by the machine where a human would not have done so.

The “vanishing” of innocent, legitimate comments through overaggressive algorithms can lead to misunderstandings, confusion, and a general lack of trust in AI systems — and this kind of trust failure can be dangerous for users and the industry alike, since AI’s potential for improving our world is indeed very real.

I’ll have more to say about this later, but for now, while you should of course feel free to experiment with the Google Perspective API, I urge you not to deploy it to any running production systems at this time.
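For those who do experiment, here is a minimal sketch of what a Perspective request looks like. It assumes the v1alpha1 comments:analyze REST endpoint, an API key enabled for the Comment Analyzer API, and the TOXICITY attribute as documented at launch; verify the endpoint and field names against Google’s current documentation before relying on any of it.

#!/usr/bin/env python3
# Minimal sketch of a Perspective API toxicity-scoring call (assumptions:
# the v1alpha1 "comments:analyze" endpoint, the TOXICITY attribute, and an
# API key of your own; field names may have changed since this was written).
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    # Returns the summary TOXICITY probability (0.0 to 1.0) for the text.
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Treat the score as advisory input for a human moderator, not as an
    # automatic publish/suppress decision; false positives are the problem.
    print(toxicity_score("You seem like a reasonable person."))

The design point worth emphasizing is in that last comment: use whatever score comes back to queue comments for human review, not to silently disappear them.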

Be seeing you.

–Lauren–