Don’t Blame YouTube and Facebook for Hate Speech Horrors

Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto. 

While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.

Let’s be very clear about this. While it can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which the purveyors of hate speech and lying propaganda were leveraging their platforms, they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting. These steps include specific actions regarding takedowns, video suggestions, and other related issues that various observers, including myself, have publicly recommended.

Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.

In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be re-uploaded and altered in myriad ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human reviewers will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, or more likely ever.
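To make the difficulty concrete, here is a minimal, purely illustrative Python sketch (not any platform’s actual system) contrasting an exact cryptographic hash of frame data, which any trivial alteration defeats, with a toy perceptual “average hash” that tolerates small changes. The frame data, the one-level brightness shift, and the similarity comparison are all invented for illustration.

```python
# Illustrative sketch only: why exact matching of re-uploaded media is brittle,
# and how a toy perceptual "average hash" tolerates a trivial alteration.
import hashlib
import numpy as np

def exact_hash(frame: np.ndarray) -> str:
    """Cryptographic hash of raw pixels: any single-bit change breaks the match."""
    return hashlib.sha256(frame.tobytes()).hexdigest()

def average_hash(frame: np.ndarray, grid: int = 8) -> int:
    """Toy perceptual hash: average over an 8x8 grid, threshold at the mean."""
    h, w = frame.shape
    blocks = frame[:h - h % grid, :w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)  # stand-in "frame"

# Simulate a trivially altered re-upload: brightness shifted by one level.
altered = np.clip(original.astype(int) + 1, 0, 255).astype(np.uint8)

print(exact_hash(original) == exact_hash(altered))             # False: exact match defeated
print(hamming(average_hash(original), average_hash(altered)))  # small distance: still "similar"
```

Real matching systems work on whole videos and audio tracks with far more robust fingerprints, but the underlying cat-and-mouse dynamic is the same: every detector can be probed and, with enough effort, evaded.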

Meanwhile, other demands being bandied about are equally specious.

Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.

Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are equally impractical. Even with enormous resources, you couldn’t hire enough people to fully preview the vast number of videos being uploaded every minute. Not only would full previews be required — since a prohibited clip could be spliced into otherwise permitted footage — but misidentifications would still occur.

Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evil from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.

Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.

But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem. 

Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.

In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).

The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?

The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites. 

They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.

You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.

The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.

The battle lines are drawn. 

–Lauren–

As Google’s YouTube Battles Evil, YouTube Creators Are at a Crossroads

UPDATE (February 28, 2019): YouTube has announced “More updates on our actions related to the safety of minors on YouTube.”

 – – –

For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!

The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.

YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.

And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.

YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of any mere mortal, with the exception of Googlers like the YouTube SREs themselves who keep the wheels spinning for the entire massive mechanism.

In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.

I’ve discussed all of these aspects of YouTube — and my critiques, both positive and negative, of how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.

I will note again though that — especially of late — Google has become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removal of dangerous “prank and dare” videos, demonetization and general de-recommendation of false “conspiracy” videos, and, just announced, demonetization and other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos.

This must be an even more intense time than usual for the YouTube policy folks up in San Bruno at YouTube HQ, because over the last few days yet another massive controversy regarding YouTube has erupted. This one has been bubbling under the surface for a long time, and it has suddenly burst forth dramatically and rather confusingly, involving the “hijacking” of innocent YouTube videos’ comments by pedophiles.

YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. And many YouTube uploaders — I’ll refer to them as creators going forward in this post — are effectively oblivious to comments even on their own videos, which, given that the default setting for YouTube videos is to permit comments without any moderation, has become an increasingly problematic issue.

My own policy (started as soon as the functionality to do so became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.
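For creators comfortable with a bit of scripting, some of this moderation workload can in principle be automated. What follows is a hedged sketch, assuming the YouTube Data API v3 endpoints commentThreads.list (with its moderationStatus filter) and comments.setModerationStatus, which require channel-owner OAuth authorization. The video ID, client-secrets file, and keyword heuristics are placeholders for illustration only, not a recommended filtering policy.

```python
# Sketch: fetch comments held for review on one of your own videos and
# approve or reject them using a trivial keyword heuristic. Placeholder
# values throughout; requires channel-owner OAuth plus the
# google-api-python-client and google-auth-oauthlib packages.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

VIDEO_ID = "YOUR_VIDEO_ID"                             # placeholder
BLOCKLIST = ("subscribe to my channel", "free gift")   # toy heuristic only

creds = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",                              # placeholder OAuth client file
    ["https://www.googleapis.com/auth/youtube.force-ssl"],
).run_local_server(port=0)

youtube = build("youtube", "v3", credentials=creds)

held = youtube.commentThreads().list(
    part="snippet",
    videoId=VIDEO_ID,
    moderationStatus="heldForReview",
    maxResults=100,
).execute()

for item in held.get("items", []):
    comment = item["snippet"]["topLevelComment"]
    text = comment["snippet"]["textDisplay"].lower()
    status = "rejected" if any(term in text for term in BLOCKLIST) else "published"
    youtube.comments().setModerationStatus(
        id=comment["id"], moderationStatus=status
    ).execute()
    print(f"{status}: {text[:60]}")
```

Even so, tooling like this only helps creators who know it exists and actively use it, and most apparently never touch the defaults at all.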

In fact, a few minutes ago when I took another look at that first YouTube video (“Me at the zoo”) to make sure that I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up on there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spams and scams.

Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.

There have already been a bunch of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault for suggesting more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault; the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos.

It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.

I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.

To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.

The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.

While I’d anticipate that Google is working on methods to algorithmically better filter comments at scale to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” against innocent individual YouTube creators and videos, in an attempt to trigger YouTube enforcement actions against those innocent parties.
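As a rough illustration of what “algorithmic filtering” might involve under the hood, here is a minimal sketch of a learned comment scorer, assuming a labeled corpus of abusive or spammy comments versus benign ones. The tiny training set below is invented, and any production system would be vastly more sophisticated, multilingual, and continuously retrained; the point is only that scoring plus a hold-for-review threshold keeps a human in the loop rather than auto-removing content.

```python
# Toy sketch of a learned comment filter: score comments and hold suspicious
# ones for human review. Training data and threshold are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "great tutorial, thanks for the clear explanation",      # benign
    "loved the behind-the-scenes stories in this one",        # benign
    "sub to my channel for free gift cards",                  # spam/abuse pattern
    "click this link to win a prize, limited time only",      # spam/abuse pattern
]
train_labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive/spam (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, train_labels)

# Anything scoring above a (tunable) threshold is held for review, not auto-removed.
THRESHOLD = 0.5
for text in ["thanks, this really helped me", "sub to my channel for prizes"]:
    score = model.predict_proba([text])[0][1]
    action = "hold for review" if score > THRESHOLD else "publish"
    print(f"{score:.2f}  {action}: {text}")
```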

That weaponization risk could easily create a terrible dilemma. For safety’s sake, these innocent creators may be forced to disable comments completely, in the process eliminating much of the value of their videos to their viewers. On the other hand, many creators of high-viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.

A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible. 

Perhaps it might be necessary to consider permanently changing the default comments setting away from “allowed” — to either “not allowed” or “moderated” — for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy alteration appear difficult to predict.

Improved tools to aid creators in moderating comments on high-viewership videos would also seem to be in order — perhaps by leveraging third-party services or trusted viewer communities.

There are a variety of other possible approaches as well.

It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads, a junction that will likely require some significant changes to navigate successfully, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained and to grow.

–Lauren–

Another Positive Move by YouTube: No More General “Conspiracy Theory” Suggestions

A few weeks ago, I noted the very welcome news that Google’s YouTube is cracking down on the presence of dangerous prank and dare videos, rightly categorizing them as potentially harmful content no longer permitted on the platform. Excellent.

Even more recently, YouTube announced a new policy regarding the category of misleading and clearly false “conspiracy theory” videos that would sometimes appear as suggested videos.

Quite a few folks have asked me how I feel about this newer policy, which aims to prevent this category of videos from being suggested by YouTube’s algorithms, unless a viewer is already subscribed to the YouTube channels that uploaded the videos in question.

The policy will take time to implement given the significant number of videos involved and the complexities of classification, but I feel that overall this new policy regarding these videos is an excellent compromise.

If you’re a subscriber to a conspiracy video hosting channel, conspiracy videos from that channel would still be suggested to you.

Otherwise, if you don’t subscribe to such channels, you could still find these kinds of videos if you purposely search for them — they’re not being removed from YouTube.
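Expressed as a toy rule (my own illustration, not YouTube’s actual implementation), the policy described above amounts to something like this:

```python
# Toy illustration of the described suggestion policy: videos classified as
# "conspiracy" content are only eligible for suggestion to viewers who already
# subscribe to the uploading channel; everything else is unaffected.
def eligible_for_suggestion(video: dict, viewer_subscriptions: set) -> bool:
    if not video.get("classified_conspiracy", False):
        return True                                      # normal videos: unchanged
    return video["channel_id"] in viewer_subscriptions   # conspiracy: subscribers only

candidate = {"channel_id": "UC_example", "classified_conspiracy": True}
print(eligible_for_suggestion(candidate, {"UC_example"}))  # True  (subscriber)
print(eligible_for_suggestion(candidate, {"UC_other"}))    # False (non-subscriber)
```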

A balanced approach to a difficult problem. Great work!

–Lauren–

Another Massive Google User Trust Failure, As They Kill Louisville Fiber on Short Notice

It’s getting increasingly difficult to keep up with Google’s User Trust Failures these days, as they continue to rapidly shed “inconvenient” users faster than a long-haired dog. I do plan a “YouTube Live Chat” to discuss these issues and other Google-related topics, tentatively scheduled for Tuesday, February 12 at 10:30 AM PST. The easiest way to get notifications about this would probably be to subscribe to my main YouTube channel at: https://www.youtube.com/vortextech (be sure to click on the “bell” after subscribing if you want real time notifications). I rarely promote the channel but it’s been around for ages. Don’t expect anything fancy.

In the meantime, let’s look at Google’s latest abominable treatment of users, and this time it’s users who have actually been paying them with real money!

As you probably know, I’ve recently been discussing Google’s massive failures involving the shutdown of Google+ (“Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening” – https://lauren.vortex.com/2019/02/04/google-users-panic-over-google-deletion-emails-heres-whats-actually-happening).

Google has been mistreating loyal Google users — among the most loyal that they have, and often the decision makers regarding Google commercial products — in the process of shutting down G+ on very short notice.

One might think that Google wouldn’t treat their paying customers as badly — but hey, you’d be wrong.

Remember when Google Fiber was a “thing” — when cities actually competed to be on the Google Fiber deployment list? It’s well known that incumbent ISPs fought Google on this tooth and nail, but there was always a suspicion that Google wasn’t really in this for the long haul, that it was really more of an experiment and an effort to try to jump-start other firms into deploying fiber-based Internet and TV systems.

Given that the project has been downsizing for some time now, Google’s announcement today that they’re pulling the plug on the Louisville Google Fiber system doesn’t come as a complete surprise.

But what’s so awful about their announcement is the timing, which shows Google’s utter contempt for their Louisville fiber subscribers, on a system that only got going around two years ago.

Just a relatively short time ago, in August 2018, Google was pledging to spend the next two years dealing with the fiber installation mess that was occurring in their Louisville deployment areas (“Google Fiber announces plan to fix exposed fiber lines in the Highlands” – https://www.wdrb.com/news/google-fiber-announces-plan-to-fix-exposed-fiber-lines-in/article_fbc678c3-66ef-5d5b-860c-2156bc2f0f0c.html).

But now that’s all off. Google is giving their Louisville subscribers notice that they have only just over two months before their service ends. Go find ye another ISP in a hurry, oh suckers who trusted us!

Google will provide those two remaining months’ service for free, but that’s hardly much consolation for their subscribers who now have to go through all the hassles of setting up alternate services with incumbent carriers who are laughing their way to the bank.

Imagine if one of those incumbent ISPs, like a major telephone or cable company, tried a shutdown stunt like this with only a couple of months’ notice. They’d rightly be raked over the coals by regulators and politicians.

Google claims that this abrupt shutdown of the Louisville system will have no impact on other cities where Google Fiber is in operation. Perhaps so — for now. But as soon as Google finds those other cities “inconvenient” to serve any longer, Google will most likely trot out the guillotines to subscribers in those cities in a similar manner. C’mon, after treating Louisville this way, why should Fiber subscribers in other cities trust Google when it comes to their own Google-provided services?

Ever more frequently now, this seems to be The New Google’s game plan. Treat users — even paying users — like guinea pigs. If they become inconvenient to care for, give them a couple of months’ notice and then unceremoniously flush them down the toilet. Thank you for choosing Google!

Google is day by day becoming unrecognizable to those of us who have long felt it to be a great company that cared about more than just the bottom line.

Googlers — the rank and file Google employees and ex-employees whom I know — are still great. Unfortunately, as I noted in “Google’s Brain Drain Should Alarm Us All” (https://lauren.vortex.com/2019/01/12/googles-brain-drain-should-alarm-us-all), some of their best people are leaving or have recently left, and it becomes ever more apparent that Google’s focus is changing in ways that are bad for consumer users and causing business users to question whether they can depend on Google to be a reliable partner going forward (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google).

In the process of all this, Google is making itself ever more vulnerable to lying Google Haters — and to pandering politicians and governments — who hope to break up the firm and/or suck in an endless money stream of billions in fines from Google to prop up failing 20th century business models.

The fact that Google for the moment is still making money hand over fist may be partially blinding their upper management to the looming brick wall of government actions that could potentially stop Google dead in its tracks — to the detriment of pretty much everyone except the politicos themselves.

I remain a believer that suggested new Google internal roles such as ombudspersons, user advocates, ethics officers, and similar positions — all of which Google continues to fight against creating — could go a long way toward bringing balance back to the Google equation that is currently skewing ever more rapidly toward the dark side.

I continue — perhaps a bit foolishly — to believe that this is still possible. But I am decreasingly optimistic that it shall come to pass.

–Lauren–

Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening

Two days ago I posted “Google’s Google+ Shutdown Emails Are Causing Mass Confusion” (https://lauren.vortex.com/2019/02/02/googles-google-shutdown-emails-are-causing-mass-confusion) — and the reactions I’m receiving make it very clear that the level of confusion and panic over this situation among vast numbers of Google users is even worse than I originally realized. My inbox is full of emails from worried users asking for help and clarifications that they can’t find or get from Google (surprise!) — and my Google+ (G+) threads on the topic are similarly overloaded with desperate comments. People are telling me that their friends and relatives have called them, asking what this all means.

Beyond the trust-abusing manner in which Google has been conducting the entire consumer Google+ shutdown process (even their basic “takeout” tool to download your own posts is reported to be unreliable for G+ downloads at this point), their notification emails, which I had long urged be sent to provide clarity to users, were instead worded in ways that have massively confused many users, enormous numbers of whom don’t even know what Google+ actually is. These users typically don’t understand the ways in which G+ is linked to other Google services. They understandably fear that their other Google services may be negatively affected by this mess.

Since Google isn’t offering meaningful clarification for panicked users — presumably taking its usual “this too shall pass” approach to user support problems — I’ll clarify this all as succinctly as I can — to the best of my knowledge — right here in this post.

UPDATE (February 5, 2019): Google has just announced that the Web notification panel primarily used to display G+ notifications will be terminated this coming March 7. This cuts another month off the useful life of G+, right when we’ll need notifications the most to coordinate with our followers for continuing contacts after G+. Without the notification panel, this will be vastly more difficult, since the alternative notifications page is very difficult to manage. No apologies. No nuthin’. First it was August. Then April. Now March. Can Google mistreat consumer users any worse? You can count on it!

Here’s an important bottom line: Core Google Services that you depend upon such as Gmail, Drive, Photos, YouTube, etc. will not be fundamentally affected by the G+ shutdown, but in some cases visible effects may occur due to the tight linkages that Google created between G+ and other services.

No, your data on Gmail or Drive won’t be deleted by the Google+ shutdown process. Your uploaded YouTube videos won’t be deleted by this.

However, beyond the total loss of trust among loyal Google+ users, triggered by the kick in the teeth of the Google+ shutdown (without even the provision of a tool to help with follower migration; see “If Google Cared: The Tool That Could Save Google+ Relationships” – https://lauren.vortex.com/2019/02/01/if-google-cared-the-tool-that-could-save-google-relationships), a variety of other Google services will have various aspects “break” as a result of Google’s actions related to Google+.

To understand why, it’s important to understand that when Google+ was launched in 2011, it was positioned more as an “identity” product than a social media product per se. While it might have potentially competed with Facebook in some respects, creating a platform for “federated” identity across a wide variety of applications and sites was an important goal, and in the early days of Google+, battles ensued over such issues as whether users would continue to be required to use their ostensibly “real” names for G+ (aka, the “nymwars”).

Google acted to integrate this identity product — that is, Google+ — into many Google services and heavily promoted the use of G+ “profiles” and widgets (comments, +1 buttons, “follow” buttons, login functions, etc.) for third-party sites as well.

In some cases, Google required the creation of G+ profiles for key functions on other services, such as for creating comments on YouTube videos (a requirement that was later dropped after user reactions in both the G+ and YouTube communities were overwhelmingly negative).

Now that consumer G+ has become an “inconvenience” to Google, they’re ripping it out by the roots and attempting to completely eliminate any evidence of its existence, by totally removing all G+ posts, comments, and the array of G+ functions that they had intertwined with other services and third-party sites.

This means that anywhere that G+ comments have continued to be present (including Google services like “Blogger”), those comments will vanish. Users whom Google had encouraged at other sites and services to use G+ profile identities (rather than the underlying Google Account identities) will find those capabilities and profiles will disappear. Sites that embedded G+ widgets and functions will have those capabilities crushed, and their page formats in many cases disrupted as a result. Photos that were stored only in G+ and not backed up into the mainstream Google Photos product will reportedly be deleted along with all the G+ posts and comments.

And then on top of all this other Google-created mayhem related to their mishandling of the G+ shutdown, we have those panic-inducing emails going out to enormous numbers of Google users, most of whom don’t understand them. They can’t get Google to explain what the hell is going on, especially in a way that makes sense if you don’t understand what G+ was in the first place, even if somewhere along the line Google finessed you into creating a G+ account that you never actually used.

There’s an old saying — many of you may have first heard it stated by “Scotty” in an old original “Star Trek” episode: “Fool me once, shame on you — fool me twice, shame on me!”

In a nutshell, this explains why so many loyal users of great Google services — services that we depend on every day — are so upset by how Google has handled the fiasco of terminating consumer Google+. This applies whether or not these users were everyday, enthusiastic participants in G+ itself (as I’ve been since the first day of beta availability) — or even if they don’t have a clue of what Google+ is — or was.

Even given the upper management decision to kill off consumer Google+, the actual process of doing so could have been handled so much better — if there had been genuine concern about all of the affected users. Frankly, it’s difficult to imagine realistic scenarios in which Google could have bungled this situation any worse.

And that’s very depressing, to say the least.

–Lauren–