A Proposal to Google: How to Stop Evil from Trending on YouTube

Late last year, in the wake of the Las Vegas shooting tragedy (I know, keeping track of U.S. mass shootings is increasingly difficult when they're increasingly common), I suggested in general terms some ways that YouTube could avoid spreading disinformation and false conspiracy theories after these kinds of events:

“Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news

I’ve also expressed concerns that YouTube’s current general user interface does not encourage reporting of hate or other abusive videos:

“How YouTube’s User Interface Helps Perpetuate Hate Speech” – https://lauren.vortex.com/2017/03/26/how-youtubes-user-interface-helps-perpetuate-hate-speech

Now, here we are again. Another mass shooting. Another spew of abusive, dangerous, hateful, false conspiracy videos and other related content on YouTube that clearly violated YouTube's Terms of Use, yet still managed to push high up onto YouTube's trending lists — this time aimed at vulnerable student survivors of the Florida high school tragedy of around a week ago.

Google has stated that the reason one of the worst of these reached top trending status on YouTube was an automated misclassification: an embedded news report "tricked" YouTube's algorithms into treating the entire video as legitimate.

No algorithm is perfect, and YouTube's scale is immense. But this all raises the question: would a trained human observer have made the same mistake?

No. It’s very unlikely that a human who had been trained to evaluate video content would have been fooled by such an embedding technique.

Of course, as soon as anyone mentions "humans" in relation to analysis of YouTube videos, various questions of scale pop immediately into focus. Hundreds of hours of content are uploaded to YouTube every minute. YouTube's scope is global, so this data firehose includes videos concerning pretty much any conceivable topic, in a vast array of languages.

Yet Google is not without major resources in this regard. They've publicly noted that they have sizable teams reviewing videos that users have flagged as potentially abusive, and have announced that they are in the process of expanding those teams.

Still, the emphasis to date has seemed to be on removing abusive videos “after the fact” — often after they’ve already quickly achieved enormous view counts and done significant damage to victims.

A more proactive approach is called for.

One factor to keep in mind is that while very large numbers of videos are continuously pouring into YouTube, the vast majority of them will never rapidly accumulate high view counts. These videos comprise YouTube's massive "long tail."

Conversely, at any given time only a relative handful of videos are going "viral" and accumulating large numbers of views in very short periods of time.

While any and all abusive videos are of concern, as a practical matter we need to direct most of our attention to those trending videos that can do the most damage the most quickly.  We must not permit the long tail of less viewed videos to distract us from promptly dealing with abusive videos that are currently being seen by huge and rapidly escalating numbers of viewers.

YouTube employees need to be more deeply “in the loop” to curate trending lists much earlier in the process.

As soon as a video goes sufficiently viral to technically “qualify” for a trending list, it should be immediately pushed to humans — to the YouTube abuse team — for analysis before the video is permitted to actually “surface” on any of those lists.

If the video isn’t abusive or otherwise in violation of YouTube rules, onto the trending list it goes and it likely won’t need further attention from the team. But if it is in violation, the YouTube team would proactively block it from ever going onto trending, and would take other actions related to that video as appropriate (which could include removal from YouTube entirely, strikes or other actions against the uploading YouTube account, and so on).
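
To make that flow concrete, here is a minimal sketch in Python of what such a "review before trending" gate might look like. The threshold, names, and data structures are entirely hypothetical (YouTube's actual internal systems are of course not public); the point is simply that crossing the viral threshold feeds a human review queue rather than the trending list itself.

    import queue

    # Hypothetical threshold -- real trending criteria are not public.
    VIRAL_VIEWS_PER_HOUR = 50_000

    review_queue = queue.Queue()   # candidates held BEFORE they can surface
    trending_list = []             # videos that passed human review

    def on_view_rate_update(video_id, views_per_hour):
        """When a video's view velocity crosses the trending threshold,
        hold it for human review instead of surfacing it immediately."""
        if views_per_hour >= VIRAL_VIEWS_PER_HOUR and video_id not in trending_list:
            review_queue.put(video_id)

    def flag_for_enforcement(video_id):
        """Placeholder for removal, account strikes, and so on."""
        print(video_id + ": blocked from trending, escalated for enforcement")

    def process_review_queue(human_verdicts):
        """Drain the queue using decisions supplied by the abuse team;
        human_verdicts maps video_id -> True (acceptable) or False (violation)."""
        while not review_queue.empty():
            video_id = review_queue.get()
            if human_verdicts.get(video_id, False):
                trending_list.append(video_id)   # only now may it surface
            else:
                flag_for_enforcement(video_id)   # never reaches trending

    # Example: two videos cross the viral threshold at about the same time.
    on_view_rate_update("how-to-fix-a-faucet", 61_000)
    on_view_rate_update("staged-crisis-conspiracy", 95_000)
    process_review_queue({"how-to-fix-a-faucet": True, "staged-crisis-conspiracy": False})
    print("Trending:", trending_list)

Note that in this sketch nothing surfaces on trending without a human verdict, while the long tail of low-velocity videos never enters the review queue at all, which is what keeps the human workload bounded.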

There simply is no good reason today for horrifically abusive videos to appear on YouTube trending lists, and even worse, in some cases to persist on those lists for hours, even rising to top positions — giving them enormous audiences and potentially doing serious harm.

Yes, fixing this will be significant work.

Yes, this won’t be cheap to do.

And yes, I believe that Google has the capabilities to accomplish this task.

The dismal alternative is the specter of knee-jerk, politically-motivated censorship of YouTube by governments, actions that could effectively destroy much of what makes YouTube a true wonder of the world, and one of my all-time favorite sites on the Internet.

–Lauren–

Why the Alt-Right Loves Google’s Diversity Conundrum

Google seems to be taking hits from all sides these days, and the announcement of another “diversity” lawsuit directed at the firm by an ex-employee only adds to the escalating mix.

The specific events related to these suits all postdate my consulting inside Google some years ago, but I know a lot of Googlers — among the best people I know, by the way — and I still have a pretty good sense of how Google’s internal culture functions.

Google is in a classic “damned if you do and damned if you don’t” position right now, exacerbated by purely political forces (primarily of the alt-right) that are attempting to leverage these situations to their own advantage — and ultimately to the disadvantage of Google, Google’s users, and the broader community at large.

This all really began with Google’s completely justified firing of alt-right darling James Damore after he internally promulgated what is now widely known as his “anti-diversity” memo.

The crux of the matter — as I see it, anyway — is that while Google’s internal discussion culture is famously vibrant and open (I can certainly attest to that myself!) — Google still has a corporate and ethical responsibility to provide a harassment-free workplace. That’s why Damore’s memo resulted in his termination.

But “harassment” (at least in a legal sense) doesn’t necessarily only apply to one side of these arguments.

To put this into more context, I need only think of various corporate environments that I’ve seen over my career, where it would have been utterly unthinkable to have the level of open discussion that is not only permitted by Google but encouraged there. At many firms today, Google’s internal openness in this regard would still be prohibited.

Many Googlers have never experienced these more typical corporate workplaces, where open discussion of a vast range of topics is impractical or prohibited.

Yet even in an open discussion environment like Google's, there have to be some limits. This is particularly true with personnel issues like diversity, which not only involve complex legal matters, but can also be extremely sensitive personally to individual employees.

The upshot of all this — in my opinion — is that "public" internal personnel discussions per se are generally inappropriate for any corporate environment given the current legal and toxic political landscapes. Evil forces are ready and willing to latch onto any leaks to further their own destructive agendas, e.g. as I discussed in "How the Alt-Right Plans to Control Google" — https://lauren.vortex.com/2017/09/29/how-the-alt-right-plans-to-control-google — and in other posts.

Personnel matters are much better suited to direct and private communications with corporate HR than to widely viewed internal discussion forums.

This isn’t a happy analysis for me. Most of us either know victims of harassment or have been harassed one way or another ourselves. And it’s clear that the kinds of harassment most in focus today are largely being encouraged by alt-right perpetrators, up to and including the sociopath currently in the Oval Office.

But in the long run, acting compulsively on our gut instincts in these regards — however noble those instincts may be — can be positively disastrous to our attempts to stop harassment and other evils. How and where these discussions take place can be fully as important as the actual contents of the discussions themselves. Insisting on such discussions within inappropriate environments, especially when complicated laws and “go for the jugular” external politics can be involved, is typically very much a losing tactic.

Overall, I believe that Google is handling this situation in pretty much the best ways that are actually possible today.

–Lauren–

“How-To” Videos — The Unsung Heroes of YouTube!

With so much criticism lately being directed at the more “unsavory” content on YouTube that I’ve discussed previously, it might be easy to lose track of why I’m still one of YouTube’s biggest fans.

Anyone could be forgiven for forgetting that despite the highly offensive or even dangerous videos on YouTube that can attract millions of views and understandable public scrutiny, there are many other types of YT videos that attract much less attention but collectively do an incalculably large amount of good.

One example is YT’s utterly enormous collection of legitimate and incredibly helpful “How-To” videos — covering a breathtaking array of topics.

I’m not referring here to “formal” education videos — though these are also present in tremendous numbers and are usually very welcome indeed. Nor am I just now discussing product installation and similar videos often posted by commercial firms — though these are also often genuinely useful.

Rather, today I’d like to highlight the wonders of “informal” YT videos that walk viewers through the “how-to” or other explanatory steps regarding pretty much any possible topic involving computers, electronics, plumbing, automotive, homemaking, hobbies, sports — seemingly almost everything under the sun.

These videos are typically created by a cast and crew of one individual, often without any formal on-screen titles, background music, or other “fancy” production values.

It’s not uncommon to never see the faces of these videos’ creators. Often you’ll just see their hands at a table or workbench — and hear their informal voice narration — as they proceed through the learning steps of whatever topic that they wish to share.

These videos tend with remarkable frequency to begin with the creator saying “Hi guys!” or “Hey guys!” — and often when you find them they’ll only have accumulated a few thousand views or even fewer.

I’ve been helped by videos like these innumerable times over the years, likely saving me thousands of dollars and vast numbers of wasted hours — permitting me to accomplish by myself projects that otherwise would have been expensive to have done by others, and helping me to avoid costly repair mistakes as well.

To my mind, these kinds of "how-to" creators and their videos aren't just among the best parts of YouTube; they're also shining stars that represent much of what, many years ago, we hoped the Internet would become.

These videos are the result of individuals simply wanting to share knowledge to help other people. These creators aren’t looking for fame or recognition — typically their videos aren’t even monetized.

These “how-to” video makers are among the very best not only of YouTube and of the Internet — but of humanity in general as well. The urge to help others is among our species’ most admirable traits — something to keep in mind when the toxic wasteland of Internet abuses, racism, politicians, sociopathic presidents — and all the rest — really start to get you down.

And that’s the truth.

–Lauren–

Facebook’s Very Revealing Text Messaging Privacy Fail

As I've frequently noted, one of the reasons that it can be difficult to convince users to provide their phone numbers for account recovery and/or 2-step, multiple-factor authentication/verification login systems is that many persons fear that the firms involved will abuse those numbers for other purposes.

In the case of Google, I've emphasized that their excellent privacy practices and related internal controls (Google's privacy team is world class) make any such concerns utterly unwarranted.

Such is obviously not the case with Facebook. They’ve now admitted that a “bug” caused mobile numbers provided by users for multiple-factor verification to also be used for spamming those users with unrelated text messages. Even worse, when users replied to those texts their replies frequently ended up being posted on their own Facebook feeds! Ouch.

What’s most revealing here is what this situation suggests about Facebook’s own internal privacy practices. Proper proactive privacy design would have compartmentalized those phone numbers and associated data in a manner that would have prevented a “bug” like this from ever triggering such abuse of those numbers.
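
To illustrate what I mean by compartmentalization, here is a purely hypothetical sketch in Python (this is emphatically not Facebook's actual code, and all of the names are my own invention). The idea is that numbers collected for authentication live behind an interface that checks the declared purpose of every request, so that an unrelated texting "bug" simply cannot reach them.

    class PhoneNumberVault:
        """Holds numbers collected for a narrow, declared purpose and refuses
        to release them for anything else. A hypothetical sketch, not real code."""

        ALLOWED_PURPOSES = {"two_factor_auth", "account_recovery"}

        def __init__(self):
            self._numbers = {}   # user_id -> phone number

        def store(self, user_id, number, purpose):
            if purpose not in self.ALLOWED_PURPOSES:
                raise PermissionError("cannot store numbers for: " + purpose)
            self._numbers[user_id] = number

        def get(self, user_id, purpose):
            if purpose not in self.ALLOWED_PURPOSES:
                # An unrelated messaging "bug" never even reaches the data.
                raise PermissionError(purpose + " may not access these numbers")
            return self._numbers.get(user_id)

    vault = PhoneNumberVault()
    vault.store("user123", "+1-555-0100", "two_factor_auth")
    print(vault.get("user123", "two_factor_auth"))   # OK: login verification
    try:
        vault.get("user123", "notification_texts")   # blocked by design
    except PermissionError as err:
        print("Denied:", err)

With that kind of purpose check sitting at the data access layer, a bug in an unrelated messaging system fails loudly instead of silently spamming users with texts.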

Facebook’s sloppiness in this regard has now been exposed to the entire world.

And naturally this raises a much more general concern.

What other sorts of systemic privacy design failures are buried in Facebook’s code, waiting for other “bugs” capable of freeing them to harass innocent Facebook users yet again?

These are all more illustrations of why I don’t use Facebook. If you still do, I recommend continuous diligence regarding your privacy on that platform — and lotsa luck — you’re going to need it!

–Lauren–

Blaming YouTube or the FBI for Yesterday’s School Shooting Tragedy Is Just Plain Wrong [Updated]

UPDATE (February 16, 2018): The FBI is reporting today that on January 5th of this year, they received a tip from an individual close to the shooter, specifically noting concerns about his guns and a possible school shooting. In sharp contrast to the single unverifiable YouTube comment discussed below that had been reported to the FBI, the very specific information apparently provided in the January tip is precisely the kind of data that should have triggered a full-blown FBI investigation. Since the information from this January tip reportedly was never acted upon, this dramatically increases FBI culpability in this case.

– – –

Before the blood had even dried in the classrooms of the Florida high school that was the venue for yet another mass shooting tragedy, authorities and politicians were out in force trying to assign blame everywhere.

That is, everywhere except for the fact that a youth too young to legally buy a handgun was able to legally buy an AR-15 assault-style weapon that he used to conduct his massacre.

Much of the misplaced blame this time is being lobbed at social media. The shooter, whom we now know had mental health problems but apparently had never been adjudicated as mentally ill, had a fairly rich social media presence, so the talking heads are blaming firms like YouTube and agencies like the FBI for not "connecting the dots" to prevent this attack.

But the reality is that (as far as I can tell at this point) there wasn’t anything particularly remarkable about his social media history in today’s Internet environment.

There was — sad to say — nothing notable to differentiate his online activities from vast numbers of other profiles, posts, and comments that feature guns, knives, and provocatively “violent” types of statements. This is the state of the Net today — flooded with such content. When I block trolls on Google+, I usually first take a quick survey of their profiles. I’d say that at least 50% of the time they fall into the kinds of categories I’ve mentioned above.

We also know that 99+% of these kinds of users are not actually going to commit violent acts against people or property.

20/20 hindsight is great, but by definition it doesn’t have any predictive value in situations like this. Law enforcement couldn’t possibly have the resources to investigate every such posting.

In the case of this shooter, the FBI actually did become involved, after a YouTube user expressed concern about a comment left by someone (using the name of the shooter) saying "I'm going to be a professional school shooter."

That’s not even an explicit threat. There’s no specified time or place. It’s very nasty, but not illegal to say. Social media is replete with far more explicit and scary statements that would be much more difficult to categorize as likely sarcasm or darkly joking around.

The FBI reportedly did a routine records search on that name (of course, anyone can post pretty much anything under any name), and found nothing relevant. To have expended more resources based only on that single comment didn’t make sense. Nor is there apparently any reason to believe that if they’d located that individual, then gone out and immediately interviewed him, that the course of later events would have been significantly changed.

We’re also hearing the refrain that authorities should have the right to haul in anyone reported to have mental stability issues of any kind, even if they’ve never been treated for mental illness or been arrested for any crime.

Well golly, these days that would probably include about four-fifths of the population, if not more. Pretty much everyone is nuts these days in our toxic social and political environments, one way or another.

The world is full of loonies, but these kinds of attacks only happen routinely here in the U.S. — and we all know in our hearts that the trivial availability of powerful firearms is the single relevant differentiating factor that separates us from the rest of the civilized world in this respect.

And that’s the tragic truth.

–Lauren–