Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem


In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute mean that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is trying again to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.
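To illustrate that tradeoff (purely as a sketch of my own, not a description of anything Google actually runs, with all names, fields, and weights invented), a trending score could blend raw engagement with a per-channel authority signal; the more weight authority gets, the more unknown eyewitness uploaders get pushed down:

# Hypothetical sketch of authority-weighted trending ranking; NOT YouTube's actual system.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    views_last_hour: int
    channel_authority: float  # 0.0 = unknown uploader ... 1.0 = long-established news source

def trending_score(video: Video, authority_weight: float = 0.5) -> float:
    # authority_weight = 0.0 reproduces pure engagement ranking; higher values
    # increasingly favor established sources, at the risk of burying legitimate
    # eyewitness videos from unknown channels.
    engagement = float(video.views_last_hour)
    return engagement * (1.0 - authority_weight + authority_weight * video.channel_authority)

# Two uploads with identical view counts rank very differently once authority is weighted in.
candidates = [
    Video("eyewitness-clip", views_last_hour=50_000, channel_authority=0.1),
    Video("news-org-report", views_last_hour=50_000, channel_authority=0.9),
]
ranked = sorted(candidates, key=trending_score, reverse=True)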

In the months since last March, when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly with respect to monetization and search visibility of such videos.

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) after major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics that are the most likely to suffer relatively routine “contamination” of their trending lists and results by fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing, frequent human review (appropriate review intervals to be determined).

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.
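As a minimal sketch of the freeze-and-rollback behavior I’m proposing (all of the class and function names here are mine and purely illustrative, not any actual YouTube interface), the key property is simply that once a topic is frozen, no change to its trending results takes effect without explicit human approval, and unreviewed changes fall back to the pre-event snapshot:

# Illustrative freeze/rollback guard for a trending topic; purely hypothetical.
import time
from typing import Callable, Dict, List

class TrendingFreeze:
    def __init__(self, notify_on_call: Callable[[str], None]):
        self.frozen: Dict[str, List[str]] = {}   # topic -> snapshot of pre-event results
        self.notify_on_call = notify_on_call

    def freeze(self, topic: str, current_results: List[str]) -> None:
        # Snapshot the pre-event trending results and page an on-call reviewer.
        self.frozen[topic] = list(current_results)
        self.notify_on_call(f"Trending topic '{topic}' frozen at {time.ctime()}; human review required.")

    def propose_update(self, topic: str, new_results: List[str], human_approved: bool) -> List[str]:
        # While a topic is frozen, changes take effect only with explicit human
        # authorization; otherwise the pre-event snapshot is served (an implicit rollback).
        if topic not in self.frozen:
            return new_results                    # not frozen: normal algorithmic behavior
        if human_approved:
            self.frozen[topic] = list(new_results)
            return new_results
        return self.frozen[topic]                 # unreviewed change rejected

# Usage: an unapproved post-event change is ignored and the pre-event list keeps being served.
guard = TrendingFreeze(notify_on_call=print)
guard.freeze("vegas shooting", ["pre-event-video-1", "pre-event-video-2"])
served = guard.propose_update("vegas shooting", ["suspicious-new-upload"], human_approved=False)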

If this kind of human intervention is restricted to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems must already notify specified Googlers when hardware or software needs immediate attention.

Much the same kind of priority-based paradigm should apply here, quickly bringing humans into the loop when major news events could otherwise trigger rapid degeneration of trending lists and results.
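Schematically (and again, this is just my own hypothetical illustration of the priority threshold, not any real alerting API), only the most serious event classes would page a human at all, which is what keeps the review workload bounded:

# Hypothetical severity-gated paging for trending review, modeled loosely on ops alerting.
from enum import IntEnum
from typing import Callable

class Severity(IntEnum):
    ROUTINE = 1       # normal churn; algorithmic handling only
    ELEVATED = 2      # unusual spike on a known-sensitive topic
    MAJOR_EVENT = 3   # mass-casualty or comparable breaking news

PAGE_THRESHOLD = Severity.MAJOR_EVENT   # invented threshold, for illustration only

def maybe_page_reviewer(topic: str, severity: Severity, page: Callable[[str], None]) -> bool:
    # Page a human only for the most serious cases; everything else stays automated.
    if severity >= PAGE_THRESHOLD:
        page(f"Immediate human review needed for trending topic: {topic}")
        return True
    return False

# Example: a routine spike pages no one; a major event does.
maybe_page_reviewer("celebrity gossip", Severity.ROUTINE, page=print)        # returns False
maybe_page_reviewer("vegas shooting", Severity.MAJOR_EVENT, page=print)      # returns True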

–Lauren–
