Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto.
While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who bizarrely recommended that everyone read the manifesto, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.
Let’s be very clear about this. It can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which purveyors of hate speech and lying propaganda were leveraging their platforms. But they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting, including specific actions on takedowns, video suggestions, and related issues that various observers, myself included, had publicly recommended.
Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.
In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be resurfaced and altered in myriad ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human reviewers will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, and more likely never.
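To see why small alterations defeat naive blocking, consider the difference between exact hashing and perceptual hashing. The sketch below uses a toy 8×8 “frame” of made-up pixel values (purely hypothetical data, not any real matching system): a cryptographic hash changes completely when every pixel is brightened by 2, so an exact-match blocklist misses the copy, while a simple average-hash fingerprint of the same frames stays within matching distance.

```python
import hashlib

# Toy 8x8 grayscale "frame" as a flat list of pixel values (hypothetical data).
frame = [(x * 7 + y * 13) % 256 for x in range(8) for y in range(8)]
# A slightly altered re-upload: brightness nudged up by 2 on every pixel.
altered = [min(255, p + 2) for p in frame]

# Exact (cryptographic) hashing: any change yields a totally different
# digest, so an exact-match blocklist is trivially evaded.
h1 = hashlib.sha256(bytes(frame)).hexdigest()
h2 = hashlib.sha256(bytes(altered)).hexdigest()
print(h1 == h2)  # False

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# Perceptual hashing: the altered frame's fingerprint stays within a small
# Hamming distance of the original, so a threshold match still catches it.
d = hamming(average_hash(frame), average_hash(altered))
print(d)  # 0 of 64 bits differ for this uniform brightness shift
```

Real systems are far more sophisticated, of course, but the arms race is the same shape: attackers crop, re-encode, mirror, and overlay until the fingerprint drifts past whatever threshold is in use, which is why detection can improve continually yet never be airtight.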
Meanwhile, other demands being bandied about are equally specious.
Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.
Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are equally impractical. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous number of videos being uploaded every minute. Not only would full previews be required — since a prohibited clip could be spliced into permitted footage — there would still be misidentifications.
Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined bad actors from “playing nice” long enough for the restrictions to be lifted, and then orchestrating their attacks.
Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.
But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem.
Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.
In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).
The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?
The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites.
They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.
You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.
The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.
The battle lines are drawn.
2 thoughts on “Don’t Blame YouTube and Facebook for Hate Speech Horrors”
Yeah, I was shocked to learn that the New Zealand video was uploaded 1.5 million times in the period immediately after the crime. Many of those videos were each tweaked and adjusted slightly in order to get around Google’s content filters.
Now, it seems to me that if you have to tweak your video in order to get around Google’s rules, you KNOW you’re deliberately doing something wrong and breaking the rules. You know you’re committing what amounts to a crime, only that act isn’t yet illegal. Perhaps it should be. Perhaps it already is.
Hacking is illegal nowadays. If I use my knowledge of computers to get into a server or platform, against the wishes of the owner of that platform, it’s a crime. I see no difference between that and tweaking a video so that it gets into a server, onto a platform, against the wishes of the owner of that platform. You’re breaking or tricking a computer into letting you do something you’re not supposed to do on that platform. That’s hacking and it’s a crime.
Whether or not it’s technically hacking or technically a crime in this particular instance isn’t really the point — such uploads are abusive in any case.