Recently, in Crushing the Internet Liars, I discussed issues relating to the proliferation of “fake news” on the Internet (via social media, search, and other means), and the role that personalization-based “filter bubbles” and “echo chambers” play in that proliferation, among other effects.
A tightly related set of concerns, also rising to prominence during and after the 2016 election, is the even broader problem of Internet-based hate speech and harassment. The emboldening of truly vile “Alt-Right” and other racist, antisemitic white supremacist groups and users in the wake of Trump’s election has greatly exacerbated these continuing offenses to ethics and decency (and, in some cases, actual violations of law).
Lately, Twitter has been taking the brunt of public criticism regarding harassment and hate speech, and their newly announced measures to supposedly combat these problems seem mostly to be potentially counterproductive “ostrich-head-in-the-sand” tools that would permit offending tweets to continue largely unabated.
But all social media suffers from these problems to one degree or another, and I feel it is fair to say that no major social media firm really takes hate speech and harassment seriously — or at least as seriously as ethical firms must.
To be sure, all significant social media companies provide mechanisms for reporting abusive posts. Some pair these with algorithms that attempt to ferret out the worst offenders proactively (though hate users seem to adapt to bypass these filters almost as rapidly as the algorithms evolve).
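To illustrate why this is such a losing race, here is a minimal sketch in Python of naive keyword filtering (purely hypothetical, with a placeholder blocklist term; it is not any platform’s actual detection system). The matcher catches the literal term, but trivially obfuscated variants sail right past it:

    import re

    # Hypothetical illustration only: a placeholder blocklist term,
    # not any platform's real terms or real detection logic.
    BLOCKED_TERMS = {"slurword"}

    def naive_flag(post: str) -> bool:
        """Flag a post if any blocked term appears as a whole word."""
        words = re.findall(r"[a-z]+", post.lower())
        return any(word in BLOCKED_TERMS for word in words)

    print(naive_flag("you are a slurword"))         # True  (literal term caught)
    print(naive_flag("you are a sl*rword"))         # False (one character evades it)
    print(naive_flag("you are a s l u r w o r d"))  # False (spacing evades it)

Every time a filter learns a new variant, another trivial transformation appears, which is one reason why human review capacity matters at least as much as smarter algorithms.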
Yet one of the most frequent questions I receive regarding social media is “How do I report an abusive posting?” Another is “I reported that horrible posting days ago, so why is it still there?”
The answer to the first question is fairly apparent to most observers — most social media firms are not particularly interested in making their abuse reporting tools clear, obvious, and plainly visible to both technical and nontechnical users of all ages. Often you must know how to access posting submenus to even reach the reporting tools.
For example, if you don’t know what those three little vertical dots mean, or you don’t know to even mouse over a posting to make those dots appear — well, you’re out of luck (this is a subset of a broader range of user interface problems that I won’t delve into here today).
The second question — why aren’t obviously offending postings always removed when reported — really needs a more complex answer. But to put it simply, the large firms have significant problems dealing with abusive postings at the enormous scale of their overall systems, and the resources that they have been willing to put into reporting and (in some cases) related human review mechanisms have been relatively limited — these are just not profit-center items.
They’re also worried about false abuse reports, of course — either purposeful or accidental — and one excuse used for “hiding” the abuse reporting tools may be to try to reduce those types of reports from users.
All that having been said, it’s clear that the status quo when it comes to dealing with hate speech or harassing speech on social media is no longer tenable.
And before anyone has a chance to say, “Lauren, you’re supposed to be a free speech advocate. How can you say this?”
Well, it’s true — I’m a big supporter of the First Amendment and its protection of free speech.
But what is frequently misunderstood is that this protection applies only to governmental actions against free speech — not to actions by individuals, private firms, or other organizations that are not governmental entities.
This is one reason why I’m so opposed to the EU’s horrific “Right To Be Forgotten” (RTBF) — it’s governments directly censoring the speech of third parties. It’s very wrong.
Private firms, though, most certainly do have the right to determine what sorts of speech they choose to tolerate or support on their platforms. That includes newspapers, magazines, conventional television networks, and social media firms, to name but a few.
And I assert that it isn’t just the right of these firms to stamp out hate speech and harassment on their platforms, but their ethical responsibility to do so as well.
Of course, if the Alt-Right or other hate groups (and certainly the right-wing wackos aren’t the only offenders) want to establish their own social media sites for the subset of hate speech that is not actually illegal — e.g., the “Trumpogram” service — they are free to do so. But that doesn’t mean that the Facebooks, Googles, and Twitters of the world need to permit these groups’ filth on their systems.
Abusive postings involving hate speech and harassment certainly predate the 2016 election cycle, but the election and its aftermath demonstrate that the major social media firms need to start taking this problem much more seriously — right now. And this means going far beyond rhetoric or public relations efforts. It means implementing serious tools and systems that will have a real and dramatic impact on stamping out the postings of the hate and other abuse mongers in our midst today.
–Lauren–
I have consulted to Google, but I am not currently doing so — the opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!