The Right’s (and Left’s) Insane Internet Content Power Grab

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, Section 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming.

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale at which these firms operate, or simply don’t care about such truths when they get in the way of their political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–
