With all of the current discussions regarding the false and fake news glut on the Internet — often racist in nature, some purely domestic in origin, some now believed to be instigated by Putin’s Russia — it’s obvious that the status quo for dealing with such materials is increasingly untenable.
But what to do about all this?
As I have previously discussed, my general view is that more information — not less — is the best remedy for distortions that may well have turned the 2016 election on its head.
Labeling, tagging, and downranking clearly false or fake posts can help keep outright lies from being treated as equivalent to truth in social media and search engines. These techniques also avoid actually removing the lying items themselves, and so sidestep the “censorship” issues that removal might raise (though private firms quite appropriately are free to determine what materials they wish to permit and host — the First Amendment only applies to governmental restraints on speech in the USA).
How effective might such labeling be? Think about the labeling of “fake news” in the same sort of vein as the health warnings on cigarette packs. We haven’t banned cigarettes. Some people ignore the health warnings, and many people still smoke in the USA. But the number of people smoking has dropped dramatically, and studies show that those health warnings have played a major role in that decrease.
Labeling fake and false news to indicate that status — and there’s a vast array of such materials for which no reasonable argument can be made that they are true — could have a dramatic positive impact. Controversial? Yep. Difficult? Sure. But I believe that this can be approached gradually, starting with top trending stories and top search results.
A cure-all? No, just as cigarette health warnings haven’t been cure-alls. But many lives have still been saved. And the same applies to dealing with fake news and similar lies masquerading as truthful posts.
Naysayers suggest that it’s impossible to determine what’s true or isn’t true on the Internet, so any attempts to designate anything that’s posted as really true or false must fail. This is nonsense. And while I’ve previously noted some examples (man landing on the moon, Obama born in Hawaii), it’s not hard to find all manner of politically motivated lies that are just as easy to ferret out.
For example, if you currently do a Google search (at least in the USA) for:
southern poverty law center
You will likely find an item on the first page of results (even before some of the SPLC’s own links) from online Alt-Right racist rag Breitbart — whose traditional overlord Steve Bannon has now been given a senior role in the upcoming Trump administration.
The link says:
FBI Dumps Southern Poverty Law Center as Hate Crimes Resource
Actually, this is a false story, dating back to 2014. It’s an item that was picked up from Breitbart and republished by an array of other racist sites that hate the SPLC’s good work fighting both racism and hate speech.
Now look elsewhere on that page of Google search results — then on the next few pages. There’s no mention of the fact that the original story is false, or that the FBI itself issued a statement noting that it was still working with the SPLC on an unchanged basis.
Instead of anything to indicate that the original link is promoting a false story, what you’ll mostly find on succeeding pages is more anti-SPLC right-wing propaganda.
This situation isn’t strictly Google’s fault. I don’t know the innards of Google’s search ranking algorithms, but I think it’s a fair bet that “truth” is not a major signal in and of itself. More likely there’s an implicit assumption — one that no longer necessarily holds — that truthful items will tend to rise to the top of search results via the other signals feeding the ranking mechanisms.
In this case, we know with absolute certainty that the original story on page one of those results is a continuing lie, and the FBI has confirmed this (in fact, anyone can look at the appropriate FBI pages themselves and categorically confirm this as well).
Truth matters. There is no equivalency between truth and lies, or otherwise false or faked information.
In my view, Google should be dedicated to the promulgation of widely accepted truths whenever possible. (Ironic side note: The horrible EU “Right To Be Forgotten” — RTBF — that has been imposed on Google, is itself specifically dedicated to actually hiding truths!)
As I’ve suggested, the promotion of truth over lies could be accomplished both by downranking of clearly false items, and/or by labeling such items as (for example) “DEEMED FALSE” — perhaps along with a link to a page that provides specific evidence supporting that label (in the SPLC example under discussion, the relevant page of the FBI site would be an obvious link candidate).
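To make the mechanics concrete, here is a minimal illustrative sketch of the downrank-and-label idea. Everything in it — the names, the multiplicative penalty, the "DEEMED FALSE" label field, the evidence link — is my own assumption for illustration, not any search engine's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical multiplicative penalty applied to items deemed false.
FALSE_PENALTY = 0.2

@dataclass
class SearchResult:
    title: str
    score: float                        # base relevance score from other signals
    deemed_false: bool = False          # set by a (hypothetical) fact-check signal
    evidence_url: Optional[str] = None  # link to evidence supporting the label
    label: Optional[str] = None

def apply_truth_signal(results):
    """Downrank items deemed false and attach a visible label,
    keeping the evidence link available for display alongside it."""
    for r in results:
        if r.deemed_false:
            r.score *= FALSE_PENALTY
            r.label = "DEEMED FALSE"
    # Re-sort so downranked items fall lower in the result list.
    return sorted(results, key=lambda r: r.score, reverse=True)

results = apply_truth_signal([
    SearchResult("FBI Dumps SPLC as Hate Crimes Resource", 0.9,
                 deemed_false=True,
                 evidence_url="https://www.fbi.gov/"),  # placeholder link
    SearchResult("Southern Poverty Law Center (official site)", 0.8),
])
for r in results:
    print(r.title, "|", r.label)
```

The key design point is that the false item is neither removed nor hidden: it stays in the results, lower down, carrying a label and a pointer to the evidence behind that determination.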
None of this is simple. The limitations, dynamics, logistics, and all other aspects of moving toward promoting truth over lies in social media and search results will demand an enormous ongoing effort — but a critical one.
The fake news, filter bubbles, echo chambers, and hate speech issues that are now drowning the Internet are of such a degree that we need to call a major summit of social media and search firms, experts, and other concerned parties on a multidisciplinary basis to begin hammering out practical industry-wide solutions. Associated working groups should be established forthwith.
If we don’t act soon, we will be utterly inundated by the false “realities” that are being created by evil players in our Internet ecosystems, who have become adept at leveraging our technology against us — and against truth.
There is definitely no time to waste.
–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!