Don’t (For Now) Use Google’s New “Perspective” Comment Filtering Tool

I must be brief today, so I’ll keep this short and get into the details in another post. Google has announced (with considerable fanfare) public access to their new “Perspective” comment filtering system API, which uses Google’s machine learning/AI system to determine which comments on a site shouldn’t be displayed due to perceived high spam/toxicity scores. It’s a fascinating effort. And if you run a website that supports comments, I urge you not to put this Google service into production, at least for now.

The bottom line is that I view Google’s spam detection systems as currently too prone to false positives, enabling a form of algorithm-driven “censorship” (for lack of a better word in this specific context), especially by “lazy” sites that might accept Google’s comment scores as gospel.

In fact, Google’s track record in this context remains problematic.

You can see this even from the examples that Google provides, where it’s obvious that any given human might easily disagree with Google’s machine-driven comment ranking decisions.

And as someone who deals with significant numbers of comments filtered by Google every day (I have nearly 400K followers on Google+), I can tell you with considerable confidence that the problem isn’t “spam” comments being missed; it’s completely legitimate non-spam, nontoxic comments that are inappropriately marked as spam and hidden by Google.

Every day, I plow through lots of these (Google makes them relatively difficult to find and see) so that I can “resurface” completely reasonable comments from good people who have been falsely flagged as toxic spammers by Google’s spam detection.

This is a bad situation, and widespread use of “Perspective” at this stage of its development would likely spread this problem around the world.

In fact, much worse than letting a spam or toxic comment through is the AI-based muzzling of a completely innocent comment and commenter, falsely condemned by the machine where a human would not have done so.

The “vanishing” of innocent, legitimate comments by overaggressive algorithms can lead to misunderstandings, confusion, and a general lack of trust in AI systems. That kind of trust failure can be dangerous for users and the industry alike, because AI’s potential to improve our world is indeed very real.

I’ll have more to say about this later, but for now, while you should of course feel free to experiment with the Google Perspective API, I urge you not to deploy it to any running production systems at this time.
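
If you do want to experiment, here’s a minimal sketch of querying the Perspective comment analyzer endpoint for a toxicity score, using only the Python standard library. The API key is a placeholder you’d have to request from Google yourself, and the v1alpha1 endpoint, attribute names, and response fields reflect the API as announced; they may well change.

```python
# Minimal sketch: ask the Perspective API (Comment Analyzer endpoint)
# for a TOXICITY score on a single comment. For experimentation only.
# Endpoint and field names reflect the v1alpha1 API and may change.

import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder -- request your own key from Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for a comment."""
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read().decode("utf-8"))
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Experiment only -- do not wire this score directly into
    # automatic comment hiding on a production site.
    print(toxicity_score("You are a wonderful person."))
```

If you do play with it, I’d suggest logging the scores alongside your own human judgments of the same comments rather than acting on them automatically; that comparison will show you the false positive problem far better than any single example.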

Be seeing you.

–Lauren–
