Google’s New AI Principles Are a Model for the World

In the wake of Google’s announcement that they will not be renewing their controversial “Project Maven” military AI contract when it expires next year (“Google — and the Defense Department’s Disturbing ‘Maven’ A.I. Project Presentation Document” – https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document), the company has now published a post describing their AI policy positions going forward: “Artificial Intelligence at Google: Our Principles” (https://www.blog.google/topics/ai/ai-principles).

Since I was on balance critical of Google’s participation in Project Maven but am very supportive of AI overall (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all), I’ve received a bunch of queries from readers asking how I feel about Google’s newly announced statement of AI principles.

“Excellent” is my single-word summary, especially in terms of how balanced, and above all how realistic, the principles are.

AI will be a critical tool going forward, both for humanity and for the global ecosystem itself. And like any tool, reaching all the way back to a chunk of rock on the ground in a prehistoric cave, AI can be used for good purposes, for evil purposes, and in a range of “gray area” scenarios that are more difficult to cleanly categorize one way or the other.

It’s this last set of concerns, especially AI applications with multiple uses, that I’m particularly glad to see Google addressing directly in their principles post.

For those of us who aren’t psychopaths or sociopaths, the fundamental characteristics of good and evil are usually fairly obvious. But as one grows older, it becomes apparent that the real world is not typically made up of black-and-white situations where one or another set of these characteristics exists in isolation; much more often we’re dealing with a complicated kaleidoscope of interrelated issues.

So, to address the point I’ve been asked about most over the last couple of days regarding Google’s AI statement: it is entirely appropriate that Google explicitly notes that they will not be abandoning all aspects of government and military AI work, so long as that work is not likely to cause overall harm.

In a “perfect” world we might not need the military — hell, we might not even need governments. But this is not a perfect world, and it’s one thing to use AI as a means to kill ever more people more efficiently, and something else entirely to use AI defensively to help protect against the genuine evils that still pervade this planet, as Google says it will do.

AI is still in its relative infancy, and attempts to accurately predict its development (beyond the very short term) are likely doomed to failure. AI principles such as Google’s will always, by necessity, be works in progress, and Google explicitly acknowledges as much.

But ethical firms and ethical governments around the world could today do themselves, their employees, and their citizens proud by accepting and living by AI principles such as those that Google has now announced.

–Lauren–
