UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks, Google, for doing the right thing for Google, Googlers, and the community at large.
UPDATE (May 31, 2018): Google and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document
– – –
Many years ago, I was the systems guy who ran the early UNIX minicomputers in the basement of Santa Monica’s RAND Corporation. While RAND at the time derived the vast majority of its income from Department of Defense contracts, I was there despite my lifelong refusal to work directly on military-related projects (to the significant detriment of my own income, I might add). RAND spoke truth to power. DoD could contract with RAND for a report on some given topic, but RAND wouldn’t skew a report to reach results that the contractor had hoped for. I admired that.
One midday I was eating lunch in an open patio between the offices there, chatting with a couple of the military research guys. At the time, one focus of DoD interest was use of mainframe and minicomputer systems to analyze battlefield data, such as it was back then. My lunchmates assured me that their work was all defensive in nature.
I asked how they could be sure that the same analytical systems they intended for defense couldn’t also be used by the military for actually killing people. “We have to trust them,” came the reply. “The technology is inherently dual use.”
It seemed to me that battlefield data analysis was fundamentally different from the DoD-funded projects I also worked on — with ARPANET being the obvious example. Foundational communications research is not in the same category as calculating how to more efficiently kill your enemy. At least that’s how I felt at the time, and I still feel that way. There’s nothing inherently evil in accepting money from DoD — the ethical issues revolve around the specifics of the projects involved.
Fast forward to the controversy that has arisen today, about which I’ve been flooded with queries — word that Google has been engaged in “Project Maven” for DoD, using Google AI/Machine Learning tech to analyze footage from military drones. Apparently this wasn’t widely known even internally at Google, until the topic recently found its way to internal discussion groups and then leaked to the public. Needless to say, there reportedly has been considerable internal controversy about this.
“How do you feel about this, Lauren?” I’m being asked.
Since I frequently play armchair ethicist, I’ve been giving this question a lot of thought today.
The parallels with that lunch discussion at RAND so long ago seem striking. The military wanted to analyze battlefield data back then, and they want to analyze military drone data now.
There are no simple answers.
But we can perhaps begin with the problem of innocent civilian deaths resulting from U.S. drone strikes. We know that the designated terrorist targets are frequently purposely embedded in civilian areas, and often travel with civilians who have little or no choice in the matter — such as children and other family members.
While the Pentagon (as it did during the Vietnam War) makes a grand show of body counts, it’s not clear that most of these drone strikes have much long-term anti-terrorism impact. The targets are frequently fungible — kill one leader and another moves right in. Liquidate one bomb maker and the position is quickly filled by another.
So, ethical question #1: Are these drone strikes justifiable at all? To answer this question honestly, we must of course consider the rate of collateral civilian deaths and injuries, which are sure to inspire further anti-U.S. rhetoric and attacks.
My personal belief is that in most cases — at least to the extent that we in the public are aware — the answer to this question is generally no.
Which brings us to ethical question #2 (or rather, a set of questions): Does supplying advanced image processing and analysis systems for military drone data fall into an ethically acceptable category, provided that such analysis is not specifically oriented toward targeting for lethal operations? Can it be reasonably argued that more precise targeting could also help to prevent civilian casualties, even when those civilians are in immediate proximity to the intended targets? Or can providing such facilities be ethical even when direct lethal operations are known in advance to be the likely result, in the service of currently stated U.S. interests?
And after all, much of our technology today can be easily repurposed in ways that we technologists never intended — for example, enabling oppressive governments to surveil and censor their own citizens.
Yet the immense potential power of rapidly advancing AI and Machine Learning systems does cast these kinds of issues in a new and qualitatively different light. And that’s even if we leave aside the business-based analysis that some firms might make, noting that if they don’t provide the services, some other company will do so anyway, and get the contracts as well as the income.
I know absolutely nothing about Google’s participation in Project Maven other than what I’ve seen in public sources today.
But to try to address the gist of my own questions from just above, based on what I know right now, I believe that Google has a significant ethical quandary on its hands in this regard.
I personally doubt that this kind of powerful tech can be constrained through contractual relationships to purely defensive use. I also feel that the decision regarding whether or not any given firm is willing to accept that its technology may be used for lethal purposes is one that should be made “eyes wide open” — and warrants nothing less than a significant level of company-wide consensus before proceeding.
It has been ages since I even thought about that long-ago lunch conversation at RAND. It’s indeed disquieting to be thinking about it again today.
Be seeing you.
–Lauren–