Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

We find ourselves at a “teachable moment” in the history of Artificial Intelligence — we should not squander this opportunity. The global controversy that erupted over the last few days regarding Google’s AI-based “Duplex” phone-calling system can be viewed as a harbinger of things to come if a holistic approach to AI is not a fundamental design factor from the ground up.

The Duplex controversy should subside, at least for the moment. Google has now announced that calls made by Duplex will be identified as such to the called party, exactly what I had urged in “Calls From Google’s ‘Duplex’ System Should Include Initial Warning Announcements” (https://lauren.vortex.com/2018/05/09/calls-from-googles-duplex-system-should-include-initial-warning-announcements).

While some observers are asserting that Duplex-type technology should be banned — or required to use a “robotic”-sounding voice not easily confused with a human — I consider both of those suggestions to be extreme, unnecessary, and counterproductive. This kind of technology can have a range of positive applications. I am very much a supporter of AI research (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

We want these voices to be as humanlike as possible, precisely so that they are as understandable as possible. Full disclosure that the calls are from AI-based Assistants is completely adequate to assuage most related concerns, though how this tech might be abused by users in ways that make the robocalling flood even worse is still an open question.

How did we get here? Basically, while Google’s blog post regarding Duplex made a vague mention of transparency, the demos of the system played at Google I/O appeared to show called parties who had absolutely no idea that they were talking to an AI. That’s mightily impressive as a showcase for Google’s AI advances.

But it was also immediately obvious to me — as soon as I heard those demos — that this was going to quickly blow up into a big, public mess that could have been easily avoided, because there was bound to be an emotional, visceral, negative reaction from many observers to hearing a human “manipulated” in that manner by an AI system. It strikes at the very heart of many persons’ fears of “intelligent” machines — ranging from sci-fi fantasies to legitimate real-world concerns.

All Google needed to do to avoid this entire PR nightmare was to announce at I/O that the system would disclose itself to called parties, and play demos that included such disclosures.

Why didn’t Google do this? I don’t have any inside scoop, but I can make some fairly informed guesses.

Google still considers Duplex to be an experiment. That’s a valid point, but it only takes us so far. If you’re only showing the tech internally, that’s one thing. But as soon as you make a public demo, you’ve lit the fuse, so to speak.

Several reporters who called me about this story asked me variations of the same question: “How come Google didn’t realize how much negative reaction there would be to those demos?” 

And my answer is that while Google’s heart is almost always in the right place when it comes to technologies, sometimes they get a bit, shall we say, “overenthusiastic” — which can make it more difficult to anticipate the non-engineering aspects of a system and how it will be perceived.

This is not just a Google issue. It’s an endemic problem at many engineering-oriented firms, and as a techie myself I’ve sometimes had to push back against my own overly enthusiastic feelings about new technologies.

Over the years, I’ve been in the position numerous times of trying to point out these kinds of problems on the Net and elsewhere. It’s a tough sell. We engineering types can be tightly focused on solving engineering problems, and we don’t like being told that perhaps we should broaden our focus a bit. And then there’s the classic Silicon Valley attitude of pushing ahead whenever possible and just apologizing later when things go wrong.

I’m concerned that these are no longer viable strategies — or at least that they’re ever more risky ones in our toxic political environment, where regulators and politicians increasingly see reason and technology as enemies to attack for their own political gains.

Firms like Google and so many others should be subjecting their product launches to more than just engineering and privacy reviews. We’ve reached a stage where what we might call “ethical” reviews are needed as well, and just as routinely. There are many different ways that these could be accomplished, and I won’t detail them here right now.

But without these kinds of formal review mechanisms to consider the ethical impacts of these technologies, the probability of public and political blowback against both potentially very beneficial AI and other socially positive tech projects will increase dramatically.

–Lauren–