UPDATE (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided
UPDATE (May 10, 2018): Reacting to the controversies surrounding Duplex, Google today announced that the system will identify itself to called parties. I applaud this decision.
– – –
I’ve already received many reactions to my posting yesterday of “Google’s New ‘Duplex’ Voice System Creates Some Troublesome Questions” (https://lauren.vortex.com/2018/05/08/googles-new-duplex-voice-system-creates-some-troublesome-questions), which discussed Google’s new system capable of conducting telephone conversations that may be indistinguishable from human callers. This is fascinating work on Google’s part and a potentially very positive use of their AI assets and expertise, but as I noted, it also carries significant risks of confusion and potential abuse.
With no exceptions so far, these reactions have confirmed what I suspected — that people are just fine with talking to automated systems, so long as they are aware that they are not talking to another person. They react viscerally and negatively to machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line. To use the vernacular: “Don’t try to con me, bro!”
Luckily, there’s a relatively simple way to fix this problem at this early stage — well before it becomes a big issue impacting many lives.
I believe that Google should require all production environment calls from its Duplex system (essentially, all calls not made for internal test purposes) to include an initial verbal announcement to the called party that they have been called by an automated system, not by a human being, with the exact wording of that announcement to be determined.
I would also urge that any similar systems from other firms whose calls could easily be confused with those of human callers provide similar start-of-call announcements.
I’d enormously prefer that all affected firms agree to provide these warnings voluntarily, but would support carefully drafted legislation or other regulatory actions to require their use if a voluntary approach proves to be insufficient.
Getting ahead of the potential pitfalls of this technology is crucial to help ensure that these systems are widely accepted and able to achieve their intended beneficial results, rather than turning into a painful policy flashpoint down the line.