Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

UPDATE (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 10, 2018): Reacting to the controversies surrounding Duplex, Google today announced that the system will identify itself to called parties. I applaud this decision.

– – –

I’ve already received many reactions to my posting yesterday of “Google’s New ‘Duplex’ Voice System Creates Some Troublesome Questions” (https://lauren.vortex.com/2018/05/08/googles-new-duplex-voice-system-creates-some-troublesome-questions) — discussing Google’s new system capable of conducting telephone conversations that may be indistinguishable from human callers. This is fascinating work on Google’s part and a potentially very positive use of their AI assets and expertise, but as I noted also carries significant risks for confusion and potential abuse.

With no exceptions so far, the sense of these reactions has confirmed what I suspected — that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line.  To use the vernacular: “Don’t try to con me, bro!”

Luckily, there’s a relatively simple way to fix this problem at this early stage — well before it becomes a big issue impacting many lives.

I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google’s Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being — the exact wording of that announcement to be determined.

I would also urge that any similar systems from other firms that could easily be confused with human callers provide similar start-of-call announcements.

I’d enormously prefer that all affected firms agree to provide these warnings voluntarily, but would support carefully drafted legislation or other regulatory actions to require their use if a voluntary approach proves to be insufficient.

Getting ahead of the potential pitfalls in this technology is crucial to help assure that these systems are widely accepted and able to achieve their intended beneficial results, rather than turning into a painful policy flashpoint down the line.

–Lauren–

Google’s New “Duplex” Voice System Creates Some Troublesome Questions

UPDATE (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 9, 2018): Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

– – –

Google today demonstrated a project of theirs under development called “Duplex” — which permits their Google Assistant infrastructure to conduct unassisted two-way voice phone calls for tasks like making appointments. In their demos, employees at the firms being called apparently had no idea that they were talking to a machine. Very impressive, indeed.

But though I’m personally a fan of Assistant — I have several Assistant “Home” devices myself, plus Assistant on my phones — something about those demos today made me immediately uneasy. When I mentioned this on my mailing lists during the day, I was surprised by how many people responded with variations of “I’d be upset if I was conned into thinking that a real person was calling me when it was really a computer.”

And yeah, it would bug me too. I’m not completely sure why. I suspect it’s an aspect of human nature — and Google tends to occasionally have something of a blind spot in that respect. My guess is that most people don’t much mind talking to machines so long as they know that they’re talking to machines. But Duplex even throws in natural sounding “uh” utterances and such — technically perhaps sometimes to cover processing delays, but there’s no denying that humans would tend to perceive these as direct attempts to convince them that a human was at the other end of the line.

There was something else. I knew that I had seen something like this before, in an old sci-fi movie. I couldn’t find references on Google that matched, so I had to use my own unaided brain for the lookup. I finally got it.

In the 1977 film “Demon Seed,” a woman is held captive in her own home by an advanced AI system created by her estranged husband, a system in control of all the smart windows, doors, and other appliances of the house. Her husband isn’t present and has no idea that this is happening. At various points in the plot, the system makes video phone calls and interacts with persons at the front door via an intercom, in all cases convincing them that they are actually speaking with the woman herself and that all is well.

Leaving aside the sci-fi melodrama, there are some interesting questions here. Is it ethical for computerized systems to interact with humans in a manner that pretends to be human? Even if it’s ethical, is it desirable? What are the possible blowbacks and undesirable possibilities?

Perhaps we can ask the same sort of question traditionally asked of undercover cops: “Are you a police officer?” In our case, we wonder how the system would respond if the called party asked Duplex “Are you a human being?”

Of course it’s relatively early in the development of this tech. But perhaps it’s not too early to begin thinking about these questions. Google itself suggested that an appropriate level of “transparency” would be needed for these systems — I wholeheartedly agree. But what does that actually mean? Perhaps an announcement at the start of each call informing the called party that they were talking to an automated system? The various implications are decidedly nontrivial.

And we must also be thinking about how such systems could be purposely misused. Obviously not in the dramatic manner of that film I mentioned above, but in other ways that involve scams and frauds of various kinds, perhaps as a sort of evil evolution of the current robocalling scourge.

Yet technology itself is never actually good or evil — it’s how we choose to use tech that sets these parameters. And there are all sorts of ways that Duplex could do a lot of good — if it is deployed in manners that help people without trying to fool them, and if it has sufficient safeguards to minimize the risks of abusive applications.

Google has their work cut out for them on this one, because Duplex is a quintessential example of an AI system where getting it working is only half the battle. The other half is assuring to the greatest degree possible that it’s only used for good purposes, and not in evil ways.

–Lauren–

Phishing Risks in Gmail’s New “Confidential Mode”

In my previous post — “New Gmail Features That May Be ILLEGAL for You to Use!” (https://lauren.vortex.com/2018/05/05/new-gmail-features-that-may-be-illegal-for-you-to-use) — I noted that I’m very pleased overall with Google’s new Gmail, but also explained how using specific features of the new Gmail — in particular “confidential mode” — may have negative legal implications for some users, especially those with legally-mandated data retention requirements.

But there’s another potential issue with “confidential mode” that could be a much broader risk to many more persons — the possibility that this mode will be leveraged by crooks for a new wave of phishing emails designed to spread malware and steal personal information.

The problem arises because non-Gmail users cannot directly receive Gmail confidential mode messages. Instead — as reported in the article that I linked to in my previous post — when a Gmail user wants to send such a message to a non-Gmail user, the non-Gmail user is sent a link that, when clicked, takes them to Google’s servers where they can read the confidential mode message in their browser.

The potential risks for any service that operates in this way are obvious.

Those of us working on Internet security and privacy have literally spent many years attempting to train users to avoid clicking on “to read the message, click here” links in emails that they receive. Criminals have simply become too adept at creating fraudulent emails that lead to phishing and malware sites.

Our efforts have had only limited success. Millions of busy users click on such fake links even in messages that are not particularly “well crafted” and that seem “obviously” fake to those of us who have long been dealing with such issues.

Any email system that depends on users clicking on emailed links to access remotely hosted email messages is likely to be subject to such criminal abuses. No matter how much diligence Google employs to try to avoid users being fooled by phishing messages masquerading as links to Gmail confidential mode messages, we can be reasonably sure that this could ultimately represent a significant new attack vector for phishing operations, one that will successfully ensnare large numbers of Internet users over time.

Given the relatively poor success that educational and message authentication efforts have had to date in these respects, there are no obvious solutions immediately apparent if Google operates their Gmail confidential mode in the manner described.

–Lauren–

New Gmail Features That May Be ILLEGAL for You to Use!

UPDATE (May 7, 2018): Phishing Risks in Gmail’s New “Confidential Mode”

– – –

I’m a fan of Google’s new Gmail — after switching over I’ve never felt the need to switch back to classic Gmail, not even once (“My Initial Impressions of Google’s New Gmail User Interface” – https://lauren.vortex.com/2018/04/25/my-initial-impressions-of-googles-new-gmail-user-interface).

An aspect of the new Gmail that has been getting a lot of attention relates to the various features associated with Gmail’s new “confidential” mode, which includes mechanisms that can enable some user-specified limits (with various provisos regarding their effectiveness in assorted scenarios) on how Gmail messages are handled locally and by recipients, including a kind of “email expiration” system, among other features. A good summary of confidential mode functions is at: https://mashable.com/2018/04/27/new-gmail-expiring-emails-confidential-mode on the “Mashable” site.

The features of confidential mode come with various positive attributes and a number of limitations, but for some users they may also be something else — they may be ILLEGAL to actually use!

It doesn’t take rocket science to see why.

Many people who use Gmail (and/or correspond with persons who use Gmail) work in fields or for organizations that have legally-mandated data retention requirements — and these often include all email communications.

And it’s important to note that we’re not talking only about the Gmail users themselves. A non-Gmail user receiving a confidential mode email from a Gmail user would apparently receive only a link to the actual message on Google’s servers — and that message could vanish later depending on the Gmail user’s settings (the recipients would have no control over this). Yeah, users could always screenshot or photograph these messages, but the legal implications of suddenly vanishing messages are hardly subtle ones.

While Gmail users in most instances have always had the ability to delete their own copies of emails, the ability to suddenly withdraw a recipient’s access to a sent email, on either Gmail or non-Gmail systems, will be new to most users, and the complex ramifications of using such features may not be obvious to many of them.

It is imperative that both users of regular Gmail and Google’s paid G Suite offerings are adequately warned by Google regarding the potential legal pitfalls of using confidential mode. I’d suggest “in your face” warnings at least the first time that users attempt to use associated features.

Many independent professionals who may be subject to legal restrictions in these contexts use regular Gmail. The situation is even more complicated with G Suite. Its users include both businesses large and small, and all manner of other organizations including significant numbers of government users — any of whom may be using their own domain names for G Suite emails.

It will be critical not only that G Suite users and account administrators are directly warned about the implications of using these confidential mode features, but also that administrators have the means to restrict the use of these features by any or all of the users associated with those accounts.

This is definitely an area where proactive educational and technical steps by Google now could potentially avoid a lot of hassle — or much worse — for a significant number of their users down the line.

–Lauren–

Confused Google Announcements Regarding Retirement of Google+ Apps?

I’m a heavy user of Google+. I’ve never been a Facebook user, and I’ve been on Google+ since the first day of beta availability back in 2011. So naturally many G+ users contact me when they have relevant concerns.

I’m now getting emails and other messages from concerned, alarmed, and possibly confused G+ users — and I’m seeing public posts and articles that may be similarly confused — about a series of emailed warnings that G is sending out about G+ apps. I’ve also received those emailed warnings from Google directly. There’s a lot of chatter that these messages are signaling a step toward killing off G+ (which given recent updates to G+ overall, seems unlikely to me at this time). The relevant portion of the emails is worded like this:

Effective May 20, 2018, Google+ for Android version 7.5 and older and Google+ for iOS version 5.0 and older will no longer work. Note that if you chose to have the photo backup setting enabled in Google+, your photos will no longer be backed up.

Many people are assuming that this means the G+ app will stop working for Android versions 7.5 and below. But this doesn’t make any sense to me. I’ve never even seen an Android version 7.5 system. And as of February this year, Android versions 5.0 through 7.1.x reportedly account for around 81.2% of Android usage.

I’ve personally been assuming that these messages actually are referring to old versions of the G+ app, not of Android versions, but the wording is definitely unclear. I can’t believe that G actually would kill the G+ app for the vast majority of Android users.

There’s definitely growing public confusion about this, and clarification from Google — one way or another — would be greatly appreciated.

–Lauren–