Google — and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after it expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks, Google, for doing the right thing for Google, Googlers, and the community at large.

– – –

A few months ago, in “The Ethics of Google and the Pentagon Drones” – https://lauren.vortex.com/2018/03/06/the-ethics-of-google-and-the-pentagon-drones – I discussed some of the complicated nuances that can come into play when firms like Google engage with military contracts that are ostensibly for defensive purposes, but could potentially lead to offensive use of artificial intelligence technologies as well. This is not a simple matter. I was myself involved with Defense Department projects many years ago (including the Internet’s ancestor, ARPANET, itself), as I explained in that post.

The focal point for concerns inside Google in this regard (triggering significant internal protests and some reported resignations) revolves around the U.S. Department of Defense (DoD) “Project Maven” — aimed at using A.I. technology for drone image analysis, among other possibilities.

Now a 27-page DoD presentation document regarding Maven is in circulation, and frankly it is discomforting and disturbing to view. It is officially titled:

“Disruption in UAS: The Algorithmic Warfare Cross-Functional Team (Project Maven)”

And it sends a chill down my spine precisely because it seems to treat the topic rather matter-of-factly, almost lightheartedly.

There are photos of happy surfers. The project patch features smiling, waving cartoon robots who would fit right into an old episode of “The Jetsons” — with a Latin slogan that roughly translates to “Our job is to help.” Obviously DoD has learned a lesson from that old NSA mission patch that showed an enormous octopus with its tentacles draped around the Earth.

You can see the entire document here:

https://www.vortex.com/dod-maven

I stand by my analysis in my post referenced above regarding the complicated dynamics of such projects and their interplay with technology firms such as Google.

However, after viewing this entire Project Maven document, I have a gut feeling that long-term participation in this project will not turn out well for Google overall.

To be sure, there will likely be financial gains from the resources provided to DoD for this project — but at what cost in goodwill among employees inside the company, and in potentially negative impacts on the firm’s public image overall?

Certainly the argument could be made that it’s better that a firm with an excellent ethical track record like Google participate in such projects, rather than only traditional defense contractors — some of whom have a long history of profiting from wars with little or no regard for ethical considerations.

But over the years I’ve seen good guys get trapped by that kind of logic, and once deeply immersed in the battlefield military-industrial complex it can be difficult to ever extricate yourself, irrespective of good intentions.

Thankfully from my standpoint, this isn’t a decision that I have to make. But while I don’t claim to have a functional crystal ball, I’ve been around long enough that my gut impressions regarding situations like this have a pretty good track record.

I sincerely hope that Google can successfully find its way through this potential minefield. For a great company like Google with so many great employees, it would be a tragedy indeed if issues like those related to Project Maven did serious damage to Google and to relationships with Googlers going forward.

–Lauren–

Calls for a Google Ombudsman — from Nine Years Ago!

Back in 2009, “Techdirt” posted “Does Google Need An Ombudsman?” – https://www.techdirt.com/articles/20090302/0125093942.shtml – excerpted below. Here we are nine years later, and that need is demonstrably far greater now! “Techdirt” back then was referring to some of the earliest of what would ultimately be many posts of mine about this topic.

– – – – – – – –

Lauren Weinstein has an interesting discussion going on his blog, noting a series of recent incidents where Google has done a spectacularly poor job in communicating with the public — something I’ve been critical of the company about, as well. The company can be notoriously secretive at times, even when being a lot more transparent would help. Even worse, the company is quite difficult to contact on many issues, unless you happen to know people there already. Its response times, if you go through the “official channels,” are absolutely ridiculous (if they respond at all). Weinstein’s suggestion, then, is that Google should set up a small team to play an ombudsman role — basically acting as the public’s “representative” within the company …
         —  Mike Masnick – “Techdirt” – March 3, 2009

 – – – – – – – –

–Lauren–

I Join EFF in Opposing the California SB 1001 “Bots Disclosure” Legislation

The Electronic Frontier Foundation recently announced their opposition to California Senate Bill SB 1001, which mandates explicit “I am not a human” disclosure notices relating to all manner of automated reply, response, and other computer-based systems.

While it’s certainly the case that considerable controversy was triggered by Google’s demonstration earlier this month of their AI-based “Duplex” phone calling system (“Teachable Moment: How Outrage Over Google’s AI ‘Duplex’ Could Have Been Avoided” – https://lauren.vortex.com/2018/05/11/teachable-moment-how-outrage-over-googles-ai-duplex-could-have-been-avoided), Google reacted quickly and appropriately by announcing that production versions of the system would identify themselves to called parties.

Voluntary approaches like this are almost always preferable to legislative “fixes” — the latter all too often attempt to swat flies using nuclear bombs, with all manner of negative collateral damage.

Such is the case with the California Senate’s SB 1001, which would impose distracting, confusing, and disruptive labeling requirements on a vast range of online systems of all sorts, the overwhelming majority of which are obviously not pretending to be human beings in misleading ways.

Even worse, the legislation assumes that these systems are purposely attempting to mislead unless they explicitly identify themselves as non-human. This is a ludicrous assumption — the legislation would be at least a bit more palatable if it were restricted to situations where a genuine intent to mislead is present, such as automated telemarketing phone spam.

The labeling requirements imposed by SB 1001 would make the obnoxious scourge of “We use cookies! Click here if you understand!” banners (the result of misguided EU regulatory actions) look like a walk in the park by comparison.

While automated communications systems will not be immune to misuse, SB 1001 will not stop such abuse and will cause massive confusion for both site operators and users. It is not only premature, it is a textbook example of overly broad and badly written legislation that was not adequately thought through.

SB 1001 should not become law.

–Lauren–

Android In-App Payments Abuse Nightmares: Why Google Is Complicit

UPDATE (May 26, 2018): To be clear about this, I would much prefer that Google had an Ombudsman, an Ombudsman team, or a similar set of roles internally to deal with situations like the one described in this updated post. While I’m glad to try to help when I can, and I greatly thank Google for their quick response in this case and the issuing of a full refund to this Android user, it shouldn’t require public actions from someone outside of Google like me to drive the appropriate resolution of such cases.

UPDATE (May 25, 2018): I’ve just been informed that a full refund has now been issued in the case I discussed in my post below from yesterday. I hope that the general class of issues described therein, especially the presence of expensive in-app “virtual” purchases targeted at children — and the specific operations of Android parental control mechanisms — will still be addressed going forward. In the meantime, my great thanks to Google for quickly doing the right thing in this case of a (now very happy) Android user and her child. 

– – –

Should an Android app aimed at children include a $133 in-app purchase for worthless virtual merchandise? If you’re the kind of crook who runs fake real estate “universities” and stiffs your workers via multiple bankruptcies, you’d probably see nothing wrong with this. But most ethical people might wonder why Google would permit such an abomination. Is the fact that they take a substantial cut of each such transaction clouding their usually excellent ethical sensibilities in this case? Or is Google somehow just unaware, underestimating, or de-emphasizing the scope of these problems?

Complaints regarding in-app Android purchases arrive at my inbox with notable regularity. But one that arrived recently really grabbed my attention. Rather than attempt to summarize it, I’m including extended portions of it below (anonymized and with the permission of the authors).

Beyond the details of how parental locks and Google Play Store payment systems are designed and the ways in which they could be greatly improved, a much more fundamental problem is at the core of these issues. 

I have long considered in-app purchase models to be open to enormous abuse. Where they are used to “unlock” actual capabilities in non-gaming applications, they can play a useful role. But their use for the purchase of worthless “virtual” goods or points in games, especially when total purchases over the lifetime of a game can add up to more than a few dollars, is difficult to justify. It is impossible to justify in games that are targeted at children.

Though apparently entirely legal, it is unconscionable that Google permits these sorts of apps to exploit children and their parents, and then refuses to offer full refunds to parents who have been victimized as a result, particularly when those parents have attempted to diligently use the payment control mechanisms that are currently available.

Not Googley at all. Shame on you, Google.

–Lauren–

 – – – – – –

Hi Lauren,

Thanks so much for considering this. is:  @gmail.com  – she’s fine with you sharing that with Google.

If it can happen to someone of her education what hope do the rest of us have… let alone a 4-year old who can’t read. She says also it’s fine to share her story, fully anonymised … It’s pretty horrible and I suspect also pretty widespread too….

On 05/23 09:16, wrote:

hi Lauren,

I’m sure you’ve heard lots of these kinda stories, so your indulgence is requested. Friend of mine – who holds a doctorate in business, no less – got a bill for around GBP 650 after her 4-year old daughter was able to buy in-game despite parental locks. Or, that’s what my friend thought: Google said that updating the unit could wipe out those locks. And no refund is thus forthcoming. She has contacted the app developers too but obviously they’re happy enough with her money so nothing doing there.

Two things:

(1) Why does an update clear locks? This is surely bad practice?

(2) How the hell can anyone justify a GBP 100 in-app purchase in a game directed at toddlers? This one can’t read yet and as we know, kids are experts on using touchscreen tech before any language skills develop.

P.S. any advice welcome – thanks loads

– – –

My 4 year old loves watching . On (Freeview) one of her favourite cartoons is . She loves this so much that she asked if she could download on my mobile phone to play. I obliged and made the usual checks; no ads, and parental locks engaged. She then asked to download another similar game; . She absolutely loves this game, and for a 4 year old, she’s got pretty good at it… certainly better than me and her big sis.

Again, I made sure parental lock and no ads were ticked within the app…. Last Friday I received a telephone call from the Fraud dept. at  Credit Card, they suspected fraudulent activity on my card – in fact one transaction of GBP 99.99 and another of GBP 1.99 had gone on my card that morning.. and I hadn’t even left my house. I was obviously shocked and concerned – they said the payee was Google Play.

They asked if I had an android phone and whether I let me kids play on the phone. I said yes, but all games are ‘locked down’ so to speak. She asked me to go into my phone to check… to my sheer horror, I saw a long list of  ‘in-app’ purchases made by my 4 year old within the space (mostly) of three weeks. Now I usually check my credit card spends at the end of every month, and I hadn’t got around to checking for this month. I quickly toted up the separate transactions and figured that she had burned GBP 498.88  buying ‘.. GBP 99.99 and ‘ 1.99/ 29.99’ within the game.

I was totally in shock and rightly upset. Of course this wasn’t her fault  – she can’t read.. but how can an app associated with a children’s cartoon think its OK to embed in app purchases within their game … Google have informed me that updating my android can wipe out all the parental locks etc, and I have to check/ re-engage all locks etc after EVERY software upgrade. I contacted Google, and they have disappointingly refunded only GBP 70.00 – stating that its outside their T&Cs and that I need to request a refund from ; the App developers.

I’ve emailed , they haven’t bothered to respond (I’ve waited 72 hours and counting now) . I’ve also contacted Credit Card, and they’ve said that they won’t help me… Surely this is ‘Soft Fraud’ and this is unethical and wrong… so parents please beware. This has and still is really upsetting for both me and my daughter. Please share and just be hyper careful on your phones. Here is most of her spending spree!! 

– – – – – –

Using Google’s Daydream VR Headset for Augmented Reality and Positional Tracking Applications

UPDATE (September 21, 2018): You can experiment with these capabilities using the procedures and apps available here: https://github.com/ChristophGeske/ARCoreInsideOutTrackingGearVr

– – –

When paired with suitable higher-end Google, Samsung, or various other brands of smartphones, the Google Daydream VR headset (currently in its second generation “2017” version, which is the version I’m discussing in this post) offers an extremely inexpensive path for “virtual reality” and other related experiences and experiments (the headset sometimes goes on sale for as little as $50).

In addition, of course, to displaying Daydream-compatible VR apps, it is also possible when a suitable Samsung phone is used (via an interesting sequence of actions) to run many Oculus/Samsung Gear VR headset apps with the Daydream headset as well (feel free to contact me if you’re interested in the details).

At first glance (no pun intended) one would assume that Daydream headsets are unsuitable for “augmented reality” VR applications that require use of the phone camera, since the Daydream flap that holds the phone in place completely blocks the back of the phone and the camera lens.

This also seemingly eliminates the possibility of Daydream headset experimentation with “inside-out” 6DOF (six degrees of freedom) positional tracking applications, which could otherwise leverage the phone’s camera and Google’s “ARCore” platform to provide these capabilities that conventionally have only been available with far more expensive VR headsets.

We might consider cutting a hole through the rather thick flap of the headset (which also includes an integral heat sink — important when the flap is closed), but that’s messy at best, risks accidentally damaging embedded NFC tags, and is dependent on the exact position of the camera lens for any specific phone.

So here’s my alternative, which requires zero modification of the Daydream headset itself and needs only a few simple parts: an elastic strap that holds the phone in place with the flap of the headset left open and the phone’s camera lens exposed for use. The completed strap is simple to install or remove from the headset at any time, since the flap can be pulled outward to create a gap for this purpose.

To view a set of photos showing the assembly sequence and the finished design, please visit:

https://lauren.vortex.com/daydream-mods

I used a piece of elastic that already had a plastic catch of suitable size on the end to hold the elastic in place under the flap hinge. Alternatively, almost anything of similar dimensions could be attached to a strip of elastic to achieve the same result.

You simply slide the completed assembly between the flap and the main body of the headset, strap in the phone, and you’re ready to go. I originally tested this using a metal washer, but decided that even wrapped in tape it posed some risk of scratching the phone, so I printed up a custom-sized plastic washer to use instead. A better-protected metal washer would probably be fine as well.

The elastic holds the phone in place quite snugly, though with enough violent head motion it might be possible to force the phone to slide out from under the elastic. It should be straightforward to slip little barriers on the sides to avoid this, or simply avoid violent head motions! Also keep in mind that you don’t want to apply significant downward pressure to the open flap, since that would risk potentially breaking the plastic supports that keep it from falling further open.

Anyway, it’s really just the elastic, the washer, and several small cable ties!

OK, it’s a hack. No apologies.

If you have any questions, please let me know!

And of course, be seeing you.

–Lauren–

Google’s Lightning-Fast Response to My “Trusted Contacts” Concerns


Very recently I discussed my concerns regarding several issues related to Google’s “Trusted Contacts” service. Trusted Contacts permits users to send their current location data to other users as notifications.

The situation was triggered when I suddenly began receiving such location data notifications from somebody I’d never heard of in Africa. Address typos? Trying to attract my attention for some other reason? I dunno.

But stopping those emailed notifications was easier said than done. It turned out that there was no way to do so from a web page; the only available blocking mechanism was in the Trusted Contacts smartphone app itself, which had to be installed and required enabling Location History, which I don’t ordinarily use. After installing the app (which I otherwise had no personal interest in installing, and which a person without a suitable smartphone could not have done at all), I was ultimately able to stop the notifications. Not a good user experience.

Since then, I’ve already been contacted directly by Google’s privacy and maps teams about these issues, and they’ve now implemented the means for users to easily unsubscribe from such notifications via a web page — without the need for installing an app. Other very useful changes related to the issues that I identified are apparently in the pipeline for availability.

My great thanks to the Google teams involved for so rapidly reaching out to me regarding these matters, and especially for the ultra-quick implementation of the web-based Trusted Contacts notifications unsubscribe tool that is now available to desktop users!

–Lauren–

Google Predictably Makes a Confused Jumble Out of New YouTube, Music Offerings

An old saying suggests that the only inevitabilities are death and taxes.  When it comes to Google, there are a couple more that we can add. Google will likely always have an array of often incompatible and overlapping “chat” programs and systems — and their paid video and music offerings will be a maze of twisty passages, all different.

Google hasn’t disappointed in that respect with the manner in which word has gotten out about their latest paid content changes. The one thing that seems clear is that the brand “YouTube Red” is apparently going away. But after that, everything is about as easy to understand as hieroglyphics prior to the discovery of the Rosetta Stone. 

YouTube Premium. YouTube Music. YouTube Music Premium. And what of Google Play Music (for free, purchased, and uploaded music), which Google, in tweets trying to calm down confused onlookers on Twitter, says is continuing for now?

I tried to figure it all out last night and got a terrible headache that forced me to quit. This morning, it’s all as clear as mud.

There are a couple of things that I’m fairly sure about. At the moment I’m in Google’s “family plan” for $15/month that gives me both YouTube Red and Google Play Music paid services for up to six accounts. I use it mainly for ad-free YouTube viewing and to be able to simultaneously stream different music to different Google Home devices without conflict, from music sources on Play Music and YouTube.

I’ve been led to believe that for existing subscribers of these services under their new names, there are no immediate price changes — though likely that’s coming down the line. It appears that obtaining the same mix of content under Google’s new plans will cost new subscribers more (though they may be able to lock in current prices for a time if they subscribe to the existing plans before the new plans launch reportedly next week).

But how much more will the new services cost going forward? Perhaps the Sphinx could figure it all out. I’ve seen so many different numbers and combinations of services now — not to mention that the future and form of Play Music still seems up in the air — that the only thing seeming certain is uncertainty itself.

I do know that for essentially the same paid mix of video and music content that I receive now from Google, I’d personally probably be willing to pay a wee bit more. But not much more, and not for a more limited set of content. In either of those eventualities, I’d be tempted to drop all of these Google paid content services entirely.

For the moment though, I think that I will sit tight for a bit, and wait for some sort of clarity to hopefully eventually shine its light on this current but predictable Google communications confusion.

Isn’t it nice to have a hobby?

–Lauren–

The Amazing 360 VR “Scoring the Last Jedi” Video


I just watched “Scoring The Last Jedi: a 360 VR Experience” – https://youtu.be/b32Q1nTgHM0 – for the first time, via a Google Daydream VR headset. It’s absolutely stunning, especially if you’re a lifetime fan of film scoring as I am.

Frankly, I was smiling like an idiot through the entire video. Put aside the flight simulators and the games for a moment — if you have the VR hardware to watch this baby (Google Cardboard will work fine too, if your smartphone has a good gyro), it demonstrates — better than anything else I’ve seen so far — what the potential is for VR to transport you almost physically to a different time and place.

Be warned, just watching this on YouTube without a VR headset is like an ant compared with a skyscraper. You really must see it in VR to properly experience this video.

As I’ve said previously, this kind of tech will ultimately either save civilization — or destroy it. It’s really that important.

Trust me on this.

–Lauren–

Warning: Dangerous Fake Emails About Google Privacy Changes

If you use much of anything Google, by now you’ve likely gotten at least one email from Google noting various privacy-related changes. They typically have the Subject:

Improvements to our Privacy Policy and Privacy Controls

and tend to arrive not from the expected simple “google.com” domain, but often from unusual-appearing Google subdomains, for example with addresses like:

privacy-noreply@www3.l.google.com

The notice also includes a bunch of links to various relevant privacy pages and/or systems at Google.

All of this is in advance of the effective date for the European Union’s “GDPR” laws. If you’re not familiar with the GDPR, it’s basically the latest hypocritical move by the EU on their relentless march toward dictating the control of personal data globally and to further their demands to become a global censorship czar — with the ability to demand the deletion of any search engine results around the world that they find inconvenient. Joseph Stalin would heartily approve.

One can assume that Google’s privacy team has been putting in yeoman’s service to meet the EU’s dictatorial demands, and it’s logical that Google decided to make other changes in their privacy ecosystem at the same time, and now is informing users about those changes.

Unfortunately, phishing crooks are apparently already taking advantage of this situation — in particular several aspects of these Google notification emails.

First, the legitimate Google privacy emails going out recently and currently are a veritable flood. It appears that Google is sending these out to virtually every email address ever associated with any Google account since perhaps the dawn of time. I’ve already received approximately 1.3E9 of them. OK, not really that many, but it FEELS like that many.

Some of these are coming in to addresses that I don’t even recognize. This morning one showed up to such a strange address that I had to go digging in my alias databases to figure out what it actually was. It turned out to be so ancient that cobwebs flew out of my screen at me when I accessed its database entry.

Seriously, these are one hell of a lot of emails, and the fact that they may come from somewhat unusual-looking Google subdomains and include links has made them fodder for the crooks.

You can guess what’s happening. Phishing and other criminal types are sending out fraudulent emails that superficially appear to be the same as these legit Google privacy policy notification emails. Of course, some or all of the links in the phishing emails lead not to Google but to various evil traps and personal data stealing tricks.

So please, be extraordinarily careful when you receive what appear to be these privacy notices from Google. With so many real ones going out — with multiples often ending up at the same individual via various redirects and forwarding addresses — it’s easy for fake versions to slip in among the real ones, and clicking on the links in the crooked ones or opening attachments that they include can seriously ruin your day, to say the very least.
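As one rough defensive measure, you can at least verify that a sender’s domain really falls under google.com rather than a lookalike. Here is a minimal Python sketch of such a check; the example addresses (other than the legitimate subdomain style quoted above) are purely illustrative. Keep in mind that the From header itself is trivially forgeable, so this catches only crude fakes; stronger verification relies on the SPF/DKIM/DMARC authentication results that your mail provider records in the message headers.

```python
from email.utils import parseaddr

def is_google_sender(from_header: str) -> bool:
    """Return True if the From address is google.com or a subdomain of it.

    This is only a crude first filter: it rejects lookalike domains such
    as "google.com.example.net", but a forged From header can still pass.
    """
    # parseaddr handles both bare addresses and "Display Name <addr>" forms.
    _, addr = parseaddr(from_header)
    domain = addr.rpartition("@")[2].lower().rstrip(".")
    # Require an exact match or a true subdomain (note the leading dot,
    # which prevents "evilgoogle.com" from slipping through).
    return domain == "google.com" or domain.endswith(".google.com")

# The legitimate-looking subdomain style discussed above passes:
print(is_google_sender("privacy-noreply@www3.l.google.com"))      # True
# A lookalike domain designed to fool a quick glance does not:
print(is_google_sender("privacy-noreply@google.com.example.net"))  # False
```

Again, passing this check proves nothing by itself; it merely filters out the clumsiest fakes before you inspect a message more carefully.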

Take care, all.

–Lauren–

Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

We find ourselves at a “teachable moment” in the history of Artificial Intelligence — we should not squander this opportunity. The global controversy that erupted over the last few days regarding Google’s AI-based  “Duplex” phone calling system can be viewed as a harbinger of things to come if a holistic approach to AI is not a fundamental design factor from the ground up.

The Duplex controversy should be calmed down at least for the moment. Google has now announced that calls made by Duplex will be identified as such to the called party, exactly what I had urged in: “Calls From Google’s ‘Duplex’ System Should Include Initial Warning Announcements” (https://lauren.vortex.com/2018/05/09/calls-from-googles-duplex-system-should-include-initial-warning-announcements).

While there are some observers asserting that Duplex-type technology should be banned — or required to use a “robotic” sounding voice not easily confused with a human — I consider both of those suggestions to be extreme, unnecessary, and counterproductive. This kind of technology can have a range of positive applications. I am very much a supporter of AI research (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

We want these voices to be as humanlike as possible so that they are as understandable as possible. Full disclosure that the calls are from AI-based assistants is completely adequate to assuage most related concerns, though how this tech might be abused by users in ways that make the robocalling flood even worse is still an open question.

How did we get here? Basically, while Google’s blog post regarding Duplex made a vague mention of transparency, their demos of the system played at Google I/O appeared to show called parties with absolutely no idea that they were talking to an AI. That’s mightily impressive as a showcase for Google’s AI advances.

But it was also immediately obvious to me — as soon as I heard those demos — that this was going to quickly blow up into a big, public mess that could have been easily avoided, because there was bound to be an emotional, visceral, negative reaction from many observers hearing a human “manipulated” in that manner by an AI system. It strikes at the very heart of many persons’ fears of “intelligent” machines — ranging from sci-fi fantasies to legitimate real-world concerns.

All Google needed to do to avoid this entire PR nightmare was to announce at I/O that the system would disclose itself to called parties, and play demos that included such disclosures.

Why didn’t Google do this? I don’t have any inside scoop, but I can make some fairly informed guesses.

Google still considers Duplex to be an experiment. That’s a valid point, but it only takes us so far. If you’re only showing the tech internally, that’s one thing. But as soon as you make a public demo, you’ve lit the fuse, so to speak.

Several reporters who called me about this story asked me variations of the same question: “How come Google didn’t realize how much negative reaction there would be to those demos?” 

And my answer is that while Google’s heart is almost always in the right place when it comes to technology, sometimes they get a bit, shall we say, “overenthusiastic” — which can make it more difficult to anticipate the non-engineering aspects of a system and how it will be perceived.

This is not just a Google issue. It’s an endemic problem at many engineering-oriented firms, and as a techie myself I’ve sometimes had to push back against my own overly enthusiastic feelings about new technologies.

Over the years, I’ve been in the position numerous times of trying to point out these kinds of problems, on the Net and elsewhere. It’s a tough sell. We engineering types can be tightly focused on solving engineering problems, and we don’t like being told that perhaps we should broaden our focus a bit. And then there’s the classic Silicon Valley attitude of pushing ahead whenever possible and apologizing later when things go wrong.

I’m concerned that these are no longer viable strategies — or at least that they’re ever more risky ones in our toxic political environment, where regulators and politicians increasingly see reason and technology as enemies to attack for their own political gains.

Firms like Google and so many others should be subjecting their product launches to more than just engineering and privacy reviews. We’ve reached a stage where some sort of what we might call “ethical” reviews are needed as well and just as routinely. There are many different ways that these could be accomplished, and I won’t detail them here right now.

But without these kinds of formal review mechanisms to consider the ethical impacts of these technologies, the probability of public and political blowback against both potentially very beneficial AI and other socially positive tech projects will be dramatically increasing.

–Lauren–

Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

UPDATE: (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 10, 2018): Reacting to the controversies surrounding Duplex, Google today announced that the system will identify itself to called parties. I applaud this decision.

– – –

I’ve already received many reactions to my posting yesterday of “Google’s New ‘Duplex’ Voice System Creates Some Troublesome Questions” (https://lauren.vortex.com/2018/05/08/googles-new-duplex-voice-system-creates-some-troublesome-questions) — discussing Google’s new system capable of conducting telephone conversations that may be indistinguishable from human callers. This is fascinating work on Google’s part and a potentially very positive use of their AI assets and expertise, but as I noted, it also carries significant risks of confusion and potential abuse.

With no exceptions so far, the sense of these reactions has confirmed what I suspected — that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line.  To use the vernacular: “Don’t try to con me, bro!”

Luckily, there’s a relatively simple way to fix this problem at this early stage — well before it becomes a big issue impacting many lives.

I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google’s Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being — the exact wording of that announcement to be determined.

I would also urge that any similar systems from other firms that could easily be confused with human callers provide similar start-of-call announcements.

I’d enormously prefer that all affected firms agree to provide these warnings voluntarily, but would support carefully drafted legislation or other regulatory actions to require their use if a voluntary approach proves to be insufficient.

Getting ahead of the potential pitfalls in this technology is crucial to help assure that these systems are widely accepted and able to achieve their intended beneficial results, rather than turning into a painful policy flashpoint down the line.

–Lauren–

Google’s New “Duplex” Voice System Creates Some Troublesome Questions

UPDATE: (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 9, 2018): Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

– – –

Google today demonstrated a project of theirs under development called “Duplex” — which permits their Google Assistant infrastructure to conduct unassisted two-way voice phone calls for tasks like making appointments. In their demos, employees at the firms being called apparently had no idea that they were talking to a machine. Very impressive, indeed.

But though I’m personally a fan of Assistant — I have several Assistant “Home” devices myself, plus Assistant on my phones — something about those demos today made me immediately uneasy. When I mentioned this on my mailing lists during the day, I was surprised by how many people responded with variations of “I’d be upset if I was conned into thinking that a real person was calling me when it was really a computer.”

And yeah, it would bug me too. I’m not completely sure why. I suspect it’s an aspect of human nature — and Google tends to occasionally have something of a blind spot in that respect. My guess is that most people don’t much mind talking to machines so long as they know that they’re talking to machines. But Duplex even throws in natural-sounding “uh” utterances and such — perhaps technically to cover processing delays at times, but there’s no denying that humans would tend to perceive these as direct attempts to convince them that a human was at the other end of the line.

There was something else. I knew that I had seen something like this before, in an old sci-fi movie. I couldn’t find references on Google that matched, so I had to use my own unaided brain for the lookup. I finally got it.

In the 1977 film “Demon Seed” a woman is held captive in her own home by an advanced AI system created by her estranged husband, and in control of all the smart windows, doors, and other appliances of the house. Her husband isn’t present. He has no idea that this is happening. At various points in the plot, this system makes video phone calls and interacts with persons at the front door via an intercom, in all cases convincing them that they are actually speaking with the woman herself and that all is well.

Leaving aside the sci-fi melodrama, there are some interesting questions here. Is it ethical for computerized systems to interact with humans in a manner that pretends to be human? Even if it’s ethical, is it desirable? What are the possible blowbacks and undesirable possibilities?

Perhaps we can ask the same sort of question traditionally asked of undercover cops: “Are you a police officer?” In our case, we wonder how the system would respond if the called party asked Duplex “Are you a human being?”

Of course it’s relatively early in the development of this tech. But perhaps not too early to begin thinking about these questions. Google itself suggested that an appropriate level of “transparency” would be needed for these systems — I wholeheartedly agree. But what does that actually mean? Perhaps an announcement at the start of each call informing the called party that they are talking to an automated system? The various implications are decidedly nontrivial.

And we must also be thinking about how such systems could be purposely misused. Obviously not in the dramatic manner of that film I mentioned above, but in other ways that involve scams and frauds of various kinds, perhaps as a sort of evil evolution of the current robocalling scourge.

Yet technology itself is never actually good or evil — it’s how we choose to use tech that sets these parameters. And there are all sorts of ways that Duplex could do a lot of good — if it is deployed in manners that help people without trying to fool them, and if it has sufficient safeguards to minimize the risks of abusive applications.

Google has their work cut out for them on this one, because Duplex is a quintessential example of an AI system where getting it working is only half the battle. The other half is assuring to the greatest degree possible that it’s only used for good purposes, and not in evil ways.

–Lauren–

Phishing Risks in Gmail’s New “Confidential Mode”


In my previous post, “New Gmail Features That May Be ILLEGAL for You to Use!” (https://lauren.vortex.com/2018/05/05/new-gmail-features-that-may-be-illegal-for-you-to-use), I noted that I’m very pleased overall with Google’s new Gmail, but also explained how using specific features of new Gmail — in particular “confidential mode” — may have negative legal implications for some users, especially those with legally-mandated data retention requirements.

But there’s another potential issue with “confidential mode” that could be a much broader risk to many more persons — the possibility that this mode will be leveraged by crooks for a new wave of phishing emails designed to spread malware and steal personal information.

The problem arises because non-Gmail users cannot directly receive Gmail confidential mode messages. Instead — as reported in the article that I linked to in my previous post — when a Gmail user wants to send such a message to a non-Gmail user, the non-Gmail user is sent a link that, when clicked, takes them to Google’s servers, where they can read the confidential mode message in their browser.

The potential risks for any service that operates in this way are obvious.

Those of us working on Internet security and privacy have literally spent many years attempting to train users to avoid clicking on “to read the message, click here” links in emails that they receive. Criminals have simply become too adept at creating fraudulent emails that lead to phishing and malware sites.

Our efforts have had only limited success. Millions of busy users click on such fake links even in messages that are not particularly “well crafted” and that seem “obviously” fake to those of us who have long been dealing with such issues.

Any email system that depends on users clicking on emailed links to access remotely hosted email messages is likely to be subject to such criminal abuses. No matter how much diligence Google employs to try to avoid users being fooled by phishing messages masquerading as links to Gmail confidential mode messages, we can be reasonably sure that this could ultimately represent a significant new vector of attack for phishing operations that will successfully ensnare large numbers of Internet users over time.
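To illustrate why eyeballing such links is so unreliable, here’s a minimal Python sketch (all hostnames below are hypothetical, not real phishing domains) showing how a look-alike hostname can render almost identically to the real one while actually pointing somewhere entirely different:

```python
# All hostnames here are hypothetical illustrations, not real phishing sites.

real = "mail.google.com"
fake_typo = "mail.googIe.com".lower()   # capital "I" posing as "l" -> "googie"
fake_homo = "mail.g\u03bfogle.com"      # Greek omicron posing as Latin "o"

def registrable_part(host: str) -> str:
    """Naive "last two labels" extraction (ignores multi-part TLDs like co.uk)."""
    return ".".join(host.split(".")[-2:])

for candidate in (fake_typo, fake_homo):
    same = registrable_part(candidate) == registrable_part(real)
    print(candidate, "->", registrable_part(candidate), "| same as google.com:", same)
# Both render nearly identically to the real hostname on screen,
# but neither is actually under google.com.
```

The point isn’t the particular trick; it’s that no amount of user training reliably defends against links crafted to survive a quick visual check.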

Given the relatively poor success that educational and message authentication efforts have had to date in these respects, there are no obvious solutions immediately apparent if Google operates their Gmail confidential mode in the manner described.

–Lauren–

New Gmail Features That May Be ILLEGAL for You to Use!

UPDATE (May 7, 2018): Phishing Risks in Gmail’s New “Confidential Mode”

– – –

I’m a fan of Google’s new Gmail — after switching over I’ve never felt the need to switch back to classic Gmail, not even once (“My Initial Impressions of Google’s New Gmail User Interface” – https://lauren.vortex.com/2018/04/25/my-initial-impressions-of-googles-new-gmail-user-interface).

An aspect of the new Gmail that has been getting a lot of attention relates to the various features associated with Gmail’s new “confidential” mode, which includes mechanisms that can enable some user-specified limits (with various provisos regarding their effectiveness in assorted scenarios) on how Gmail messages are handled locally and by recipients, including a kind of “email expiration” system, among other features. A good summary of confidential mode functions is at: https://mashable.com/2018/04/27/new-gmail-expiring-emails-confidential-mode on the “Mashable” site.

The features of confidential mode come with various positive attributes and a number of limitations, but for some users may also be something else — they may be ILLEGAL for them to actually use!

It doesn’t take rocket science to see why.

Many people who use Gmail (and/or correspond with persons who use Gmail) work in fields or for organizations that have legally-mandated data retention requirements — and these often include all email communications.

And it’s important to note that we’re not talking only about the Gmail users themselves. A non-Gmail user receiving a confidential mode email from a Gmail user would apparently only receive a link to the actual message on Google’s servers — and that message could vanish later depending on the Gmail user’s settings (the recipients would have no control over this). Yeah, users could always screenshot or photograph these messages, but the legal implications of suddenly vanishing messages are not necessarily particularly subtle ones.

While Gmail users in most instances have always had the ability to delete their own copies of emails, the ability to suddenly withdraw access to a sent email by a recipient on both Gmail or non-Gmail systems will be new to most users, and the complex ramifications of using such features may not be obvious to many of them.

It is imperative that both users of regular Gmail and Google’s paid G Suite offerings are adequately warned by Google regarding the potential legal pitfalls of using confidential mode. I’d suggest “in your face” warnings at least the first time that users attempt to use associated features.

Many independent professionals who may be subject to legal restrictions in these contexts use regular Gmail. The situation is even more complicated with G Suite. Its users include both businesses large and small, and all manner of other organizations including significant numbers of government users — any of whom may be using their own domain names for G Suite emails.

It will be critical not only that G Suite users and account administrators are directly warned about the implications of using these confidential mode features, but also that administrators have the means to restrict the use of these features by any or all of the users associated with those accounts.

This is definitely an area where proactive educational and technical steps by Google now could potentially avoid a lot of hassle — or much worse — for a significant number of their users down the line.

–Lauren–

Confused Google Announcements Regarding Retirement of Google+ Apps?


I’m a heavy user of Google+. I’ve never been a Facebook user, and I’ve been on Google+ since the first day of beta availability back in 2011. So naturally many G+ users contact me when they have relevant concerns.

I’m now getting emails and other messages from concerned, alarmed, and possibly confused G+ users — and I’m seeing public posts and articles that may be similarly confused — about a series of emailed warnings that Google is sending out about G+ apps. I’ve also received those emailed warnings from Google directly. There’s a lot of chatter that these messages are signaling a step toward killing off G+ (which, given recent updates to G+ overall, seems unlikely to me at this time). The relevant portion of the emails is worded like this:

Effective May 20, 2018, Google+ for Android version 7.5 and older and Google+ for iOS version 5.0 and older will no longer work. Note that if you chose to have the photo backup setting enabled in Google+, your photos will no longer be backed up.

Many people are assuming that this means the G+ app will stop working for Android versions 7.5 and below. But this doesn’t make any sense to me. I’ve never even seen an Android version 7.5 system. And as of February this year, the breakdown of Android usage is reportedly around 81.2% for Android versions 5.0->7.1.x.

I’ve personally been assuming that these messages actually are referring to old versions of the G+ app, not of Android versions, but the wording is definitely unclear. I can’t believe that G actually would kill the G+ app for the vast majority of Android users.

There’s definitely growing public confusion about this, and clarification from Google — one way or another — would be greatly appreciated.

–Lauren–

Warning: New European Privacy Law Has Become a Jackpot for Internet Crooks


A quick but important warning. As you may know, a misguided new European Union privacy regulation — the GDPR (General Data Protection Regulation) — is coming into force.

This has prompted many websites to send out confusing notices about new GDPR rules, demands that users click through URLs to accept the new regulations, and a range of other complicated and confusing notifications.

I’m now receiving reports of highly deceptive scam phishing emails that falsely purport to be these kinds of notifications, often including URLs “requiring” you to “accept” these regulations or make other changes on your accounts to supposedly avoid disruption of services.

The crooks have had plenty of time to prepare for this, so many of these scamming emails are extremely well designed. It can be nearly impossible to determine if some of them are legitimate or not, even with careful inspection of the email headers available to most users.
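For the technically inclined, one of the few signals available in those headers is the Authentication-Results field that receiving mail servers add (standardized in RFC 8601). Here’s a minimal Python sketch of pulling the SPF/DKIM/DMARC verdicts out of such a header; the header text below is purely hypothetical. Keep in mind that even “pass” verdicts only show that the sending domain authenticated its own mail, not that the message actually comes from an organization you deal with:

```python
import re

# A hypothetical Authentication-Results header, as a mail client might expose it:
header = ("Authentication-Results: mx.example.net; "
          "spf=pass smtp.mailfrom=news.example.com; "
          "dkim=pass header.d=example.com; dmarc=pass")

def auth_verdicts(raw: str) -> dict:
    """Extract SPF/DKIM/DMARC verdicts from an Authentication-Results header."""
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", raw))

print(auth_verdicts(header))  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```

A scammer sending from their own properly configured domain will pass all three checks, which is exactly why header inspection alone can’t settle the question of legitimacy.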

I strongly recommend that unless you are certain that an email asking or demanding that you take particular actions relating to new privacy rules is legitimate, you do not click on any contained links or open any included attachments.

If the message appears to have come from an entity with whom you regularly deal — and remember that just because an email claims to come from a firm or other organization that you happen to work with, doesn’t mean that the message is actually from them! — you should consider contacting that firm or organization for verification via a known email address or known URL from your own bookmarks — or a URL that you type in yourself. Or call them on the phone, using their public phone numbers (not a phone number from the email). Again, do not click any links in those messages unless you are entirely convinced that they are not fakes!

What a shame that European regulators apparently didn’t feel it necessary to notify users about how easily the GDPR could become an entirely new vector for crime.

–Lauren–

How AI Could Save Us All

“And pray that there’s intelligent life somewhere up in space, ’cause there’s bugger all down here on Earth.” —
“The Galaxy Song” (“Monty Python’s The Meaning of Life” – 1983).

– – –

It’s very popular to trash “artificial intelligence” (AI) these days.

While reasoned warnings regarding how AI-based systems could be abused (and/or generate inappropriately “biased” decisions) are appropriate, various folks in the public eye — some of whom really should know better — have been proclaiming nightmare scenarios of AI relegating us mere humans to the status of pets, slaves, or worse — perhaps “batteries” as in “The Matrix” (1999). Or maybe just fertilizer for decorative displays.

There’s certainly a long history of cinematic representations of “intelligent” systems run amok. Earlier than “The Matrix” we saw a computer lethally hijack a space mission (“Hal” in “2001: A Space Odyssey” – 1968); another computer imprison, rape, and impregnate a woman (“Proteus” in “Demon Seed” – 1977); and a pair of computing systems take over the world (“Colossus” and “Guardian” in “Colossus: The Forbin Project” – 1970). And of course there’s the scorched Earth world of “Skynet” in “Terminator” (1984), and hybrid threats that may be even scarier, like the “Borg” from the “Star Trek” universe. And more, many more.

All of these cultural references have a real impact on how we think about AI today. We’re predisposed to be fearful of systems that we believe might be “smarter” in some ways than we are.

Monty Python may have been partly correct all along. In a world where a moronic creature like Donald Trump can be elected to the most powerful role on the planet, we should probably be seeking out intelligence to augment our own, wherever we can find it.

Seriously, since it could be a long, long time (if ever) before we hear from interstellar civilizations (and Stephen Hawking’s prediction that this might be a seriously losing proposition for humanity could indeed be accurate), we need to concentrate on intelligence augmentation systems that we can build ourselves.

The word “augmentation” is crucial here. The human brain is a marvel in so many creative and imaginative ways. But it’s easily overwhelmed by data, subject to disruptive distractions, and ill-suited to solving critical planetary-scale problems on its own.

The key to a happy coexistence between humans and AI systems — even advanced AI systems — is to keep in sharp focus where we excel and where the AI systems that we develop can be most effectively and successfully deployed.

Two ways that we can get into trouble are by trying to use AI and “expert systems” as shortcuts to solve problems for which they aren’t actually suited, or by assuming that the data that we provide to these systems is always accurate and fair, when in some cases it’s actually biased and unfair (we’ve seen this problem already occur in some systems that attempt to predict criminal recidivism, for example). The computing adage “garbage in, garbage out” is still true today, just as it was in the ancient era of punched card computing.
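A tiny, purely hypothetical Python example of “garbage in, garbage out” in this context: a naive model that simply learns per-group rates from historically biased labels will faithfully reproduce that bias in its outputs. All names and numbers here are made up for illustration:

```python
from collections import defaultdict

# Purely hypothetical records: (group, recorded_reoffense). Group "B" was
# policed more heavily, so more events were recorded -- the labels are biased.
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 50 + [("B", 1)] * 50

def learn_rates(records):
    """Learn per-group positive rates -- a deliberately naive 'model'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = learn_rates(history)
flagged = {g: rate > 0.3 for g, rate in rates.items()}  # flag "high risk" groups
print(rates)    # {'A': 0.2, 'B': 0.5}
print(flagged)  # {'A': False, 'B': True} -- the input bias comes right back out
```

Nothing in the code is “unfair”; the unfairness was baked into the training data, and the system simply laundered it into an authoritative-looking prediction.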

Obviously, we don’t want to screw this up. There are real challenges and significant (but fascinating!) issues to be solved in the ongoing development and deployment of AI systems going forward, and in helping non-technical persons to better understand what these systems are really about and how they could actually improve their lives.

And to the extent that we can concentrate on the real world of AI — and less on dramatic “doom and gloom” scenarios straight out of the movies — I believe that we’ll all be better off.

–Lauren–