Google — and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

UPDATE (June 1, 2018): Google has reportedly announced that it will not renew the military artificial intelligence contract discussed in this post after it expires next year, and will shortly unveil new ethical guidelines for the use of artificial intelligence systems. Thanks, Google, for doing the right thing for Google, Googlers, and the community at large.

– – –

A few months ago, in “The Ethics of Google and the Pentagon Drones” –  https://lauren.vortex.com/2018/03/06/the-ethics-of-google-and-the-pentagon-drones – I discussed some of the complicated nuances that can come into play when firms like Google engage with military contracts that are ostensibly for defensive purposes, but that could potentially lead to offensive uses of artificial intelligence technologies as well. This is not a simple matter. I was myself involved with Defense Department projects many years ago (including the Internet’s ancestor ARPANET itself), as I explained in that post.

The focal point for these concerns inside Google (which have triggered significant internal protests and some reported resignations) is the U.S. Department of Defense (DoD) “Project Maven” — aimed at using A.I. technology for drone image analysis, among other possibilities.

Now a 27-page DoD presentation document regarding Maven is in circulation, and frankly it is discomforting and disturbing to view. It is officially titled:

“Disruption in UAS: The Algorithmic Warfare Cross-Functional Team (Project Maven)”

And it sends a chill down my spine precisely because it seems to treat the topic rather matter-of-factly, almost lightheartedly.

There are photos of happy surfers. The project patch features smiling, waving cartoon robots who would fit right into an old episode of “The Jetsons” — with a Latin slogan that roughly translates to “Our job is to help.” Obviously DoD has learned a lesson from that old NSA mission patch that showed an enormous octopus with its tentacles draped around the Earth.

You can see the entire document here:

https://www.vortex.com/dod-maven

I stand by my analysis in my post referenced above regarding the complicated dynamics of such projects and their interplay with technology firms such as Google.

However, after viewing this entire Project Maven document, I have a gut feeling that long-term participation in this project will not turn out well for Google overall.

To be sure, there will likely be financial gains from the resources provided to DoD for this project — but at what cost to goodwill among employees inside the company, and to the firm’s public image overall?

Certainly the argument could be made that it’s better for a firm with an excellent ethical track record like Google to participate in such projects, rather than leaving them solely to traditional defense contractors — some of whom have a long history of profiting from wars with little or no regard for ethical considerations.

But over the years I’ve seen good guys get trapped by that kind of logic, and once you’re deeply immersed in the battlefield side of the military-industrial complex it can be difficult to ever extricate yourself, irrespective of good intentions.

Thankfully from my standpoint, this isn’t a decision that I have to make. But while I don’t claim to have a functional crystal ball, I’ve been around long enough that my gut impressions regarding situations like this have a pretty good track record.

I sincerely hope that Google can successfully find its way through this potential minefield. For a great company like Google with so many great employees, it would be a tragedy indeed if issues like those related to Project Maven did serious damage to Google and to relationships with Googlers going forward.

–Lauren–

Calls for a Google Ombudsman — from Nine Years Ago!

Back in 2009, “Techdirt” posted “Does Google Need An Ombudsman?” – https://www.techdirt.com/articles/20090302/0125093942.shtml – excerpted below. Here we are nine years later, and that need is demonstrably far greater now! “Techdirt” was referring to some of the earliest of what would ultimately become many posts of mine on this topic.

– – – – – – – –

Lauren Weinstein has an interesting discussion going on his blog, noting a series of recent incidents where Google has done a spectacularly poor job in communicating with the public — something I’ve been critical of the company about, as well. The company can be notoriously secretive at times, even when being a lot more transparent would help. Even worse, the company is quite difficult to contact on many issues, unless you happen to know people there already. Its response times, if you go through the “official channels,” are absolutely ridiculous (if they respond at all). Weinstein’s suggestion, then, is that Google should set up a small team to play an ombudsman role — basically acting as the public’s “representative” within the company …
         —  Mike Masnick – “Techdirt” – March 3, 2009

 – – – – – – – –

–Lauren–

I Join EFF in Opposing the California SB 1001 “Bots Disclosure” Legislation

The Electronic Frontier Foundation recently announced its opposition to California Senate Bill SB 1001, which would mandate explicit “I am not a human” disclosure notices for all manner of automated reply, response, and other computer-based systems.

While it’s certainly the case that considerable controversy was triggered by Google’s demonstration earlier this month of their AI-based “Duplex” phone calling system (“Teachable Moment: How Outrage Over Google’s AI ‘Duplex’ Could Have Been Avoided” – https://lauren.vortex.com/2018/05/11/teachable-moment-how-outrage-over-googles-ai-duplex-could-have-been-avoided), Google reacted quickly and appropriately by announcing that production versions of the system would identify themselves to called parties.

Voluntary approaches like this are almost always preferable to legislative “fixes” — the latter all too often attempt to swat flies using nuclear bombs, with all manner of negative collateral damage.

Such is the case with the California Senate’s SB 1001, which would impose distracting, confusing, and disruptive labeling requirements on a vast range of online systems of all sorts, the overwhelming majority of which are obviously not pretending to be human beings in misleading ways.

Even worse, the legislation assumes that these systems are purposely attempting to mislead unless they explicitly identify themselves as non-human. This is a ludicrous assumption — the legislation would be at least a bit more palatable if it were restricted to situations where a genuine intent to mislead is present, such as automated telemarketing phone spam.

The labeling requirements imposed by SB 1001 would make the obnoxious scourge of “We use cookies! Click here if you understand!” banners (the result of misguided EU regulatory actions) look like a walk in the park by comparison.

While automated communications systems will not be immune to misuse, SB 1001 will not stop such abuse and will cause massive confusion for both site operators and users. It is not only premature, it is a textbook example of overly broad and badly written legislation that was not adequately thought through.

SB 1001 should not become law.

–Lauren–

Android In-App Payments Abuse Nightmares: Why Google Is Complicit

UPDATE (May 26, 2018): To be clear about this, I would much prefer that Google had an Ombudsman, an Ombudsman team, or a similar set of roles internally to deal with situations like the one described in this post. While I’m glad to try to help when I can, and I greatly thank Google for their quick response in this case and the issuing of a full refund to this Android user, it shouldn’t require public action from someone outside of Google like me to drive the appropriate resolution of such cases.

UPDATE (May 25, 2018): I’ve just been informed that a full refund has now been issued in the case I discussed in my post below from yesterday. I hope that the general class of issues described therein, especially the presence of expensive in-app “virtual” purchases targeted at children — and the specific operations of Android parental control mechanisms — will still be addressed going forward. In the meantime, my great thanks to Google for quickly doing the right thing in this case of a (now very happy) Android user and her child. 

– – –

Should an Android app aimed at children include a $133 in-app purchase for worthless virtual merchandise? If you’re the kind of crook who runs fake real estate “universities” and stiffs your workers via multiple bankruptcies, you’d probably see nothing wrong with this. But most ethical people might wonder why Google would permit such an abomination. Is the fact that they take a substantial cut of each such transaction clouding their usually excellent ethical sensibilities in this case? Or is Google somehow just unaware, underestimating, or de-emphasizing the scope of these problems?

Complaints regarding in-app Android purchases arrive at my inbox with notable regularity. But one that arrived recently really grabbed my attention. Rather than attempt to summarize it, I’m including extended portions of it below (anonymized and with the permission of the authors).

Beyond the details of how parental locks and Google Play Store payment systems are designed and the ways in which they could be greatly improved, a much more fundamental problem is at the core of these issues. 

I have long considered in-app purchase models to be open to enormous abuse. Where they are used to “unlock” actual capabilities in non-gaming applications, they can play a useful role. But their use for the purchase of worthless “virtual” goods or points in games, especially when total purchases over the lifetime of a game can add up to more than a few dollars, is difficult to justify. It is impossible to justify in games that are targeted at children.

Though these practices are apparently entirely legal, it is unconscionable that Google permits these sorts of apps to exploit children and their parents, and then refuses to offer full refunds to parents who have been victimized as a result, particularly when those parents have diligently attempted to use the payment control mechanisms that are currently available.

Not Googley at all. Shame on you, Google.

–Lauren–

 – – – – – –

Hi Lauren,

Thanks so much for considering this. is:  @gmail.com  – she’s fine with you sharing that with Google.

If it can happen to someone of her education what hope do the rest of us have… let alone a 4-year old who can’t read. She says also it’s fine to share her story, fully anonymised … It’s pretty horrible and I suspect also pretty widespread too….

On 05/23 09:16, wrote:

hi Lauren,

I’m sure you’ve heard lots of these kinda stories, so your indulgence is requested. Friend of mine – who holds a doctorate in business, no less – got a bill for around GBP 650 after her 4-year old daughter was able to buy in-game despite parental locks. Or, that’s what my friend thought: Google said that updating the unit could wipe out those locks. And no refund is thus forthcoming. She has contacted the app developers too but obviously they’re happy enough with her money so nothing doing there.

Two things:

(1) Why does an update clear locks? This is surely bad practice?

(2) How the hell can anyone justify a GBP 100 in-app purchase in a game directed at toddlers? This one can’t read yet and as we know, kids are experts on using touchscreen tech before any language skills develop.

P.S. any advice welcome – thanks loads

– – –

My 4 year old loves watching . On (Freeview) one of her favourite cartoons is . She loves this so much that she asked if she could download on my mobile phone to play. I obliged and made the usual checks; no ads, and parental locks engaged. She then asked to download another similar game; . She absolutely loves this game, and for a 4 year old, she’s got pretty good at it… certainly better than me and her big sis.

Again, I made sure parental lock and no ads were ticked within the app…. Last Friday I received a telephone call from the Fraud dept. at  Credit Card, they suspected fraudulent activity on my card – in fact one transaction of GBP 99.99 and another of GBP 1.99 had gone on my card that morning.. and I hadn’t even left my house. I was obviously shocked and concerned – they said the payee was Google Play.

They asked if I had an android phone and whether I let me kids play on the phone. I said yes, but all games are ‘locked down’ so to speak. She asked me to go into my phone to check… to my sheer horror, I saw a long list of  ‘in-app’ purchases made by my 4 year old within the space (mostly) of three weeks. Now I usually check my credit card spends at the end of every month, and I hadn’t got around to checking for this month. I quickly toted up the separate transactions and figured that she had burned GBP 498.88  buying ‘.. GBP 99.99 and ‘ 1.99/ 29.99’ within the game.

I was totally in shock and rightly upset. Of course this wasn’t her fault  – she can’t read.. but how can an app associated with a children’s cartoon think its OK to embed in app purchases within their game … Google have informed me that updating my android can wipe out all the parental locks etc, and I have to check/ re-engage all locks etc after EVERY software upgrade. I contacted Google, and they have disappointingly refunded only GBP 70.00 – stating that its outside their T&Cs and that I need to request a refund from ; the App developers.

I’ve emailed , they haven’t bothered to respond (I’ve waited 72 hours and counting now) . I’ve also contacted Credit Card, and they’ve said that they won’t help me… Surely this is ‘Soft Fraud’ and this is unethical and wrong… so parents please beware. This has and still is really upsetting for both me and my daughter. Please share and just be hyper careful on your phones. Here is most of her spending spree!! 

– – – – – –

Using Google’s Daydream VR Headset for Augmented Reality and Positional Tracking Applications

When paired with suitable higher-end Google, Samsung, or various other brands of smartphones, the Google Daydream VR headset (currently in its second-generation “2017” version, which is the version I’m discussing in this post) offers an extremely inexpensive path to “virtual reality” and other related experiences and experiments (the headset sometimes goes on sale for as little as $50).

In addition to displaying Daydream-compatible VR apps, when a suitable Samsung phone is used it is also possible (via an interesting sequence of actions) to run many Oculus/Samsung Gear VR headset apps on the Daydream headset (feel free to contact me if you’re interested in the details).

At first glance (no pun intended) one would assume that Daydream headsets are unsuitable for “augmented reality” VR applications that require use of the phone camera, since the Daydream flap that holds the phone in place completely blocks the back of the phone and the camera lens.

This also seemingly eliminates the possibility of Daydream headset experimentation with “inside-out” 6DOF (six degrees of freedom) positional tracking applications, which could otherwise leverage the phone’s camera and Google’s “ARCore” platform to provide capabilities that conventionally have been available only with far more expensive VR headsets.
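As a rough illustration of what the software side of such an experiment involves, here is a minimal sketch of reading a 6DOF camera pose from ARCore in a Java Android app. The class and method names are mine for illustration only, and it assumes an ARCore Session has already been created, configured, and resumed elsewhere (setup, permissions, and rendering are omitted); it is meant simply to show the kind of position and orientation data ARCore exposes, not any part of the headset modification described below.

```java
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

// Minimal sketch: poll the latest 6DOF (position + orientation) pose of the
// phone's camera from an already-configured ARCore Session. Assumes "session"
// was created and resumed elsewhere; error handling is omitted.
public final class PoseSampler {
    public static void logCurrentPose(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();                 // grab the most recent camera frame
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() == TrackingState.TRACKING) {
            Pose pose = camera.getPose();               // pose of the physical camera in world space
            float[] t = pose.getTranslation();          // x, y, z position in meters
            float[] q = pose.getRotationQuaternion();   // orientation as (qx, qy, qz, qw)
            System.out.printf("position: %.2f %.2f %.2f  rotation: %.2f %.2f %.2f %.2f%n",
                    t[0], t[1], t[2], q[0], q[1], q[2], q[3]);
        }
    }
}
```

Of course, none of that is possible if the headset physically covers the camera lens, which brings us to the hardware problem.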

We might consider cutting a hole through the rather thick flap of the headset (which also includes an integral heat sink — important when the flap is closed), but that’s messy at best, risks accidentally damaging embedded NFC tags, and is dependent on the exact position of the camera lens for any specific phone.

So here’s my alternative, which requires zero modification of the Daydream headset itself and only a few simple parts: an elastic strap to hold the phone in place with the flap of the headset left open and the phone’s camera lens exposed for use. The completed strap is simple to install or remove from the headset at any time, since the flap can be pulled outward to create a gap for this purpose.

To view a set of photos showing the assembly sequence and the finished design, please visit:

https://lauren.vortex.com/daydream-mods

I used a piece of elastic that already had a plastic catch of suitable size on the end to hold the elastic in place under the flap hinge. Alternatively, almost anything of similar dimensions could be attached to a strip of elastic to achieve the same result.

You simply slide the completed assembly between the flap of the headset and the main part of the headset, strap in the phone, and you’re ready to go. I originally tested this using a metal washer, but decided that even wrapped in tape there was some risk of scratching the phone. A better protected metal washer would probably be fine. I printed up a custom-sized plastic washer to use instead.

The elastic holds the phone in place quite snugly, though with enough violent head motion it might be possible to force the phone to slide out from under the elastic. It should be straightforward to slip little barriers onto the sides to prevent this, or simply avoid violent head motions! Also keep in mind that you don’t want to apply significant downward pressure to the open flap, since that would risk breaking the plastic supports that keep it from falling further open.

Anyway, it’s really just the elastic, the washer, and several small cable ties!

OK, it’s a hack. No apologies.

If you have any questions, please let me know!

And of course, be seeing you.

–Lauren–