How the Pentagon Is Trying to Shame Google and Googlers

I hadn’t been planning to say much more right now about Google and “Project Maven” — the Defense Department project in which Google will wisely not be renewing participation when the existing contract ends next year (https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document).

But as usual, the Pentagon just doesn’t know when to leave well enough alone, and I am very angry today to see that a Pentagon-affiliated official is attempting to “death shame” Google and its employees regarding their appropriate decision not to renew with Maven. 

This particularly upsets me because I’ve been to this rodeo before. Over the years I’ve turned down potential work — work that I really could have used! — because of its direct relationship to actual battlefield operations. And in several of those cases, attempts were made to “death shame” me as well — to tell me that if I refused to participate in those aspects of the military-industrial complex, I would be morally complicit in any potential U.S. forces deaths that might theoretically occur due to the lack of my supposed expertise.

This is a technique of the military that is as old as civilization. Technologists reaching back to the days of Mesopotamia — and likely earlier — have been asked (or required, often under threat of death) to provide their services for ongoing military operations.

What makes this so difficult is that typically it’s impossible to clearly separate defensive from offensive projects. As I’ve previously noted, all too often what appears to be defensive work morphs into attack systems, and in the hands of some leaders (especially lying, sociopathic ones) can easily end up extinguishing vast numbers of innocent lives.

This was explicitly acknowledged in the infuriating words earlier today by a former top U.S. Defense Department official — former Deputy Defense Secretary Robert O. Work, who initiated Project Maven:

“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves.”

He also suggested that Google was being hypocritical, because in his view their AI research cooperation with China would benefit China’s military.

His statements are textbook Pentagon doublespeak, and his assertions are not only fundamentally disingenuous, but are blatant attempts at false equivalences.

Particularly galling is his “might wind up with us taking a shot” reference, as if to say that offensive operations were merely a minor footnote in the battle plan. But when you’re dealing with operational battle data, there are no minor footnotes in this context — that data analysis will be used for offensive operations — you can count on it.

To be clear, the righteous defense of the USA is an admirable pursuit. But if one chooses to go all in with the military-industrial complex to that end, it’s at the very least a decision to be made with “eyes wide open” — not with false assumptions that your work will be purely defensive.

And for those of us who refuse to work on military projects that will ultimately be used offensively — keeping in mind the horrific missteps of presidents far less twisted and bizarre than the one currently in the Oval Office — there is absolutely no valid shame associated with that ethical decision.

There’s a critical distinction to be made between basic research and operational battle projects. It’s much the same distinction as my willing work on the DoD ARPANET project decades ago — that led directly to the Internet that you’re using right now — vs. a range of ongoing, specifically battle-oriented projects with which I refused to become associated.

This is also what gives the lie to Robert Work’s attempt to discredit Google’s AI work with China. Open AI research is like Open Source software itself — usable for good or evil, but open to all and light years away from projects primarily with battle intents.

Google and other firms — including their managements and employees — will of course need to find their own paths forward in terms of what sorts of work and contracts they are willing to pursue that may involve the Department of Defense or other military-associated organizations. As we’ve seen with ARPANET, some basic research work funded by the military can indeed yield immense positive benefits to the country and the world.

Personally, I find the concept of a dividing line between such basic research — as opposed to clearly battle-oriented projects — to be a useful guide toward determining which sorts of projects meet my own ethical standards — and which ones do not. As the saying goes, your mileage may vary.

But in any case, we should all utterly ignore Robert Work’s repulsive attempt to shame Googlers and Google itself — and relegate his way of thinking to the dustbin of history where it truly belongs.

–Lauren–

How the Dominant ISPs Are Trying to Scare People Into Opposing California Net Neutrality

Are there any sordid depths to which the crooked, lying, dominant ISPs won’t go to try to terrify people into opposing Net Neutrality in California? Nope, let’s face it, these firms spout outright lies as if they were Donald Trump. Yep, seriously evil, as this robocall voicemail currently in circulation so clearly demonstrates! – https://lauren.vortex.com/crooked-isps-ca-822.mp3

–Lauren–

A Modest Proposal: Identifying Europeans on the Internet for Their Protection

With European politicians and regulators continuing to churn out proposed regulations to protect their citizens from the evils of the Internet, via “The Right To Be Forgotten” — and the currently under consideration Article 11 “link tax” and Article 13 content filtering censorship proposals — it is becoming more important than ever that Internet sites around the world be able to identify European users so that they may be afforded “appropriate” treatment at those sites, including blocking from all services as necessary.

Already, some Europeans are suggesting that they will attempt to evade the restrictions that have been implemented or proposed by their beneficent and magnificent leaders. The world must band together to prevent European users from pursuing such a tragic course of action.

Obviously, all VPN usage by Europeans that attempts to obscure the European geographic locations of their source IP addresses must be banned. In fact, it would be even safer for Europeans if all usage of VPNs by Europeans were prohibited by their governments, except under extraordinary circumstances requiring government licenses and monitoring for inappropriate usage.

All web browsers used by Europeans should be required to send a special “protected European resident” flag to server sites, so that those sites may determine the appropriate blocking or other disposition of those browser requests. Use of unapproved browsers or tampering with browsers to remove this protection flag would of course be a criminal act.

We must also solve the problem of Europeans traveling outside of Europe, where they might be tempted to use public Internet access systems that do not meet the high standards of protection required by European regulations.

One possible solution to this dilemma would be to require the permanent implantation of RFID identification capsules in all Europeans who travel beyond the protected confines of Europe. Don’t worry — these need not individually identify any given person, they need only identify them as European. Scanning equipment at public computers around the planet could detect these implants and automatically apply appropriate European protection rules. Europeans would be free to travel the world with no fears of accidentally using systems that did not apply their government’s protective regulations!

This modest proposal of course only scratches the surface of the sorts of solutions that will be needed to help assure that EU citizens fully and completely abide by their governments’ benevolent actions and requirements.

But the EU and its residents can feel confident that the rest of the world’s Internet will do its part to help keep Europeans safe, secure, and law-abiding at all times!

–Lauren–

Google’s New AI Principles Are a Model for the World

In the wake of Google’s announcement that they will not be renewing their controversial “Project Maven” military AI contract when it expires next year (“Google — and the Defense Department’s Disturbing ‘Maven’ A.I. Project Presentation Document” – https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document), Google has now published a post describing their policy positions regarding AI at Google going forward: “Artificial Intelligence at Google: Our Principles” (https://www.blog.google/topics/ai/ai-principles).

Since I was on balance critical of Google’s participation in Project Maven, but am very supportive of AI overall (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all), I’ve received a bunch of queries from readers asking how I feel about Google’s newly announced AI principles statement.

“Excellent” is my single word summary, especially in terms of the principles being balanced — and above all — realistic.

AI will be a critical tool going forward, both in terms of humanity and the global ecosystem itself. And like any tool — reaching all the way back to a chunk of rock on the ground in a prehistoric cave — AI can be used for good purposes, evil purposes, and in a range of “gray area” scenarios that are more difficult to cleanly categorize one way or the other.

It’s this last set of concerns, especially AI applications with multiple uses, that I’m particularly glad to see Google addressing specifically in their principles post.

For those of us who aren’t psychopaths or sociopaths, most fundamental characteristics of good and evil are usually fairly obvious. But as one grows older, it becomes apparent that the real world is not typically made up of black and white situations where one or another set of these characteristics exist in isolation — much more often we’re dealing with a complicated kaleidoscope of interrelating issues.

So — to address one point that I’ve been most asked about over the last couple of days regarding Google’s AI statement — it is entirely appropriate that Google explicitly notes that they will not be abandoning all aspects of government and military AI work, so long as that work is not likely to cause overall harm. 

In a “perfect” world we might not need the military — hell, we might not even need governments. But this is not a perfect world, and it’s one thing to use AI as a means to kill ever more people more efficiently, and something else entirely to use AI defensively to help protect against the genuine evils that still pervade this planet, as Google says it will do.

AI is still in its relative infancy, and attempts to accurately predict its development (beyond the very short term) are likely doomed to failure. AI principles such as Google’s will always by necessity be works in progress, and Google in fact explicitly acknowledges this fact.

But ethical firms and ethical governments around the world could today do themselves, their employees, and their citizens proud by accepting and living by AI principles such as those that Google has now announced.

–Lauren–

Why We May Have to Cut Europe Off from the Internet

UPDATE (March 28, 2019): Early this week, the EU passed this horrific legislation into law. How the individual member countries of the EU will implement it is anyone’s guess — utter chaos is certain, and drastic measures by the rest of the world to protect their own Internet services and users from such EU madness will indeed likely be necessary.

UPDATE (July 5, 2018): In a rare move, the EU Parliament voted today to block this current copyright legislation, which opens it to amendments by the entire membership of Parliament, leading to a new vote this coming September. So the war against this horrific legislation is by no means over, but this is an important battle won for now.

– – –

It’s no joke. It’s not hyperbole. If the European Union continues its current course, the rest of the world may well have to consider how to effectively “cut off” Europe from the rest of the Internet — to create an “Island Europe” in an Internet communications context. 

For those of us involved with the Net since its early origins, the specter of network fragmentation has long been an outcome that we’ve sorely hoped to avoid. But continuing EU actions could create an environment where mechanisms to tightly limit Europe’s interactions with the rest of the global Internet may be necessary — not imposed with pleasure, not with vindictiveness, but for the protection of free speech around the rest of the planet.

The EU will later this month be voting on a nightmarish copyright control scheme (“Article 13”) that would impose requirements for real-time “copyright filtering” of virtually all content uploaded to major and many minor Internet sites, with no protections against trolling, and the certainty of inappropriately blocking vast quantities of public domain and other materials, with no real protections against errors and no effective avenues for appeals. Please see:

“On June 20, an EU committee will vote on an apocalyptically stupid, internet-destroying copyright proposal that’ll censor everything from Tinder profiles to Wikipedia” (https://boingboing.net/2018/06/07/thanks-axel-voss.html).

Even if this specific horrific proposal is voted down, it’s important to review how we came to this juncture, as the EU has increasingly accelerated its program to become the Internet’s global censorship czar, in ways that even countries like China and Russia haven’t attempted to date.

As far back as 2012 and earlier, in “The ‘Right to Be Forgotten’: A Threat We Dare Not Forget” (https://lauren.vortex.com/archive/000938.html), I warned of the insidious nature of content censorship schemes flowing forth from Europe, and I’ve consistently warned that — like the proverbial camel’s nose under the tent — Europe would never be satisfied with any concessions offered by Internet firms. 

Time has borne out my predictions. In ensuing years, the EU has expanded its demands until now it considers itself in key respects to be the global arbiter of what should or should not be seen by Internet users around the world. 

Like civilization’s other information control tyrants, the EU has found that a taste of censorship power inevitably leads to utter censorship gluttony. The sense that “we know best what those stupid little people should be allowed to see” is as old as human history, long predating modern communications systems.

European citizens are of course free to elect whatever sorts of governments they choose. If that choice is for information control tyrants whose pleasure is to victimize their own citizens, so be it.

But if Europe continues to insist that its tyranny of censorship and information control must be honored by the rest of the world, then the rest of the world will be reluctantly forced to treat Europe as an Internet pariah, and use all possible technical means to isolate Europe in manners that best protect everyone else’s freedom of Internet speech. 

–Lauren– 

When Google Blames Users for Privacy Problems


One of my favorite user interface (UI) design adages is pretty much simplicity itself:

When you blame the users, you’ve already lost the argument.

I’m reminded of this by Google’s public reactions to a recent study revealing that almost a third of nearly 10,000 sampled Google G Suite commercial customers were unwittingly exposing sensitive corporate and/or customer data to the public Internet without access protections: “Widespread Google Groups Misconfiguration Exposes Sensitive Information” (https://www.kennasecurity.com/widespread-google-groups-misconfiguration-exposes-sensitive-information/).

Without getting into the technical details here, the underlying issues relate to the multiplicity of settings that control public access to Google Groups and their associated mailing lists. While Google defaults these to their most secure settings, the sheer quantity of misconfigured, potentially information-leaking sites represents empirical proof that a very significant number of G Suite users and administrators do not adequately understand these settings, with resulting privacy-negative impacts.

Google’s response — in essence — has been “RTFM” (Read The F‑‑‑‑‑‑ Manual): The settings are there, if you’re not using them correctly, that’s your problem, not ours!

And while Google has posted some additional related info (e.g., on their G Suite Updates Blog), those explanations mostly serve to emphasize the relative complexity of the interface, and no changes that I’m aware of have been made to the interface in response to these concerns.

The situation is a bit reminiscent of auto manufacturers who resisted redesigning key aspects of their vehicles, even as it became ever more obvious that significant numbers of drivers were having accidents due to existing design elements.

As far as I’m concerned, the scope of the reported G Suite privacy leakage problems indicates nothing less than a privacy design failure in this instance.

Rather than trying to make excuses for an existing user interface that is clearly failing significant numbers of customers (and with G Suite, we’re talking about paying customers!), Google needs to take an immediate and hard look at the specific design aspects that are enabling these misconfiguration-based confidential information exposures.

A practical fix might not even involve major changes to the UI, and might be adequately served by mechanisms as simple as more in-your-face “pop-up” warnings to users and administrators, appearing in conjunction with additional confirmation dialogues when associated privacy-sensitive settings are being altered.
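The gating logic behind such a confirmation mechanism could be quite simple. Here is a minimal sketch of the idea; the setting names, values, and function names are invented for illustration and do not correspond to actual G Suite configuration keys:

```python
# Hypothetical sketch: refuse to apply a privacy-sensitive settings change
# that would expose content publicly, unless the administrator explicitly
# confirms after seeing an in-your-face warning. All names are illustrative.

PRIVACY_SENSITIVE = {"who_can_view_topics", "who_can_post", "who_can_join"}
PUBLIC_VALUES = {"anyone_on_the_internet", "public"}

def apply_setting(settings, key, value, confirm):
    """Apply a group setting, demanding explicit confirmation (via the
    supplied confirm() callback, e.g. a pop-up dialog) before any change
    that would make content visible to the public Internet."""
    if key in PRIVACY_SENSITIVE and value in PUBLIC_VALUES:
        warning = (f"WARNING: setting '{key}' to '{value}' makes this "
                   "group's content visible to anyone on the Internet.")
        if not confirm(warning):
            return False  # change rejected; settings left untouched
    settings[key] = value
    return True
```

The point is not the code itself but the design principle: the dangerous path requires an extra, unmistakable confirmation step, while routine changes proceed unimpeded.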

But clearly, explanatory blog posts aren’t going to cut the mustard for these kinds of problems, and I urge Google’s world-class privacy team to effectively address this situation as soon as possible.

–Lauren–

Hate Speech — and Google’s Public Relations “Death Wish”

I’ve been writing publicly for a long time. Sometimes it feels like my earliest articles and posts were composed in runic alphabets inscribed on stone tablets. I’ve always had a rule that I’ve tried to abide by: “Never write when you’re angry!”

Today I’ll violate that self-imposed prohibition. I’m in a vile mood, and I’m here at the keyboard anyway.

Those of you who have followed my writings (and have still somehow managed to maintain a semblance of sanity) know that I frequently deal with Google-related issues. I started doing that shortly after Google first appeared on the Net, and here we are now almost 20 years later. 

I was pretty tough on Google back then. I was unhappy with their privacy practices at the time and some other related issues, and I was not reluctant to present my feelings about such matters. Similarly, as Google evolved over the years into a world-class example of privacy and security best practices and has done so much other good work, I’ve enthusiastically pointed out the efforts of the Google teams involved. And when I feel that Google has screwed up regarding something these days, I point that out directly as well.

My policy of always trying to honestly write about issues using a “call ’em as I see ’em” philosophy has left a lot of partisans unhappy on both sides of the political spectrum, who view any variance from “the party line” on any given matter to be both dangerous and intolerable.

This has been a reality to one extent or another since the earliest ARPANET days when I first began publicly posting, but in recent years it has blown up into a state of affairs that is orders of magnitude more vicious.

For example, late last week I spoke about Google on a national radio venue where I very frequently guest, and pushed back against the false claims of some national GOP politicians, who were again parroting the Big Lie that Google purposely suppressed and undermined conservative viewpoints (the trigger this time was a search results Knowledge Panel error due to a defaced Wikipedia source page).

I’m usually happy to do this — I get paid nuthin’ for these appearances — I value the opportunity to speak some truth before these very large audiences that all too often are trapped in propagandistic, anti-technology filter bubbles where outright lies about firms like Google are common currency.

It’s gratifying to so frequently get emails the next day saying variations of “Thanks for that — nobody ever explained it to me that way before!”

But over time, and especially since the 2016 elections, the worst aspects of our toxic political environment have been contaminating more and more of these discussions, to the extent that my on-air comments supporting Google last week — perhaps because Donald Trump, Jr. was involved — have triggered a hate speech campaign that is rather sickening to behold. 

This has happened before — and I have a pretty thick skin.

Yet this time it feels different. I find myself wondering why the blazes I keep sticking my neck out this way. This isn’t my job. I don’t get paid for anything I write or say these days — I’m long term unemployed and try to get by with whatever sporadic and limited consulting I can dredge up from time to time.

More to the point, one wonders — especially with so much at stake — why Google isn’t taking a more proactive stance to protect the company, their employees, and the global community that depends on them — from the ongoing torrent of politically-motivated lies and attacks that are clearly designed to set the stage for broad censorship and government micromanagement of data for political purposes! Why doesn’t Google have employees out there doing what I’m doing? Why does Google continue to create a vacuum through their silence, a vacuum that haters fill with outright lies that most onlookers have no simple way to differentiate from the truth?

Of course we already know part of the answer. Google is famously terrified of the so-called “Streisand Effect” — the fear that even rebutting lies will lend credence and more attention to them.

20 or even arguably 10 years ago, this might on balance have been a reasonable philosophy for Google to practice as a cautious firm.

But today, I’m increasingly convinced that Google’s not fighting back against these lies in every possible legitimate way amounts to a kind of corporate “death wish” that is ultimately putting everything good that Google has built and stands for at terrible risk.

And if Google loses this war, we all lose.

Governments, politicians, and other entities (including not only the alt-right but also many elements of more conventional left and right-wing politics as well) are using Google’s reticence to do battle as a green light for the acceleration of anti-Google efforts to push intolerant information-control agendas on national, transnational, and global scales.

If such forces succeed in decimating Google in the manners that are being postulated, the results could be catastrophic for free speech around the planet.

Knowing Googlers as I do, it seems certain that most of them see these dangers very clearly from the inside — yet the “death wish” in terms of how Google actually communicates with the outside world seems more encompassing than ever.

This makes me very sad — and as I said above very angry as well.

The deep, dank pit looms before us, and the razor sharp blade of the pendulum descends closer with every tick of the clock. Either we deal with these issues seriously and effectively now, or very soon we’ll find that our wonderful hoped-for tomorrows have turned into nothing but a putrid, rotting pile of wasted yesterdays.

And that’s the truth.

–Lauren–

Google — and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks Google for doing the right thing for Google, Googlers, and the community at large.

– – –

A few months ago, in “The Ethics of Google and the Pentagon Drones” –  https://lauren.vortex.com/2018/03/06/the-ethics-of-google-and-the-pentagon-drones – I discussed some of the complicated nuances that can come into play when firms like Google engage with military contracts that are ostensibly for defensive purposes, but potentially could lead to offensive use of artificial intelligence technologies as well. This is not a simple matter. I was myself involved with Defense Department projects many years ago (including the Internet’s ancestor ARPANET itself), as I explained in that post.

The focal point for these concerns inside Google (triggering significant internal protests and some reported resignations) revolves around the U.S. Department of Defense (DoD) “Project Maven” — aimed at using A.I. technology for drone image analysis, among other possibilities.

Now a 27 page DoD presentation document regarding Maven is in circulation, and frankly it is discomforting and disturbing to view. It is officially titled:

“Disruption in UAS: The Algorithmic Warfare Cross-Functional Team (Project Maven)”

And it sends a chill down my spine precisely because it seems to treat the topic rather matter-of-factly, almost lightheartedly.

There are photos of happy surfers. The project patch features smiling, waving cartoon robots who would fit right into an old episode of “The Jetsons” — with a Latin slogan that roughly translates to “Our job is to help.” Obviously DoD has learned a lesson from that old NSA mission patch that showed an enormous octopus with its tentacles draped around the Earth.

You can see the entire document here:

https://www.vortex.com/dod-maven

I stand by my analysis in my post referenced above regarding the complicated dynamics of such projects and their interplay with technology firms such as Google.

However, after viewing this entire Project Maven document, I have a gut feeling that long-term participation in this project will not turn out well for Google overall.

To be sure, there will likely be financial gains related to resources provided to DoD for this project — but at the cost of how much good will inside the company among employees, and in terms of potentially negative impacts on the firm’s public image overall?

Certainly the argument could be made that it’s better that a firm with an excellent ethical track record like Google participate in such projects, rather than only traditional defense contractors — some of whom have a long history of profiting from wars with little or no regard for ethical considerations.

But over the years I’ve seen good guys get trapped by that kind of logic, and once deeply immersed in the battlefield military-industrial complex it can be difficult to ever extricate yourself, irrespective of good intentions.

Thankfully from my standpoint, this isn’t a decision that I have to make. But while I don’t claim to have a functional crystal ball, I’ve been around long enough that my gut impressions regarding situations like this have a pretty good track record.

I sincerely hope that Google can successfully find its way through this potential minefield. For a great company like Google with so many great employees, it would be a tragedy indeed if issues like those related to Project Maven did serious damage to Google and to relationships with Googlers going forward.

–Lauren–

Calls for a Google Ombudsman — from Nine Years Ago!

Back in 2009, “Techdirt” posted “Does Google Need An Ombudsman?” – https://www.techdirt.com/articles/20090302/0125093942.shtml — excerpted below. Here we are nine years later, and that need is demonstrably far greater now! “Techdirt” back then was referring to some of my earliest of what would ultimately be many posts about this topic.

– – – – – – – –

Lauren Weinstein has an interesting discussion going on his blog, noting a series of recent incidents where Google has done a spectacularly poor job in communicating with the public — something I’ve been critical of the company about, as well. The company can be notoriously secretive at times, even when being a lot more transparent would help. Even worse, the company is quite difficult to contact on many issues, unless you happen to know people there already. Its response times, if you go through the “official channels,” are absolutely ridiculous (if they respond at all). Weinstein’s suggestion, then, is that Google should set up a small team to play an ombudsman role — basically acting as the public’s “representative” within the company …
         —  Mike Masnick – “Techdirt” – March 3, 2009

– – – – – – – –

–Lauren–

I Join EFF in Opposing the California SB 1001 “Bots Disclosure” Legislation

The Electronic Frontier Foundation recently announced their opposition to California Senate Bill SB 1001, which mandates explicit “I am not a human” disclosure notices relating to all manner of automated reply, response, and other computer-based systems.

While it’s certainly the case that considerable controversy was triggered by Google’s demonstration earlier this month of their AI-based “Duplex” phone calling system ( “Teachable Moment: How Outrage Over Google’s AI ‘Duplex’ Could Have Been Avoided” – https://lauren.vortex.com/2018/05/11/teachable-moment-how-outrage-over-googles-ai-duplex-could-have-been-avoided), Google reacted quickly and appropriately by announcing that production versions of the system would identify themselves to called parties.

Voluntary approaches like this are almost always preferable to legislative “fixes” — the latter all too often attempt to swat flies using nuclear bombs, with all manner of negative collateral damage.

Such is the case with the California Senate’s SB 1001, which would impose distracting, confusing, and disruptive labeling requirements on a vast range of online systems of all sorts, the overwhelming majority of which are obviously not pretending to be human beings in misleading ways.

Even worse, the legislation states that these systems are assumed to purposely be attempting to mislead unless they explicitly identify themselves as being non-humans. This is a ludicrous assumption — the legislation would be at least a bit more palatable if it was restricted to situations where a genuine intent to mislead was present, such as automated telemarketing phone spam.

The labeling requirements imposed by SB 1001 would make the obnoxious scourge of “We use cookies! Click here if you understand!” banners (the result of misguided EU regulatory actions) look like a walk in the park by comparison.

While automated communications systems will not be immune to misuse, SB 1001 will not stop such abuse and will cause massive confusion for both site operators and users. It is not only premature, it is a textbook example of overly broad and badly written legislation that was not adequately thought through.

SB 1001 should not become law.

–Lauren–

Android In-App Payments Abuse Nightmares: Why Google Is Complicit

UPDATE (May 26, 2018): To be clear about this, I would much prefer that Google had an Ombudsman, Ombudsman team, or similar set of roles internally, to deal with situations as described in this updated post. While I’m glad to try to help when I can, and I greatly thank Google for their quick response in this case and the issuing of a full refund to this Android user, it shouldn’t require public actions from someone on the outside of Google like me to drive the appropriate resolution of such cases.

UPDATE (May 25, 2018): I’ve just been informed that a full refund has now been issued in the case I discussed in my post below from yesterday. I hope that the general class of issues described therein, especially the presence of expensive in-app “virtual” purchases targeted at children — and the specific operations of Android parental control mechanisms — will still be addressed going forward. In the meantime, my great thanks to Google for quickly doing the right thing in this case of a (now very happy) Android user and her child. 

– – –

Should an Android app aimed at children include a $133 in-app purchase for worthless virtual merchandise? If you’re the kind of crook who runs fake real estate “universities” and stiffs your workers via multiple bankruptcies, you’d probably see nothing wrong with this. But most ethical people might wonder why Google would permit such an abomination. Is the fact that they take a substantial cut of each such transaction clouding their usually excellent ethical sensibilities in this case? Or is Google somehow just unaware, underestimating, or de-emphasizing the scope of these problems?

Complaints regarding in-app Android purchases arrive at my inbox with notable regularity. But one that arrived recently really grabbed my attention. Rather than attempt to summarize it, I’m including extended portions of it below (anonymized and with the permission of the authors).

Beyond the details of how parental locks and Google Play Store payment systems are designed and the ways in which they could be greatly improved, a much more fundamental problem is at the core of these issues. 

I have long considered in-app purchase models to be open to enormous abuse. Where they are used to “unlock” actual capabilities in non-gaming applications, they can play a useful role. But their use for the purchase of worthless “virtual” goods or points in games, especially when total purchases over the lifetime of a game can add up to more than a few dollars, is difficult to justify. Such purchases are impossible to justify in games that are targeted at children.

Though apparently entirely legal, it is unconscionable that Google permits these sorts of apps to exploit children and their parents, and then refuses to offer full refunds to parents who have been victimized as a result, particularly when those parents have attempted to diligently use the payment control mechanisms that are currently available.

Not Googley at all. Shame on you, Google.

–Lauren–

 – – – – – –

Hi Lauren,

Thanks so much for considering this. is:  @gmail.com  – she’s fine with you sharing that with Google.

If it can happen to someone of her education what hope do the rest of us have… let alone a 4-year old who can’t read. She says also it’s fine to share her story, fully anonymised … It’s pretty horrible and I suspect also pretty widespread too….

On 05/23 09:16, wrote:

hi Lauren,

I’m sure you’ve heard lots of these kinda stories, so your indulgence is requested. Friend of mine – who holds a doctorate in business, no less – got a bill for around GBP 650 after her 4-year old daughter was able to buy in-game despite parental locks. Or, that’s what my friend thought: Google said that updating the unit could wipe out those locks. And no refund is thus forthcoming. She has contacted the app developers too but obviously they’re happy enough with her money so nothing doing there.

Two things:

(1) Why does an update clear locks? This is surely bad practice?

(2) How the hell can anyone justify a GBP 100 in-app purchase in a game directed at toddlers? This one can’t read yet and as we know, kids are experts on using touchscreen tech before any language skills develop.

P.S. any advice welcome – thanks loads

– – –

My 4 year old loves watching . On (Freeview) one of her favourite cartoons is . She loves this so much that she asked if she could download on my mobile phone to play. I obliged and made the usual checks; no ads, and parental locks engaged. She then asked to download another similar game; . She absolutely loves this game, and for a 4 year old, she’s got pretty good at it… certainly better than me and her big sis.

Again, I made sure parental lock and no ads were ticked within the app…. Last Friday I received a telephone call from the Fraud dept. at  Credit Card, they suspected fraudulent activity on my card – in fact one transaction of GBP 99.99 and another of GBP 1.99 had gone on my card that morning.. and I hadn’t even left my house. I was obviously shocked and concerned – they said the payee was Google Play.

They asked if I had an android phone and whether I let me kids play on the phone. I said yes, but all games are ‘locked down’ so to speak. She asked me to go into my phone to check… to my sheer horror, I saw a long list of  ‘in-app’ purchases made by my 4 year old within the space (mostly) of three weeks. Now I usually check my credit card spends at the end of every month, and I hadn’t got around to checking for this month. I quickly toted up the separate transactions and figured that she had burned GBP 498.88  buying ‘.. GBP 99.99 and ‘ 1.99/ 29.99’ within the game.

I was totally in shock and rightly upset. Of course this wasn’t her fault  – she can’t read.. but how can an app associated with a children’s cartoon think its OK to embed in app purchases within their game … Google have informed me that updating my android can wipe out all the parental locks etc, and I have to check/ re-engage all locks etc after EVERY software upgrade. I contacted Google, and they have disappointingly refunded only GBP 70.00 – stating that its outside their T&Cs and that I need to request a refund from ; the App developers.

I’ve emailed , they haven’t bothered to respond (I’ve waited 72 hours and counting now) . I’ve also contacted Credit Card, and they’ve said that they won’t help me… Surely this is ‘Soft Fraud’ and this is unethical and wrong… so parents please beware. This has and still is really upsetting for both me and my daughter. Please share and just be hyper careful on your phones. Here is most of her spending spree!! 

– – – – – –

Using Google’s Daydream VR Headset for Augmented Reality and Positional Tracking Applications

UPDATE (September 21, 2018): You can experiment with these capabilities using the procedures and apps available here: https://github.com/ChristophGeske/ARCoreInsideOutTrackingGearVr

– – –

When paired with suitable higher-end Google, Samsung, or various other brands of smartphones, the Google Daydream VR headset (currently in its second generation “2017” version, which is the version I’m discussing in this post) offers an extremely inexpensive path for “virtual reality” and other related experiences and experiments (the headset sometimes goes on sale for as little as $50).

In addition to of course being able to display Daydream-compatible VR apps, when a suitable Samsung phone is used it is also possible (via an interesting sequence of actions) to use many Oculus/Samsung Gear VR headset apps with the Daydream headset as well (feel free to contact me if you’re interested in the details on this).

At first glance (no pun intended) one would assume that Daydream headsets are unsuitable for “augmented reality” VR applications that require use of the phone camera, since the Daydream flap that holds the phone in place completely blocks the back of the phone and the camera lens.

This also seemingly eliminates the possibility of Daydream headset experimentation with “inside-out” 6DOF (six degrees of freedom) positional tracking applications, which could otherwise leverage the phone’s camera and Google’s “ARCore” platform to provide these capabilities that conventionally have only been available with far more expensive VR headsets.

We might consider cutting a hole through the rather thick flap of the headset (which also includes an integral heat sink — important when the flap is closed), but that’s messy at best, risks accidentally damaging embedded NFC tags, and is dependent on the exact position of the camera lens for any specific phone.

So here’s my alternative that requires zero modification of the Daydream headset itself, and only a few simple parts to achieve — an elastic strap to hold the phone in place with the flap of the headset left open and the phone camera lens exposed for use. The completed strap is simple to install or remove from the headset at any time, since the flap can be pulled outward to create a gap for this purpose.

To view a set of photos showing the assembly sequence and the finished design, please visit:

https://lauren.vortex.com/daydream-mods

I used a piece of elastic that already had a plastic catch on the end of suitable size to hold the elastic in place under the flap hinge. Alternatively almost anything of similar dimensions could be attached to a strip of elastic to achieve the same result.

You simply slide the completed assembly between the flap of the headset and the main part of the headset, strap in the phone, and you’re ready to go. For the catch under the hinge, I originally tested a metal washer, but decided that even wrapped in tape there was some risk of scratching the phone, so I printed up a custom-sized plastic washer to use instead. A better-protected metal washer would probably be fine.

The elastic holds the phone in place quite snugly, though with enough violent head motion it might be possible to force the phone to slide out from under the elastic. It should be straightforward to slip little barriers on the sides to avoid this, or simply avoid violent head motions! Also keep in mind that you don’t want to apply significant downward pressure to the open flap, since that would risk potentially breaking the plastic supports that keep it from falling further open.

Anyway, it’s really just the elastic, the washer, and several small cable ties!

OK, it’s a hack. No apologies.

If you have any questions, please let me know!

And of course, be seeing you.

–Lauren–

Google’s Lightning-Fast Response to My “Trusted Contacts” Concerns


Very recently I discussed my concerns regarding several issues related to Google’s “Trusted Contacts” service. Trusted Contacts permits users to send their current location data to other users as notifications.

The situation was triggered when I suddenly began receiving such location data notifications from somebody I’d never heard of in Africa. Address typos? Trying to attract my attention for some other reason? I dunno.

But stopping those emailed notifications was easier said than done, because it turned out that there was no way to do so from a web page. The only available mechanism to block them was the Trusted Contacts smartphone app, which had to be installed and which required enabling Location History, which I don’t ordinarily use. After installing the app (which I otherwise had no personal interest in installing, and which of course a person without a suitable smartphone would not have been able to do), I was ultimately able to stop the notifications. Not a good user experience.

Since then, I’ve already been contacted directly by Google’s privacy and maps teams about these issues, and they’ve now implemented the means for users to easily unsubscribe from such notifications via a web page — without the need for installing an app. Other very useful changes related to the issues that I identified are apparently in the pipeline for availability.

My great thanks to the Google teams involved for so rapidly reaching out to me regarding these matters, and especially for the ultra-quick implementation of the web-based Trusted Contacts notifications unsubscribe tool that is now available to desktop users!

–Lauren–

Google Predictably Makes a Confused Jumble Out of New YouTube, Music Offerings

An old saying suggests that the only inevitabilities are death and taxes.  When it comes to Google, there are a couple more that we can add. Google will likely always have an array of often incompatible and overlapping “chat” programs and systems — and their paid video and music offerings will be a maze of twisty passages, all different.

Google hasn’t disappointed in that respect with the manner in which word has gotten out about their latest paid content changes. The one thing that seems clear is that the brand “YouTube Red” is apparently going away. But after that, everything is about as easy to understand as hieroglyphics prior to the discovery of the Rosetta Stone. 

YouTube Premium. YouTube Music. YouTube Music Premium. And what of Google Play Music (for free, purchased, and uploaded music), which Google in their tweets (trying to calm down confused onlookers on Twitter) says is continuing for now?

I tried to figure it all out last night and got a terrible headache that forced me to quit. This morning, it’s all as clear as mud.

There are a couple of things that I’m fairly sure about. At the moment I’m in Google’s “family plan” for $15/month that gives me both YouTube Red and Google Play Music paid services for up to six accounts. I use it mainly for ad-free YouTube viewing and to be able to simultaneously stream different music to different Google Home devices without conflict, from music sources on Play Music and YouTube.

I’ve been led to believe that for existing subscribers of these services under their new names, there are no immediate price changes — though likely that’s coming down the line. It appears that obtaining the same mix of content under Google’s new plans will cost new subscribers more (though they may be able to lock in current prices for a time if they subscribe to the existing plans before the new plans launch reportedly next week).

But how much more will the new services cost going forward? Perhaps the Sphinx could figure it all out. I’ve seen so many different numbers and combinations of services now — not to mention that the future and form of Play Music still seems up in the air — that the only thing seeming certain is uncertainty itself.

I do know that for essentially the same paid mix of video and music content that I receive now from Google, I’d personally probably be willing to pay a wee bit more. But not much more, and not for a more limited set of content; in either of those eventualities, I’d be tempted to drop all of these Google paid content services entirely.

For the moment though, I think that I will sit tight for a bit, and wait for some sort of clarity to hopefully eventually shine its light on this current but predictable Google communications confusion.

Isn’t it nice to have a hobby?

–Lauren–

The Amazing 360 VR “Scoring the Last Jedi” Video


I just watched “Scoring The Last Jedi: a 360 VR Experience” – https://youtu.be/b32Q1nTgHM0 – for the first time, via a Google Daydream VR headset. It’s absolutely stunning, especially if you’re a lifetime fan of film scoring as I am.

Frankly, I was smiling like an idiot through the entire video. Put aside the flight simulators and the games for a moment — if you have the VR hardware to watch this baby (Google Cardboard will work fine too, if your smartphone has a good gyro), it demonstrates — better than anything else I’ve seen so far — what the potential is for VR to transport you almost physically to a different time and place.

Be warned, just watching this on YouTube without a VR headset is like an ant compared with a skyscraper. You really must see it in VR to properly experience this video.

As I’ve said previously, this kind of tech will ultimately either save civilization — or destroy it. It’s really that important.

Trust me on this.

–Lauren–

Warning: Dangerous Fake Emails About Google Privacy Changes

If you use much of anything Google, by now you’ve likely gotten at least one email from Google noting various privacy-related changes. They typically have the Subject:

Improvements to our Privacy Policy and Privacy Controls

and tend to arrive not from the expected simple “google.com” domain, but often from unusual-appearing Google subdomains, for example with addresses like:

privacy-noreply@www3.l.google.com

The notice also includes a bunch of links to various relevant privacy pages and/or systems at Google.

All of this is in advance of the effective date of the European Union’s “GDPR” law. If you’re not familiar with the GDPR, it’s basically the latest hypocritical move by the EU on their relentless march toward dictating the control of personal data globally, and toward furthering their demands to become a global censorship czar, with the ability to demand the deletion of any search engine results around the world that they find inconvenient. Joseph Stalin would heartily approve.

One can assume that Google’s privacy team has been putting in yeoman’s service to meet the EU’s dictatorial demands, and it’s logical that Google decided to make other changes in their privacy ecosystem at the same time, and now is informing users about those changes.

Unfortunately, phishing crooks are apparently already taking advantage of this situation — in particular several aspects of these Google notification emails.

First, the legitimate Google privacy emails going out recently and currently are a veritable flood. It appears that Google is sending these out to virtually every email address ever associated with any Google account since perhaps the dawn of time. I’ve already received approximately 1.3E9 of them. OK, not really that many, but it FEELS like that many.

Some of these are coming in to addresses that I don’t even recognize. This morning one showed up to such a strange address that I had to go digging in my alias databases to figure out what it actually was. It turned out to be so ancient that cobwebs flew out of my screen at me when I accessed its database entry.

Seriously, these are one hell of a lot of emails, and the fact that they may come from somewhat unusual-looking Google subdomains and include links has made them fodder for the crooks.

You can guess what’s happening. Phishing and other criminal types are sending out fraudulent emails that superficially appear to be the same as these legit Google privacy policy notification emails. Of course, some or all of the links in the phishing emails lead not to Google but to various evil traps and personal data stealing tricks.

So please, be extraordinarily careful when you receive what appear to be these privacy notices from Google. With so many real ones going out — with multiples often ending up at the same individual via various redirects and forwarding addresses — it’s easy for fake versions to slip in among the real ones, and clicking on the links in the crooked ones or opening attachments that they include can seriously ruin your day, to say the very least.
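Since the sender’s domain is one of the few signals a recipient can quickly inspect, here’s a minimal Python sketch of that kind of check: accept only google.com itself or a genuine subdomain of it, matching on a dot boundary so that look-alikes such as google.com.evil.net fail. Keep in mind that From headers can be trivially spoofed, so a check like this is only a first filter, not proof of legitimacy (that requires DKIM/SPF/DMARC validation by your mail provider).

```python
from email.utils import parseaddr

def is_google_sender(from_header: str) -> bool:
    """Return True only if the sender's domain is google.com itself
    or a genuine subdomain of it (e.g. www3.l.google.com)."""
    _, address = parseaddr(from_header)
    if "@" not in address:
        return False
    domain = address.rsplit("@", 1)[1].lower()
    # Require an exact match or a suffix match on the dot boundary,
    # so "google.com.evil.net" does NOT pass.
    return domain == "google.com" or domain.endswith(".google.com")

print(is_google_sender("privacy-noreply@www3.l.google.com"))  # True
print(is_google_sender("scam@google.com.evil.net"))           # False
```

Even a sender that passes this test could still be forged, which is exactly why the links and attachments inside such messages deserve independent suspicion.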

Take care, all.

–Lauren–

Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

We find ourselves at a “teachable moment” in the history of Artificial Intelligence — we should not squander this opportunity. The global controversy that erupted over the last few days regarding Google’s AI-based  “Duplex” phone calling system can be viewed as a harbinger of things to come if a holistic approach to AI is not a fundamental design factor from the ground up.

The Duplex controversy should be calmed down at least for the moment. Google has now announced that calls made by Duplex will be identified as such to the called party, exactly what I had urged in: “Calls From Google’s ‘Duplex’ System Should Include Initial Warning Announcements” (https://lauren.vortex.com/2018/05/09/calls-from-googles-duplex-system-should-include-initial-warning-announcements).

While there are some observers asserting that Duplex-type technology should be banned — or required to use a “robotic” sounding voice not easily confused with a human — I consider both of those suggestions to be extreme, unnecessary, and counterproductive. This kind of technology can have a range of positive applications. I am very much a supporter of AI research (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

We want the voices to be as humanlike as possible so that they are as understandable as possible. Full disclosure that the calls are from AI-based Assistants is completely adequate to assuage most related concerns, though how this tech might potentially be abused by users in ways that make the robocalling flood even worse is still an open question.

How did we get here? Basically, while Google’s blog post regarding Duplex made a vague mention of transparency, their demos of the system played at Google I/O appeared to show called parties with absolutely no idea that they were talking to an AI. That’s mightily impressive as a showcase for Google’s AI advances.

But it was also immediately obvious to me, as soon as I heard those demos, that this was going to quickly blow up into a big, public mess that could have been easily avoided, because there was bound to be an emotional, visceral, negative reaction from many observers to hearing a human “manipulated” in that manner by an AI system. It strikes at the very heart of many persons’ fears of “intelligent” machines, ranging from sci-fi fantasies to legitimate real-world concerns.

All Google needed to do to avoid this entire PR nightmare was to announce at I/O that the system would disclose itself to called parties, and play demos that included such disclosures.

Why didn’t Google do this? I don’t have any inside scoop, but I can make some fairly informed guesses.

Google still considers Duplex to be an experiment. That’s a valid point, but it only takes us so far. If you’re only showing the tech internally, that’s one thing. But as soon as you make a public demo, you’ve lit the fuse, so to speak.

Several reporters who called me about this story asked me variations of the same question: “How come Google didn’t realize how much negative reaction there would be to those demos?” 

And my answer is that while Google’s heart is almost always in the right place when it comes to technology, sometimes they get a bit, shall we say, “overenthusiastic,” which can make it more difficult to anticipate the non-engineering aspects of a system and how it will be perceived.

This is not just a Google issue. It’s an endemic problem at many engineering-oriented firms, and as a techie myself I’ve sometimes had to push back against my own overly enthusiastic feelings about new technologies.

Over the years, I’ve been in the position numerous times of trying to point out these kinds of problems, on the Net and elsewhere. It’s a tough sell. We engineering types can be tightly focused on solving engineering problems, and we don’t like being told that perhaps we should broaden our focus a bit. And then there’s the classic Silicon Valley attitude of pushing ahead whenever possible and just apologizing later when things go wrong.

I’m concerned that these are no longer viable strategies — or at least that they’re ever more risky ones in our toxic political environment, where regulators and politicians increasingly see reason and technology as enemies to attack for their own political gains.

Firms like Google and so many others should be subjecting their product launches to more than just engineering and privacy reviews. We’ve reached a stage where some sort of what we might call “ethical” reviews are needed as well and just as routinely. There are many different ways that these could be accomplished, and I won’t detail them here right now.

But without these kinds of formal review mechanisms to consider the ethical impacts of these technologies, the probability of public and political blowback against both potentially very beneficial AI and other socially positive tech projects will be dramatically increasing.

–Lauren–

Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

UPDATE: (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 10, 2018): Reacting to the controversies surrounding Duplex, Google today announced that the system will identify itself to called parties. I applaud this decision.

– – –

I’ve already received many reactions to my posting yesterday of “Google’s New ‘Duplex’ Voice System Creates Some Troublesome Questions” (https://lauren.vortex.com/2018/05/08/googles-new-duplex-voice-system-creates-some-troublesome-questions) — discussing Google’s new system capable of conducting telephone conversations that may be indistinguishable from human callers. This is fascinating work on Google’s part and a potentially very positive use of their AI assets and expertise, but as I noted also carries significant risks for confusion and potential abuse.

With no exceptions so far, the sense of these reactions has confirmed what I suspected — that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line.  To use the vernacular: “Don’t try to con me, bro!”

Luckily, there’s a relatively simple way to fix this problem at this early stage — well before it becomes a big issue impacting many lives.

I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google’s Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being — the exact wording of that announcement to be determined.

I would also assert that any similar systems from other firms that could easily be confused with human callers should provide similar start-of-call announcements.

I’d enormously prefer that all affected firms agree to provide these warnings voluntarily, but would support carefully drafted legislation or other regulatory actions to require their use if a voluntary approach proves to be insufficient.

Getting ahead of the potential pitfalls in this technology is crucial to help assure that these systems are widely accepted and able to achieve their intended beneficial results, rather than turning into a painful policy flashpoint down the line.

–Lauren–

Google’s New “Duplex” Voice System Creates Some Troublesome Questions

UPDATE: (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 9, 2018): Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

– – –

Google today demonstrated a project of theirs under development called “Duplex” — which permits their Google Assistant infrastructure to conduct unassisted two-way voice phone calls for tasks like making appointments. In their demos, employees at the firms being called apparently had no idea that they were talking to a machine. Very impressive, indeed.

But though I’m personally a fan of Assistant — I have several Assistant “Home” devices myself, plus Assistant on my phones — something about those demos today made me immediately uneasy. When I mentioned this on my mailing lists during the day, I was surprised by how many people responded with variations of “I’d be upset if I was conned into thinking that a real person was calling me when it was really a computer.”

And yeah, it would bug me too. I’m not completely sure why. I suspect it’s an aspect of human nature — and Google tends to occasionally have something of a blind spot in that respect. My guess is that most people don’t much mind talking to machines so long as they know that they’re talking to machines. But Duplex even throws in natural sounding “uh” utterances and such — technically perhaps sometimes to cover processing delays, but there’s no denying that humans would tend to perceive these as direct attempts to convince them that a human was at the other end of the line.

There was something else. I knew that I had seen something like this before, in an old sci-fi movie. I couldn’t find references on Google that matched, so I had to use my own unaided brain for the lookup. I finally got it.

In the 1977 film “Demon Seed” a woman is held captive in her own home by an advanced AI system created by her estranged husband, and in control of all the smart windows, doors, and other appliances of the house. Her husband isn’t present. He has no idea that this is happening. At various points in the plot, this system makes video phone calls and interacts with persons at the front door via an intercom, in all cases convincing them that they were actually speaking with the woman herself and that all was well.

Leaving aside the sci-fi melodrama, there are some interesting questions here. Is it ethical for computerized systems to interact with humans in a manner that pretends to be human? Even if it’s ethical, is it desirable? What are the possible blowbacks and undesirable possibilities?

Perhaps we can ask the same sort of question traditionally asked of undercover cops: “Are you a police officer?” In our case, we wonder how the system would respond if the called party asked Duplex “Are you a human being?”

Of course it’s relatively early in the development of this tech. But perhaps it’s not too early to begin thinking about these questions. Google itself suggested that an appropriate level of “transparency” would be needed for these systems, and I wholeheartedly agree. But what does that actually mean? Perhaps an announcement at the start of each call informing the called party that they are talking to an automated system? The various implications are decidedly nontrivial.

And we must also be thinking about how such systems could be purposely misused. Obviously not in the dramatic manner of that film I mentioned above, but in other ways that involve scams and frauds of various kinds, perhaps as a sort of evil evolution of the current robocalling scourge.

Yet technology itself is never actually good or evil — it’s how we choose to use tech that sets these parameters. And there are all sorts of ways that Duplex could do a lot of good — if it is deployed in manners that help people without trying to fool them, and if it has sufficient safeguards to minimize the risks of abusive applications.

Google has their work cut out for them on this one, because Duplex is a quintessential example of an AI system where getting it working is only half the battle. The other half is assuring to the greatest degree possible that it’s only used for good purposes, and not in evil ways.

–Lauren–

Phishing Risks in Gmail’s New “Confidential Mode”


In my previous post — “New Gmail Features That May Be ILLEGAL for You to Use!” (https://lauren.vortex.com/2018/05/05/new-gmail-features-that-may-be-illegal-for-you-to-use), I noted that I’m very pleased overall with Google’s new Gmail, but also explained how using specific features of new Gmail — in particular “confidential mode” — may have negative legal implications for some users, especially those with legally-mandated data retention requirements.

But there’s another potential issue with “confidential mode” that could be a much broader risk to many more persons — the possibility that this mode will be leveraged by crooks for a new wave of phishing emails designed to spread malware and steal personal information.

The problem arises because non-Gmail users cannot directly receive Gmail confidential mode messages. Instead, as reported in the article that I linked to in my previous post, when a Gmail user wants to send such a message to a non-Gmail user, the non-Gmail user is sent a link that, when clicked, takes them to Google’s servers, where they can read the confidential mode message in their browser.

The potential risks for any service that operates in this way are obvious.

Those of us working on Internet security and privacy have literally spent many years attempting to train users to avoid clicking on “to read the message, click here” links in emails that they receive. Criminals have simply become too adept at creating fraudulent emails that lead to phishing and malware sites.

Our efforts have had only limited success. Millions of busy users click on such fake links even in messages that are not particularly “well crafted” or that seem “obviously” fake to those of us who have long been dealing with such issues.

Any email system that depends on users clicking on emailed links to access remotely hosted email messages is likely to be subject to such criminal abuses. No matter how much diligence Google employs to try to avoid users being fooled by phishing messages masquerading as links to Gmail confidential mode messages, we can be reasonably sure that this could ultimately represent a significant new attack vector for phishing operations, one that will successfully ensnare large numbers of Internet users over time.
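To make the risk concrete, here’s a minimal Python sketch of the kind of check a cautious user (or mail client) would need to perform before following such a “click to read your message” link: require HTTPS and an exact hostname match against a known-good list. The hostnames in TRUSTED_HOSTS below are purely illustrative assumptions on my part, not a claim about where Gmail confidential mode links actually point.

```python
from urllib.parse import urlparse

# Illustrative assumption: hosts we would be willing to trust for
# "click to read your message" links. NOT an authoritative list.
TRUSTED_HOSTS = {"mail.google.com", "accounts.google.com"}

def link_is_trusted(url: str) -> bool:
    """Check scheme and exact hostname before following an emailed link."""
    parts = urlparse(url)
    if parts.scheme != "https":
        return False  # never follow plain-HTTP message links
    # Exact hostname match defeats look-alikes such as
    # "mail.google.com.phish.example".
    return parts.hostname in TRUSTED_HOSTS

print(link_is_trusted("https://mail.google.com/some/message"))       # True
print(link_is_trusted("http://mail.google.com/some/message"))        # False
print(link_is_trusted("https://mail.google.com.phish.example/msg"))  # False
```

Of course, the whole problem is that ordinary users will never perform a check like this, which is precisely why training them to click emailed message links is so dangerous.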

Given the relatively poor success that educational and message authentication efforts have had to date in these respects, there are no obvious solutions immediately apparent if Google operates their Gmail confidential mode in the manner described.

–Lauren–