Google Security’s User Confusion Continues

As I’ve noted many times, Google has world-class security and privacy teams. Great people.

But at least judging from the Google-related queries I get in my inbox every day, Google’s expanding efforts to warn users about perceived security issues are sowing increasing confusion and in some cases serious concerns, especially among nontechnical users who depend upon Google’s products and services in their daily lives.

A new example popped up today that I’ll get to in a moment, but I’ve been discussing these issues for quite a while, e.g.:

“When Google’s Chrome Security Warnings Can Do More Harm Than Good” –


“Here’s Where Google Hid the SSL Certificate Information That You May Need” –

In a nutshell, Google’s continuing efforts at increasing user security — while utterly justifiable at the technical level — continue to marginalize many users who don’t really understand what Google is doing, are confused by Google’s security and other warnings, can’t effectively influence websites with “poor” security to make security improvements, and have no alternatives to accessing those sites in any case.

These are real people — I believe many millions of them — and I do not believe that Google really understands how important they are and how Google is leaving them behind.

Today brought yet another illustrative example that yes, even confused me for a time.

It involves cat food.

A friend forwarded me an email from PetSmart that included a link for an individualized 30% off coupon that they intended to use to buy cat food. That’s a damned good coupon, especially for those of us who aren’t rolling in dough. I wish I had a coupon like that today for Leela the Siamese Snowshoe.

The concern with this email was that every time the user clicked on the link in Gmail to access the site where the coupon could be printed, Gmail popped a modal security warning:

“Suspicious link – This link leads to an untrusted site. Are you sure you want to proceed to click”

You can see a screenshot at the bottom of this post.

The obvious questions: What the hell does “suspicious link” mean in this context? And what exactly does Google mean by “untrusted site”?

There are no links to explanations, and if you Google around you can find lots of people asking similar questions about this class of Gmail warning, but no definitive answers, just lots of (mostly uninformed) speculation.

So I spent about 15 minutes digging into this one. Is it a phishing domain targeting PetSmart users? Apparently not. The domain is registered to ExactTarget, Inc. and has been registered since 2012. So while there’s no obvious authoritative mention of PetSmart there, my experience leads me to believe that this is most likely a legitimate marketing partner of PetSmart, providing those emails and coupon services.

Of course, I still have no information about why Google is tagging them as suspicious. Is it the lack of https: security on the URL? Is it some aspect of their email-petsmart naming scheme?

Damned if I know. Google isn’t telling me. And how would the average non-techie be expected to unravel any of this?
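For the technically inclined, a couple of the surface-level checks I ran through can be automated as a rough first pass. This is only a toy sketch using Python’s standard library, and the URL shown is a placeholder of my own invention, not the actual coupon link:

```python
from urllib.parse import urlparse

def basic_link_checks(url):
    """Very rough first-pass checks on a link.

    This is no substitute for actually vetting a domain (whois, etc.),
    and it certainly isn't how Gmail decides what's "suspicious" --
    Google isn't telling us that.
    """
    parsed = urlparse(url)
    findings = []
    # No HTTPS is one plausible reason a link might get flagged.
    if parsed.scheme != "https":
        findings.append("not served over HTTPS")
    # Long, hyphen-heavy hostnames are a common (weak) phishing signal.
    host = parsed.hostname or ""
    if host.count(".") >= 3 or host.count("-") >= 2:
        findings.append("unusually complex hostname")
    return findings

# Placeholder URL for illustration only -- not the real coupon link.
print(basic_link_checks("http://email-example.marketing-partner.example/coupon"))
```

Of course, even this trivial exercise is beyond what we can reasonably expect of nontechnical users, which is exactly the point.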

I told the user to go ahead and click the link. They got their coupon. Their kitties should be happy.

I’m not happy.

In the real world, most users don’t understand this stuff at the level they need to make truly informed decisions. So they’re forced — simply to get on with their lives every day — to click through such warnings blindly, to get to where they need to go.

And make no mistake about it, these kinds of scenarios are teaching these users absolutely abysmal security habits.

Google is terrific at tech. But Google is still struggling when it comes to understanding the broad range of their users and those users’ needs — particularly the non-techies — and especially how to communicate with those users effectively.

Google can do much better.


Fighting Government Crippled Encryption by Turning It Off Entirely!

Within hours of the terrible terrorist attack in Manchester earlier this week, UK politicians were already using the tragedy as a springboard to push their demands that Internet firms cripple their encryption systems and deploy a range of other Orwellian measures that would vastly weaken the privacy and security of honest citizens — while handing terrorists and other criminals the keys to our private lives, including our financial and other personal information.

This same thuggish mindset is taking root in other parts of the world, often combined with hypocritical “data localization” requirements designed to make individual nations’ citizens as vulnerable as possible to domestic surveillance operations.

There are basically four ways in which firms can react to these draconian government demands.

They could simply comply, putting their users at enormous and escalating risk, not only from government abuse but also from criminals who would exploit the resulting weak encryption environments (while using “unapproved” strong encryption to protect their own criminal activities). We could expect some firms to go this route in an effort to protect their financial bottom lines, but from an ethical and user trust standpoint this choice is devastating.

Firms could refuse to comply. Litigation might delay the required implementation of crippled encryption, or change some of its parameters. But in the final analysis, these firms must obey national laws where they operate, or else face dramatic fines and other serious sanctions. Not all firms would have the financial ability to engage in this kind of battle — especially given the very long odds of success.

Of course, firms could indeed choose to withdraw from such markets, perhaps in conjunction with geoblocking of domestic users in those countries to meet government prohibitions against strong encryption. Pretty awful prospects.

There is another possibility though — that I’ll admit up front would be highly controversial. Rather than crippling those designated encryption systems in those countries under government orders, firms could choose to disable those encryption systems entirely!

I know that this sounds counterintuitive, but please hang with me for a few minutes!

In this context we’re talking mainly about social media systems where (at least currently) there are no government requirements that messages and postings be encrypted at all. For example, we’re not speaking here of financial or medical sites that might routinely have their own encryption requirements mandated by law (and frankly, where governments usually already have ways of getting at that data).

What governments want now is the ability to spy on our personal Internet communications, in much the same manner as they’ve long spied on traditional telephone voice communications.

An axiom of encryption is that in most situations, weak encryption can be much worse for users than no encryption at all! This may seem paradoxical, but think about it. If you know that you don’t have any encryption at all, you’re far more likely to take care in what you’re transmitting through those channels, since you know that they’re vulnerable to spying. If you believe that you’re protected by encryption, you’re more likely to speak freely.

But the worst case is if you believe that you’re protected by encryption but you really aren’t, because the encryption system is purposely weak and crippled. Users in this situation tend to keep communicating as if they were well protected, when in reality they are highly vulnerable.

Perhaps worse, this state of affairs permits governments to give lip service to the claim that they favor encryption — when in reality the crippled encryption that they permit is a horrific security and privacy farce.

So here’s the concept. If governments demand weak encryption, associated legal battles have ended, and firms still want to serve users in the affected countries, then those firms should consider entirely disabling message/posting encryption on those social media platforms in the context of those countries — and do so as visibly and loudly as possible.

This could get complicated quickly when considering messages/posts that involve multiple countries with and without encryption restrictions, but basically whenever user activities would involve nations with those restrictions, there should be warnings, banners, perhaps even some obnoxious modal pop-ups — to warn everyone involved that these communications are not encrypted — and to very clearly explain that this is the result of government actions against their own citizens. 
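The core of that routing logic is simple enough to sketch. Here’s a toy illustration, where the “restricted” country codes are placeholders of my own, not any real government’s list:

```python
# Toy sketch of the warning logic described above.
# "XA" and "XB" are placeholder codes for illustration, not real countries.
RESTRICTED = {"XA", "XB"}

def conversation_warning(participant_countries):
    """If any participant is in a country that has banned strong
    encryption, the whole conversation runs unencrypted -- so everyone
    involved gets warned, and told why."""
    affected = sorted(set(participant_countries) & RESTRICTED)
    if affected:
        return ("WARNING: messages in this conversation are NOT "
                "encrypted, due to government restrictions in: "
                + ", ".join(affected))
    return None  # all participants in unrestricted countries

print(conversation_warning(["US", "XA", "DE"]))
```

The key design point is that the warning fires for everyone in the conversation, not just the users inside the restricted countries.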

Don’t let governments play fast and loose with this. Make sure that users in those countries — and users in other countries that communicate with them — are constantly reminded of what these governments have done to their own citizens.

Also, strong third-party encryption systems not under government controls would continue to be available, and efforts to make these integrate more easily with the large social media firms’ platforms should accelerate.

This is all nontrivial to accomplish and there are a variety of variations on the basic concept. But the goal should be to make it as difficult as possible for governments to mandate crippled encryption and then hypocritically encourage their citizens to keep communicating on these systems as if nothing whatever had changed.

We all want to fight terrorism. But government mandates weakening encryption are fundamentally flawed: over time they will not prevent evildoers from using strong encryption, but they will put all law-abiding citizens at enormous risk.

We must resist government efforts to paint crippled encryption targets on the backs of our loved ones, our broader societies, and ourselves.


Is Google’s New “Store Sales Measurement” System a Privacy Risk?

Within hours of Google announcing their new “Store Sales Measurement” system, my inbox began filling with concerned queries. I held off responding on this until I could get additional information directly from Google. With that now in hand I feel comfortable in addressing this issue.

Executive Summary: I don’t see any realistic privacy problems with this new Google system.

In a nutshell, this program — similar in some respects to a program that Facebook has been operating for some time — provides data to advertisers that helps them determine the efficacy of their ads displayed via Google when purchases are not made online.

The crux of the problem is that an advertiser can usually determine when there are clicks on ads that ultimately convert to online purchases via those ads. But if ads are clicked and then purchases are made in stores, that information is routinely lost.

Our perception of advertising has always been complex — to call it love/hate would be a gross understatement. But the reality is that all of this stuff we use online has to be paid for somehow, even though we’ve come to expect most of it to be free of direct charges.

And with the rise of ad blockers, advertisers are more concerned than ever that their ads are relevant and effective (and all else being equal, studies show that most of us prefer relevant ads to random ones).

Making this even more complicated is that the whole area of ad personalization is rife with misconceptions.

For example, the utterly false belief that Google sells the personal information of their users to advertisers continues to be widespread. But in fact, Google ad personalization takes place without providing any personal data to advertisers at all, and Google gives users essentially complete control over ad personalization (including the ability to disable it completely), via their comprehensive settings at:

Google’s new Store Sales Measurement system operates without Google obtaining individual users’ personal purchasing data. The system is double-blind and deals only with aggregated information about the value of total purchases. Google doesn’t learn who made a purchase, what was purchased, or the individual purchase prices. 
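To make the aggregation point concrete, here’s a toy sketch of what “aggregate-only” reporting looks like in principle. The function names and the minimum-cohort threshold are my own illustration, not Google’s actual mechanism:

```python
def report_store_sales(matched_purchase_amounts, min_cohort=50):
    """Toy aggregate-only report: the advertiser learns how many matched
    buyers there were and their combined spend -- never who bought,
    what was bought, or any individual price.

    A minimum cohort size (a common privacy safeguard; the threshold
    here is invented for illustration) prevents reports so small that
    individuals could be singled out."""
    if len(matched_purchase_amounts) < min_cohort:
        return None  # too few matches to report safely
    return {
        "matched_buyers": len(matched_purchase_amounts),
        "total_value": round(sum(matched_purchase_amounts), 2),
    }

# Each number is one already-anonymized purchase amount.
print(report_store_sales([19.99] * 60))
```

The essential property is that individual rows never leave the aggregation step; only the totals do.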

Even though this system doesn’t involve sharing of individual users’ personal data, an obvious question I’ve been asked many times over the last couple of days is: “Where did I give permission for my purchase data to be involved in a program like this at all, even if it’s only in aggregated and unidentified forms?”

Frankly, that’s a question for the bank or other financial institution that issues your credit or debit card — they’re the ones that have written their own foundational privacy policies. 

But my sense is that Google has bent over backwards to deploy their new system with additional layers of user privacy protections that go far beyond the typical privacy policies of those institutions themselves.

My bottom line on all this is that, yeah, I understand why many people are feeling a bit nervous about this kind of system. But in the real world, we still need advertising to keep the Web going, and when a firm has jumped through the hoops that Google has to increase the value of its advertising without negatively impacting user privacy in the process, I really don’t have any privacy or other associated concerns.

I only wish that all firms showed this degree of diligence.

Don’t hold your breath waiting for that.