Fighting Government-Crippled Encryption by Turning It Off Entirely!

Within hours of the terrible terrorist attack in Manchester earlier this week, UK politicians were already using the tragedy as a springboard to push their demands that Internet firms cripple their encryption systems and deploy a range of other Orwellian measures that would vastly weaken the privacy and security of honest citizens — while handing terrorists and other criminals the keys to our private lives, including our financial and other personal information.

This same thuggish mindset is taking root in other parts of the world, often combined with hypocritical “data localization” requirements designed to make individual nations’ citizens as vulnerable as possible to domestic surveillance operations.

There are basically four ways in which firms can react to these draconian government demands.

They could simply comply, putting their users at enormous and escalating risk, not only from government abuse but also from criminals who would exploit the resulting weak encryption environments (while using “unapproved” strong encryption to protect their own criminal activities). We could expect some firms to go this route in an effort to protect their financial bottom lines, but from an ethical and user trust standpoint this choice is devastating.

Firms could refuse to comply. Litigation might delay the required implementation of crippled encryption, or change some of its parameters. But in the final analysis, these firms must obey national laws where they operate, or else face dramatic fines and other serious sanctions. Not all firms would have the financial ability to engage in this kind of battle — especially given the very long odds of success.

Of course, firms could indeed choose to withdraw from such markets, perhaps in conjunction with geoblocking of domestic users in those countries to meet government prohibitions against strong encryption. Pretty awful prospects.

There is another possibility though, one that I’ll admit up front would be highly controversial. Rather than crippling the designated encryption systems in those countries under government orders, firms could choose to disable those systems entirely!

I know that this sounds counterintuitive, but please hang with me for a few minutes!

In this context we’re talking mainly about social media systems where (at least currently) there are no government requirements that messages and postings be encrypted at all. For example, we’re not speaking here of financial or medical sites that might routinely have their own encryption requirements mandated by law (and frankly, where governments usually already have ways of getting at that data).

What governments want now is the ability to spy on our personal Internet communications, in much the same manner as they’ve long spied on traditional telephone voice communications.

An axiom of encryption is that in most situations, weak encryption can be much worse for users than no encryption at all! This may seem paradoxical, but think about it. If you know that you don’t have any encryption at all, you’re far more likely to take care in what you’re transmitting through those channels, since you know that they’re vulnerable to spying. If you believe that you’re protected by encryption, you’re more likely to speak freely.

But the worst case is if you believe that you’re protected by encryption but you really aren’t, because the encryption system is purposely weak and crippled. Users in this situation tend to keep communicating as if they were well protected, when in reality they are highly vulnerable.

Perhaps worse, this state of affairs permits governments to give lip service to the claim that they favor encryption — when in reality the crippled encryption that they permit is a horrific security and privacy farce.

So here’s the concept. If governments demand weak encryption, associated legal battles have ended, and firms still want to serve users in the affected countries, then those firms should consider entirely disabling message/posting encryption on those social media platforms in the context of those countries — and do so as visibly and loudly as possible.

This could get complicated quickly for messages and posts that involve multiple countries with and without encryption restrictions. But basically, whenever user activities would involve nations with those restrictions, there should be warnings, banners, perhaps even some obnoxious modal pop-ups, both to warn everyone involved that these communications are not encrypted and to very clearly explain that this is the result of government actions against their own citizens.
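To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of the sort of decision logic that might be involved. The RESTRICTED set, the placeholder country codes, and the plan_conversation helper are all invented for illustration; no actual platform’s API is implied:

    # Hypothetical sketch: decide when to disable encryption and warn users.
    # RESTRICTED is an assumed set of countries mandating crippled encryption.

    RESTRICTED = {"XX", "YY"}  # placeholder country codes, purely illustrative

    def plan_conversation(participant_countries):
        """Given the countries of all participants in a thread, return
        whether encryption must be disabled and what warning to display."""
        affected = set(participant_countries) & RESTRICTED
        if not affected:
            return {"encrypted": True, "warning": None}
        return {
            "encrypted": False,  # disable entirely rather than ship weak crypto
            "warning": (
                "WARNING: This conversation is NOT encrypted, because the "
                f"government(s) of {', '.join(sorted(affected))} prohibit "
                "strong encryption. Communicate accordingly."
            ),
        }

    print(plan_conversation(["US", "XX"]))  # unencrypted, with a loud warning

Note that in this sketch the warning fires for everyone in the thread, not just for users inside the affected countries, which is exactly the point.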

Don’t let governments play fast and loose with this. Make sure that users in those countries — and users in other countries that communicate with them — are constantly reminded of what these governments have done to their own citizens.

Also, strong third-party encryption systems not under government control would continue to be available, and efforts to make these integrate more easily with the large social media firms’ platforms should accelerate.

This is all nontrivial to accomplish, and there are many possible variations on the basic concept. But the goal should be to make it as difficult as possible for governments to mandate crippled encryption and then hypocritically encourage their citizens to keep communicating on these systems as if nothing whatever had changed.

We all want to fight terrorism. But government mandates weakening encryption are fundamentally flawed: over time they will not prevent evildoers from using strong encryption, but they will put all law-abiding citizens at enormous risk.

We must resist government efforts to paint crippled encryption targets on the backs of our loved ones, our broader societies, and ourselves.

–Lauren–

Is Google’s New “Store Sales Measurement” System a Privacy Risk?

Within hours of Google announcing their new “Store Sales Measurement” system, my inbox began filling with concerned queries. I held off responding on this until I could get additional information directly from Google. With that now in hand I feel comfortable in addressing this issue.

Executive Summary: I don’t see any realistic privacy problems with this new Google system.

In a nutshell, this program — similar in some respects to a program that Facebook has been operating for some time — provides data to advertisers that helps them determine the efficacy of their ads displayed via Google when purchases are not made online.

The crux of the problem is that an advertiser can usually determine when clicks on its ads ultimately convert to online purchases. But if ads are clicked and purchases are then made in physical stores, that connection is routinely lost.

Our perception of advertising has always been complex — to call it love/hate would be a gross understatement. But the reality is that all of this stuff we use online has to be paid for somehow, even though we’ve come to expect most of it to be free of direct charges.

And with the rise of ad blockers, advertisers are more concerned than ever that their ads are relevant and effective (and all else being equal, studies show that most of us prefer relevant ads to random ones).

Making this even more complicated is that the whole area of ad personalization is rife with misconceptions.

For example, the utterly false belief that Google sells the personal information of their users to advertisers continues to be widespread. But in fact, Google ad personalization takes place without providing any personal data to advertisers at all, and Google gives users essentially complete control over ad personalization (including the ability to disable it completely), via their comprehensive settings at:

https://www.google.com/settings/ads

Google’s new Store Sales Measurement system operates without Google obtaining individual users’ personal purchasing data. The system is double-blind and deals only with aggregated information about the value of total purchases. Google doesn’t learn who made a purchase, what was purchased, or the individual purchase prices. 
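For readers wondering how matching can work at all without exposing individuals, here is a deliberately simplified toy sketch in Python. The sample identifiers and purchase amounts are invented, and simple hashing alone would not suffice in practice (hashes of guessable identifiers like email addresses can be reversed), so treat this only as an illustration that the matching step can emit a single aggregate total rather than anyone’s individual purchases. Real deployments reportedly rely on much stronger cryptographic protections:

    # Toy sketch of aggregate-only matching; illustrative, not a real protocol.
    import hashlib

    def h(email):
        """One-way hash so neither side exchanges raw identifiers."""
        return hashlib.sha256(email.strip().lower().encode()).hexdigest()

    # Advertiser side: hashed identifiers tied to ad clicks (invented data).
    ad_clicks = {h("alice@example.com"), h("bob@example.com")}

    # Payment side: hashed purchaser identifiers and purchase totals (invented).
    store_purchases = [
        (h("alice@example.com"), 42.50),
        (h("carol@example.com"), 10.00),
    ]

    # Only the aggregate value of matched purchases is reported onward; no
    # identities, items, or per-purchase prices leave the matching step.
    matched_total = sum(amt for hid, amt in store_purchases if hid in ad_clicks)
    print(f"Aggregate in-store conversion value: ${matched_total:.2f}")

The essential property is that the advertiser ends up with one number — the total value of in-store purchases attributable to its ads — and nothing about who made them.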

Even though this system doesn’t involve sharing of individual users’ personal data, an obvious question I’ve been asked many times over the last couple of days is: “Where did I give permission for my purchase data to be involved in a program like this at all, even if it’s only in aggregated and unidentified forms?”

Frankly, that’s a question for the bank or other financial institution that issues your credit or debit card — they’re the ones that have written their own foundational privacy policies. 

But my sense is that Google has bent over backwards to deploy their new system with additional layers of user privacy protections that go far beyond the typical privacy policies of those institutions themselves.

My bottom line on all this is that, yeah, I understand why many persons are feeling a bit nervous about this kind of system. But in the real world, we still need advertising to keep the Web going, and when a firm has jumped through as many hoops as Google has done here to increase the value of their advertising without negatively impacting user privacy in the process, I really don’t have any privacy or other associated concerns.

I only wish that all firms showed this degree of diligence.

Don’t hold your breath waiting for that.

–Lauren–