As I’ve noted many times, Google has world-class security and privacy teams. Great people.
But at least judging from the Google-related queries I get in my inbox every day, Google’s expanding efforts to warn users about perceived security issues are sowing increasing confusion and in some cases serious concerns, especially among nontechnical users who depend upon Google’s products and services in their daily lives.
A new example popped up today that I’ll get to in a moment, but I’ve been discussing these issues for quite a while, e.g.:
“When Google’s Chrome Security Warnings Can Do More Harm Than Good” –https://lauren.vortex.com/archive/001157.html
“Here’s Where Google Hid the SSL Certificate Information That You May Need” –
In a nutshell, Google’s continuing efforts at increasing user security — while utterly justifiable at the technical level — continue to marginalize many users who don’t really understand what Google is doing, are confused by Google’s security and other warnings, can’t effectively influence websites with “poor” security to make security improvements, and have no alternatives to accessing those sites in any case.
These are real people — I believe many millions of them — and I do not believe that Google really understands how important they are and how Google is leaving them behind.
Today brought yet another illustrative example that yes, even confused me for a time.
It involves cat food.
A friend forwarded me an email from PetSmart that included a link for an individualized 30% off coupon that they intended to use to buy cat food. That’s a damned good coupon, especially for those of us who aren’t rolling in dough. I wish I had a coupon like that today for Leela the Siamese Snowshoe.
The concern with this email was that every time the user clicked on the link in Gmail to access the site where the coupon could be printed, Gmail popped a modal security warning:
“Suspicious link – This link leads to an untrusted site. Are you sure you want to proceed to click email-petsmart.com?”
You can see a screenshot at the bottom of this post.
The obvious questions: What the hell does “suspicious link” mean in this context? What does Google mean by “untrusted site” in this scope?
There are no links to explanations, and if you Google around you can find lots of people asking similar questions about this class of Gmail warning, but no definitive answers, just lots of (mostly uninformed) speculation.
So I spent about 15 minutes digging this one down. Is email-petsmart.com a phishing domain targeting PetSmart users? Apparently not. It’s registered to ExactTarget, Inc. and has been registered since 2012. So while there’s no obvious authoritative mention of PetSmart there, my experience leads me to believe that they’re most likely a legit marketing partner of PetSmart, providing those emails and coupon services.
Of course, I still have no information about why Google is tagging them as suspicious. Is it the lack of https: security on the URL? Is it some aspect of their email-petsmart naming schema?
Damned if I know. Google isn’t telling me. And how would the average non-techie be expected to unravel any of this?
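For the technically inclined, here is a rough sketch of the kind of heuristics a mail scanner might apply to a link like this one. To be clear, this is purely my speculation and not Google's actual logic; the function name and the specific checks are hypothetical:

```python
from urllib.parse import urlparse

def link_red_flags(url, brand_domain):
    """Hypothetical heuristics of the sort a mail scanner might apply."""
    flags = []
    parts = urlparse(url)
    if parts.scheme != "https":
        flags.append("no TLS (plain http)")
    host = parts.hostname or ""
    brand = brand_domain.split(".")[0]  # e.g. "petsmart"
    # Brand name present, but on an entirely different registrable domain
    if brand in host and host != brand_domain and not host.endswith("." + brand_domain):
        flags.append("brand name on a third-party domain")
    return flags

print(link_red_flags("http://email-petsmart.com/coupon", "petsmart.com"))
# → ['no TLS (plain http)', 'brand name on a third-party domain']
```

Note that both heuristics fire on a perfectly legitimate marketing-partner domain like this one, which is exactly the problem: the machine sees "suspicious," while the human just sees a coupon.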
I told the user to go ahead and click the link. They got their coupon. Their kitties should be happy.
I’m not happy.
In the real world, most users don’t understand this stuff at the level they need to make truly informed decisions. So they’re forced — simply to get on with their lives every day — to click through such warnings blindly, to get to where they need to go.
And make no mistake about it, these kinds of scenarios are teaching these users absolutely abysmal security habits.
Google is terrific at tech. But Google is still struggling when it comes to understanding the broad range of their users and those users’ needs — particularly the non-techies — and especially how to communicate with those users effectively.
Google can do much better.
Within hours of the terrible terrorist attack in Manchester earlier this week, UK politicians were already using the tragedy as a springboard to push their demands that Internet firms cripple their encryption systems and deploy a range of other Orwellian measures that would vastly weaken the privacy and security of honest citizens — while handing terrorists and other criminals the keys to our private lives, including our financial and other personal information.
This same thuggish mindset is taking root in other parts of the world, often combined with hypocritical “data localization” requirements designed to make individual nations’ citizens as vulnerable as possible to domestic surveillance operations.
There are basically four ways in which firms can react to these draconian government demands.
They could simply comply, putting their users at enormous and escalating risk, not only from government abuse but also from criminals who would exploit the resulting weak encryption environments (while using “unapproved” strong encryption to protect their own criminal activities). We could expect some firms to go this route in an effort to protect their financial bottom lines, but from an ethical and user trust standpoint this choice is devastating.
Firms could refuse to comply. Litigation might delay the required implementation of crippled encryption, or change some of its parameters. But in the final analysis, these firms must obey national laws where they operate, or else face dramatic fines and other serious sanctions. Not all firms would have the financial ability to engage in this kind of battle — especially given the very long odds of success.
Of course, firms could indeed choose to withdraw from such markets, perhaps in conjunction with geoblocking of domestic users in those countries to meet government prohibitions against strong encryption. Pretty awful prospects.
There is another possibility though — one that I’ll admit up front would be highly controversial. Rather than crippling encryption systems in those countries under government orders, firms could choose to disable those encryption systems entirely!
I know that this sounds counterintuitive, but please hang with me for a few minutes!
In this context we’re talking mainly about social media systems where (at least currently) there are no government requirements that messages and postings be encrypted at all. For example, we’re not speaking here of financial or medical sites that might routinely have their own encryption requirements mandated by law (and frankly, where governments usually already have ways of getting at that data).
What governments want now is the ability to spy on our personal Internet communications, in much the same manner as they’ve long spied on traditional telephone voice communications.
An axiom of encryption is that in most situations, weak encryption can be much worse for users than no encryption at all! This may seem paradoxical, but think about it. If you know that you don’t have any encryption at all, you’re far more likely to take care in what you’re transmitting through those channels, since you know that they’re vulnerable to spying. If you believe that you’re protected by encryption, you’re more likely to speak freely.
But the worst case is if you believe that you’re protected by encryption but you really aren’t, because the encryption system is purposely weak and crippled. Users in this situation tend to keep communicating as if they were well protected, when in reality they are highly vulnerable.
Perhaps worse, this state of affairs permits governments to give lip service to the claim that they favor encryption — when in reality the crippled encryption that they permit is a horrific security and privacy farce.
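A toy example makes the point concrete. Suppose a government-approved cipher were reduced to something trivially breakable, say a single-byte XOR key (a deliberately absurd stand-in for illustration, not any real mandated scheme):

```python
def weak_encrypt(plaintext, key):
    # A deliberately crippled "cipher": a one-byte XOR key (only 256 possibilities)
    return bytes(b ^ key for b in plaintext)

ct = weak_encrypt(b"meet at the usual place", key=0x5A)

# An eavesdropper needs no key at all -- just brute force all 256 candidates
candidates = [bytes(b ^ k for b in ct) for k in range(256)]
recovered = next(c for c in candidates if c.startswith(b"meet"))
print(recovered.decode())  # → meet at the usual place
```

Anyone eavesdropping recovers the message almost instantly, while the users on each end may still believe they were “protected.”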
So here’s the concept. If governments demand weak encryption, associated legal battles have ended, and firms still want to serve users in the affected countries, then those firms should consider entirely disabling message/posting encryption on those social media platforms in the context of those countries — and do so as visibly and loudly as possible.
This could get complicated quickly when considering messages/posts that involve multiple countries with and without encryption restrictions, but basically whenever user activities would involve nations with those restrictions, there should be warnings, banners, perhaps even some obnoxious modal pop-ups — to warn everyone involved that these communications are not encrypted — and to very clearly explain that this is the result of government actions against their own citizens.
Don’t let governments play fast and loose with this. Make sure that users in those countries — and users in other countries that communicate with them — are constantly reminded of what these governments have done to their own citizens.
Also, strong third-party encryption systems not under government controls would continue to be available, and efforts to make these integrate more easily with the large social media firms’ platforms should accelerate.
This is all nontrivial to accomplish and there are a variety of variations on the basic concept. But the goal should be to make it as difficult as possible for governments to mandate crippled encryption and then hypocritically encourage their citizens to keep communicating on these systems as if nothing whatever had changed.
We all want to fight terrorism. But government mandates weakening encryption are fundamentally flawed, in that over time they will not be effective at preventing evildoers from using strong encryption, but do serve to put all law-abiding citizens at enormous risk.
We must resist government efforts to paint crippled encryption targets on the backs of our loved ones, our broader societies, and ourselves.
Within hours of Google announcing their new “Store Sales Measurement” system, my inbox began filling with concerned queries. I held off responding on this until I could get additional information directly from Google. With that now in hand I feel comfortable in addressing this issue.
Executive Summary: I don’t see any realistic privacy problems with this new Google system.
In a nutshell, this program — similar in some respects to a program that Facebook has been operating for some time — provides data to advertisers that helps them determine the efficacy of their ads displayed via Google when purchases are not made online.
The crux of the problem is that an advertiser can usually determine when there are clicks on ads that ultimately convert to online purchases via those ads. But if ads are clicked and then purchases are made in stores, that information is routinely lost.
Our perception of advertising has always been complex — to call it love/hate would be a gross understatement. But the reality is that all of this stuff we use online has to be paid for somehow, even though we’ve come to expect most of it to be free of direct charges.
And with the rise of ad blockers, advertisers are more concerned than ever that their ads are relevant and effective (and all else being equal, studies show that most of us prefer relevant ads to random ones).
Making this even more complicated is that the whole area of ad personalization is rife with misconceptions.
For example, the utterly false belief that Google sells the personal information of their users to advertisers continues to be widespread. But in fact, Google ad personalization takes place without providing any personal data to advertisers at all, and Google gives users essentially complete control over ad personalization (including the ability to disable it completely), via their comprehensive settings at:
Google’s new Store Sales Measurement system operates without Google obtaining individual users’ personal purchasing data. The system is double-blind and deals only with aggregated information about the value of total purchases. Google doesn’t learn who made a purchase, what was purchased, or the individual purchase prices.
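To illustrate the general idea (and only the idea: this toy sketch is not Google's actual protocol, which presumably uses far stronger cryptography than a simple salted hash), consider how two parties can match records without exchanging raw identifiers:

```python
import hashlib

def blind(card_id, salt="shared-salt"):
    # Both sides hash identifiers with a shared salt, so neither exposes raw IDs
    return hashlib.sha256((salt + card_id).encode()).hexdigest()

# Ad side: blinded IDs of users who clicked an ad (hypothetical data)
clicked = {blind(c) for c in ["card-A", "card-B"]}

# Store side: blinded IDs mapped to purchase totals (hypothetical data)
purchases = {blind("card-A"): 40.0, blind("card-C"): 25.0}

# Only the aggregate sum over the matches is reported:
# no names, no items, no individual line amounts
aggregate = sum(v for k, v in purchases.items() if k in clicked)
print(aggregate)  # → 40.0
```

Neither side ever sees the other's raw card numbers, and only the aggregate total crosses the boundary.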
Even though this system doesn’t involve sharing of individual users’ personal data, an obvious question I’ve been asked many times over the last couple of days is: “Where did I give permission for my purchase data to be involved in a program like this at all, even if it’s only in aggregated and unidentified forms?”
Frankly, that’s a question for the bank or other financial institution that issues your credit or debit card — they’re the ones that have written their own foundational privacy policies.
But my sense is that Google has bent over backwards to deploy their new system with additional layers of user privacy protections that go far beyond the typical privacy policies of those institutions themselves.
My bottom line on all this is that, yeah, I understand why many persons are feeling a bit nervous about this kind of system. But in the real world, we still need advertising to keep the Web going, and when a firm has jumped through the hoops as Google has done to increase the value of their advertising without negatively impacting user privacy in the process, I really don’t have any privacy or other associated concerns.
I only wish that all firms showed this degree of diligence.
Don’t hold your breath waiting for that.
Originally posted November 13, 2011
Around four decades ago or so, at the U.S. Defense Department funded ARPANET’s first site at UCLA — what would of course become the genesis of the global Internet — I spent a lot of time alone in the ARPANET computer room. I’d work frequently at terminals sandwiched between two large, noisy, minicomputers, a few feet from the first ARPANET router — Interface Message Processor (IMP) #1, which empowered the “blindingly fast” 56 Kb/s ARPANET backbone. Somewhere I have a photo of the famous “Robby the Robot” standing next to that nearly refrigerator-sized cabinet and its similarly-sized modem box.
I had a cubicle I shared elsewhere in the building where I also worked, but I kept serious hacker’s hours back then, preferring to work late into the night, and the isolation of the computer room was somehow enticing.
Even the muted roar of the equipment fans had its own allure, further cutting off the outside world (though likely not particularly good for one’s hearing in the long run).
Occasionally in the wee hours, I’d shut off the room’s harsh fluorescent lights for a minute or two, and watch the many blinking lights play across the equipment racks, often in synchronization with the pulsing and clicking sounds of the huge disk drives.
There was a sort of hypnotic magic in that encompassing, flickering darkness. One could sense the technological power, the future coiled up like a tight spring ready to unwind and energize many thousands of tomorrows.
But to be honest, there was little then to suggest that this stark room — in conjunction with similar rooms scattered across the country at that time — would trigger a revolution so vast and far-reaching that governments around the world, decades later, would cower in desperate efforts to leash it, to cage its power, to somehow turn back the clock to a time when communications were more firmly under the thumbs of the powers-that-be.
There were some clues. While it was intended that the ARPANET’s resource sharing capabilities would be the foundation of what we now call the “cloud,” the ARPANET was (somewhat to the consternation of various Defense Department overseers) very much a social space from the beginning.
Starting very early on, ARPANET communications began including all manner of personal discussions and interests, far beyond the narrow confines of “relevant” technical topics. A “wine tasting enthusiasts” mailing list triggered reprimands from DoD when it became publicly known thanks to a magazine article, and I won’t even delve here into the varied wonders of the “network hackers” and “mary hartman” mailing lists.
In fact, the now ubiquitous mailing list “digest” format was originally invented as a “temporary” expedient when “high volumes” of traffic (by standards of the time) threatened the orderly distribution of the science-fiction and fantasy oriented “sf-lovers” mailing list. Many other features that we take for granted today in email systems were created or enhanced largely in reaction to these sorts of early “social” communications on the very young Net.
The early ARPANET was mostly restricted to the U.S., but as international points began to come online the wonders expanded. I still remember the day I found myself in a “talk” (chat) link with a party at a military base in Norway — my first international live contact on the Net that I knew of. I remember thinking then that someday, AT&T was going to start getting concerned about all this.
The power of relatively unfiltered news was also becoming apparent back then. One of my projects involved processing newswire data (provided to me over the ARPANET on a friendly but “unofficial” basis from another site) and building applications to search that content and alert users (both textually and via a synthesized voice phone-calling system — one of my other pet projects) about items of interest.
For much of the Net’s existence, both phone companies and governments largely ignored (or at least downplayed) the ARPANET, even as it evolved toward the Internet of today.
AT&T and the other telcos had explicitly expressed disinterest early on, and even getting them to provide the necessary circuits had at times been a struggle. Governments didn’t really seem to be worried about an Internet “subculture” that was limited mostly to the military, academia, and a variety of “egghead” programmers variously in military uniforms and bell-bottoms, sporting crew cuts, scruffy long hair, or something in-between.
But with the fullness of time, the phone companies, cable companies, governments, and politicians galore came to pay the most intense attention to the Internet, as did the entertainment industry behemoths and a broad range of other “intellectual property” interests.
Their individual concerns actually vary widely at the detailed level, but in a broader context their goals are very much singular in focus.
They want to control the Internet. They want to control it utterly, completely, in every technologically possible detail (and it seems in various technically impossible ways as well).
The freedom of communications with which the Internet has empowered ordinary people — especially one-to-many communications that historically have been limited to governments and media empires themselves — is viewed as an existential threat to order, control, and profits — that is, to historical centers of power.
Outside of the “traditional” aspects of government control over their citizenries, another key element of the new attempts to control the Net is the desperate longing by some parties to turn back the technological clock to a time when music, movies, and other works could not so easily be duplicated and disseminated in both “authorized” and “unauthorized” fashions.
The effective fall of copyright in this context was preordained by human nature (we are physical animals, and the concept of non-physical “property” plays against our natures) and there’s been a relentless “march of bits” — with text, music, and movies entering the fray in turn as ever more data could be economically stored and transferred.
In their efforts to control people and protect profits, governments and associated industries (often in league with powerful Internet Service Providers — ISPs — who in some respects are admittedly caught in the middle), seem willing to impose draconian, ultimately fascist censorship, identification, and other controls on the Internet and its users, even extending into the basic hardware in our homes and offices.
I’ve invoked fascism in this analysis, and I do not do so lightly.
The attacks on fundamental freedoms to communicate that are represented by various government repression of the Internet around the world, and in the U.S. by hypocritical legislation like PROTECT IP and SOPA (E-PARASITE), are fundamentally fascist in nature, despite being wrapped in their various flags of national security, anti-piracy profit protection, motherhood, and apple pie.
Anyone or anything that is an enabler of communications not willingly conforming to this model is subject to attack by authorities at a variety of levels — with the targets ranging from individuals like you and me, to unbiased enablers of organic knowledge availability like Google.
For all the patriotic frosting, the attacks on the Internet are really attacks on what has become popularly known as the 99%, deployed by the 1% powers who are used to having their own way and claiming the largest chunks of the pie, regardless of how many ants (that’s us!) are stomped in the process.
This is not a matter of traditional political parties and alliances. In the U.S., Democratic and Republican legislators are equally culpable in these regards.
This is a matter of raw power that transcends other ideologies, of the desire of those in control to shackle the Internet to serve their bidding, while relegating free communications for everyone else to the dustbin of history.
It is very much our leaders telling us to sit down, shut up, and use the Internet only in the furtherance of their objectives — or else.
To me, these are the fundamental characteristics of a fascist world view, perhaps not in the traditional sense but clearly in the ultimate likely impacts.
The Internet is one of the most important tools ever created by mankind. It certainly ranks with the printing press, and arguably in terms of our common futures on this tiny planet perhaps even with fire.
The question is, are we ready and willing to fight for the Net as it should be in the name of civil rights and open communications? Or will we sit back compliantly, happily gobble down the occasional treats tossed in our direction, and watch as the Internet is perverted into a monstrous distortion used to control speech and people alike, rather than enabling the spread of freedom?
Back in that noisy computer room so many years ago, I couldn’t imagine that I was surrounded by machines and systems that would one day lead to such a question, and to concerns of such import.
The blossoming we’ve seen of the Internet was not necessarily easy to predict back then. But the Internet’s fascist future is much more clear, unless we fight now — right now — to turn back the gathering evil.