February 28, 2012

The "Right to Be Forgotten": A Threat We Dare Not Forget

Imagine that you're conducting a Google Search for "Beatles" -- but mysteriously, the only results you receive are links to official, authorized pages for their song downloads and related corporate promotional materials. Or perhaps you enter a search for "Rick Santorum" -- but the search results are all links to pages on his official campaign website. Maybe a simple query regarding "John F. Kennedy" provides lots of details about his official presidential papers, but not a single link to information about his reported extramarital activities that we know about today. Or do we know about them? After all, if they're not in the search results, perhaps they don't exist. Right?

Wrong! But there are governments, individuals, and organizations who would very much like to make "disappearing search results" scenarios come true on a vast scale, creating a new form of censorship that would make previous censorship efforts -- on and off the Internet -- pale by comparison.

We know that search engines like Google are required to abide by valid, legal orders concerning the removal of search results. To date, these have been relatively limited in terms of scope, though there are already many egregious examples of what most of us would probably agree are inappropriate takedown orders.

The thankfully now moribund SOPA and PIPA legislation would likely have mandated the removal of search results pointing to materials that the MPAA, RIAA, and other groups deemed illicit, and in the process would have done enormous damage across the Net.

But at least SOPA/PIPA supporters weren't trying to erase from memory even the existence of those songs and films! They obviously didn't want us to forget that The Beatles were a great group, or that "Citizen Kane" is a wonderful film.

Today though, governments around the world and allied pressure groups, especially in Europe but also elsewhere, even here in the U.S., are pushing a dangerous censorship concept much more akin to Stalin's alteration and censorship of photos than even to the controls envisioned by SOPA and PIPA.

Generally called the "right to be forgotten" (RTBF), this ultimately insidious concept embodies the view that governments, corporations, other organizations -- and individuals -- should have what amounts to absolute control over publicly available information related to themselves, especially in search engine results.

Note that our key words here are "publicly available." By and large we're not talking about private data; we're talking about information on public websites that particular individuals or other entities would prefer didn't show up in associated search results.

The theory seems simple. If you can dictate and micromanage (for example) Google Search results, you hope to prevent searchers from finding unfavorable information about you, whether true or false.

Situations where the "right to be forgotten" has already been invoked include people acquitted of crimes after highly publicized trials, doctors trying to suppress complaints from upset patients, a resort unhappy that stories of a fire disaster from decades ago are still associated with its location, and (much more sympathetically) individuals targeted by websites created to harass, libel, or otherwise injure them.

The focus on search engines in these regards is a consequence of several factors.

One issue is the reality that information on the Net, once available publicly, can be virtually impossible to actually remove, given the global availability of mirrors, archives, and other systems in various jurisdictions that can copy and preserve virtually any data.

So -- the thinking goes -- rather than trying to take action against the various websites that are the real publishers of the data in question, targeting search engines makes for more of a "one stop shopping" regime, attempting to block people from finding the data even though it's still really out there.

Another factor is that the legal bar can be high in some countries for libel and other similar suits. So if governments can be convinced to anoint essentially everyone with the right to demand censorship of any search results that they feel relate unfavorably to themselves, that much lower burden could be widely exploited.

It doesn't take more than a few minutes of thought to see the utterly disastrous ramifications of the "right to be forgotten" approach, and the cascading damage to free speech that could easily spread malignantly across the global Internet as a result.

The crux of the matter is simple enough. Even if search engine results are selectively expunged on demand, the "upsetting" material in question will still likely exist on the Internet itself, still subject to being located by other means, including via sites that merely discuss related topics, situations, companies, or individuals.

This is a crucial point.

To be "forgotten" in the usual sense of the "right to be forgotten" proponents would typically end up requiring not only that direct references to sites containing "offending" materials be expunged from search results, but also links to any sites that so much as specifically (or in many cases even generally) discuss, critique, analyze, or otherwise mention the materials in question, or that even note the removal of the more direct links from search engine results themselves!

So a site that so much as says, "A controversy arose about whether or not Dr. Foo was providing quality health care, resulting in Google being ordered to remove links to sites operated by those patients," could itself trigger an order that demanded the removal of links mentioning the controversy involving Dr. Foo -- that he would prefer be "forgotten."

Like ripples spreading from rocks tossed into a pond, the range of directly and indirectly related sites whose links could be ordered censored from search engine results will tend to spread, multiply, and interact in complex ways, pulling ever more websites into "right to be forgotten" censorship demands.

And for all that damaging censorship and invasive restrictions on speech, the primary materials in question will likely still remain easily available. In fact, attempts to remove and censor information on the Net can paradoxically trigger even more attention -- the exact opposite of what had been intended -- through the so-called Streisand effect.

I do not accept as credible the claims of some "right to be forgotten" proponents that just getting the top, main related links out of Google Search results will be enough to satisfy them. It is inevitable that as the reality of the complex network graph associated with such information becomes obvious, calls for ever broader censorship orders, targeting results and links increasingly "distant" from the core sites, will be forthcoming in massive numbers, in a gigantic, nightmarish version of search results "Whac-A-Mole."

The rational approach to the kinds of disputes invoked by "right to be forgotten" advocates should not be censorship, and should not be attempts to delete information and references to that information.

Rather, we should be concentrating on providing more information and more context in cases of dispute, not less.

While the "right to be forgotten" may at first glance seem to have laudable goals, it is in truth an impractical and ultimately abusive concept that cannot realistically accomplish its stated goals, but that would inevitably do enormous damage to the valid speech rights of an ever widening sphere of organizations and individuals around the world.

It is a dangerous chimera that would create vast new problems, not solve existing ones.

The right to be forgotten is a threat that we dare not forget, but that we should most soundly and totally reject.

--Lauren--

Posted by Lauren at 05:26 PM | Permalink


February 18, 2012

Google, Safari, and a Clamor of Cookie Confusion

Update (February 20, 2012): Microsoft newly attacks Google cookie policies, but history reveals otherwise!


A technological smoking gun is indeed present in this case. But it's not the gun implied by confused headlines and the pronouncements of some commentators who appear to be out of their technical depth in this situation.

Thinking about it all these years later, I can't remember when I first ran across the term "cookie" in a computing sense. And offhand, the origins of this term as an "intermediate storage" element are somewhat hazy.

I do vividly recall that my first active entanglement with these babies was in the context of so-called "Magic Cookies" used by many early CRT data display terminals as a memory minimization technique -- to provide for character enhancement functions like blink, underline, bold, and so on. We Computer Science types have long been enamored of "magical" terminology - Magic Cookies, Magic Packets, Magic Words (e.g. "XYZZY" - "PLUGH"), and so on.

Even the "magic cookies" of CRTs were much maligned. Of course this wasn't really the cookies' fault. Memory was expensive and often minimal in these displays, and magic cookies actually used up one (or even more) spaces on the screen, making really clean layouts impossible. Display terminals that featured magic cookies were considered "terminally" brain dead by those of us in the know, and were typically assigned to the lowest ranking faculty, staff, and students. Some colorful disputes ensued.

Flash forward to the Web. The essentially "stateless" nature of basic HTTP transactions needed a mechanism to provide session-based coordination, and browser cookies stored on users' local computers quickly became the mechanism of choice to hold the intermediate data for this purpose.
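
As a minimal illustration of that mechanism -- a hypothetical toy server, not anyone's production code -- here's how a Set-Cookie header can carry session state across otherwise independent HTTP requests:

- - -

from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer
from uuid import uuid4

class SessionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse whatever cookies the browser sent back with this request.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        self.send_response(200)
        if "session" in cookies:
            # Returning request: the cookie supplies the continuity
            # ("state") that the HTTP protocol itself does not.
            body = "Welcome back, session " + cookies["session"].value
        else:
            # First request: hand the browser a session identifier
            # that it will return on all subsequent requests.
            self.send_header("Set-Cookie", "session=" + uuid4().hex)
            body = "New session established"
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SessionHandler).serve_forever()

- - -

Two visits from the same browser look related to the server only because of that cookie; block it, and every request arrives as a stranger.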

As in the case of those magic cookies long ago, there is nothing inherently good or evil about Web cookies. They are simply local containers of data that can (subject to various rules) be written and read by Web sites.

But in the real world of the modern Web, the proper implementation of those "rules" by browsers and Web sites alike can become fiendishly complex.

OK, back to the current dramatic brouhaha over Google, Safari, cookies, and privacy. There's no way to deal with this accurately without getting somewhat technical, so please bear with me if you will.

Since the handling of browser cookies has long been complicated and controversial, all manner of methodologies to deal with them have emerged over the years.

At one time, I actively micromanaged virtually all of my browser cookies. But as Web systems became more intricate, such a detailed hands-on approach became decreasingly practical (these days I use browser extensions to maintain relatively coarse control of cookies at the site level, but I would not recommend even this to most users).

One of the most common problems that Web users get themselves into is following simplistic advice about "blocking" cookies, and then becoming confused when they can no longer log into desired sites because the necessary session state cookies cannot be processed properly.

The proper handling of so-called "third-party cookies" by browsers and sites can be particularly challenging to implement. Such cookies are associated with domains other than the one with which the user is primarily communicating at that moment.
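
To make the distinction concrete, here's a tiny sketch (the domains are invented for illustration) of how cookies are scoped to domains, so that a widget embedded in a page can carry cookies for a domain other than the one in the address bar:

- - -

from http.cookies import SimpleCookie

# The user is visiting shop.example; its response sets a
# first-party cookie scoped to that domain.
first_party = SimpleCookie()
first_party["session"] = "abc123"
first_party["session"]["domain"] = "shop.example"

# The page also embeds a sharing widget served from widget.example.
# The widget's response sets its own cookie, scoped to
# widget.example -- a "third party" relative to the page itself.
third_party = SimpleCookie()
third_party["widget_state"] = "xyz789"
third_party["widget_state"]["domain"] = "widget.example"

print(first_party.output())   # Set-Cookie: session=abc123; Domain=shop.example
print(third_party.output())   # Set-Cookie: widget_state=xyz789; Domain=widget.example

- - -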

Traditionally, browsers have accepted the reading and writing of third-party cookies by default, in some cases providing user controls for more fine-grained management of these cookies related to particular sites.

Third-party cookies have become controversial since they are sometimes viewed as being associated with "secretive" tracking practices. But there is nothing inherently wrong with third-party cookies. Like all browser cookies, it's what Web sites specifically do with them that matters, and especially with the rise of social sharing applications, third-party cookies can play important and utterly benign roles.

Now we reach that smoking gun I mentioned earlier.

Some time back, Safari's browser designers decided to diverge from common Web practice and block all third-party browser cookies by default.

The underlying rationales for this decision are not entirely clear and are a matter of some controversy. Even within the Safari developer groups themselves it's clear there was conflict about whether or not this actually was a useful, truly privacy-positive move.

But one thing quickly became clear. The default blocking would have the effect of breaking important functionalities on which many Web users depended.

Now, please permit me to introduce you to WebKit Bugzilla Bug 35824: Relax 3rd party cookie policy in certain cases, dating from March 2010.

WebKit is the common core implementation code used by Safari and various other browsers. Bug 35824 is at the heart of the entire Google/Safari cookie controversy.

Contrary to the assertions of some observers, Bug 35824 was not a leak involving third-party cookies being accepted inappropriately. It was not a loophole that needed to be closed.

In fact, it was exactly the opposite! Bug 35824 represented the realization that the existing WebKit implementation for third-party cookies, in conjunction with Safari's change to "no third-party cookies accepted by default" was too limiting, too closed, and needed to be loosened to restore key user functionalities.

The resolution of Bug 35824 involved doing just that, and the discussions associated with that Bug make for fascinating (and delightfully geeky) reading.
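
In rough terms, the relaxed rule that emerged can be thought of along the following lines. This is a sketch of my reading of the fix, in illustrative Python rather than WebKit's actual code; the function and names are mine, not WebKit's:

- - -

def may_set_cookie(request_domain, first_party_domain, cookie_jar):
    """Sketch of a relaxed third-party cookie policy.

    Strict blocking permits only the first-party domain -- the site
    shown in the address bar -- to set cookies. The relaxation, in
    the spirit of Bug 35824, additionally permits a third-party
    domain the user has already interacted with (one that already
    has cookies, e.g. from logging in there directly) to set them.
    """
    if request_domain == first_party_domain:
        return True  # First-party requests are always permitted.
    # Third-party requests succeed only for domains with existing cookies.
    return bool(cookie_jar.get(request_domain))

# A user who logged in at social.example earlier lets its widget,
# embedded on news.example, keep its session working there:
jar = {"social.example": {"session": "abc123"}}
print(may_set_cookie("social.example", "news.example", jar))   # True
print(may_set_cookie("tracker.example", "news.example", jar))  # False

- - -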

One particularly insightful quote from the associated dialogue:

- - -

"Alright, I'm regretting stepping into the morass that is third-party cookie blocking. The overarching problem is that third-party cookie blocking can't actually provide decent privacy benefits without breaking sites. We can machinate around the privacy / compatibility trade-off forever. Compatibility always has a stronger pull because you can see that XYZ works after you bolster compatibility whereas you don't see the privacy costs because they're harder to measure."

- - -

At the time, those discussions were focused mostly on problems that sites such as Facebook and Microsoft would have with the new Safari policy, before Bug 35824 was resolved. Google+ would not go public for more than another year.

But when Google+ did appear, Google quite appropriately used the mechanism provided by the 35824 bug fix for key functionality related to Google+ on Safari browsers, in very much the same way it was intended to be used by Microsoft, Facebook, and other sites.

It's at this juncture that the issue of unintended collateral effects comes into play.

As noted above, cookie handling can be very complex. Nowadays, traditional cookies have been joined by other (generally less well known) Web transactional local storage mechanisms, further complicating the picture.

The necessary loosening of Safari's default third-party cookie controls associated with the 35824 bug fix further complicated the cookie handling process. This ultimately led to some cookies associated with Google's ad delivery network being mistakenly placed on some Safari users' browsers, in conflict with what those users might otherwise have expected from Safari's "no third-party cookies" default (keeping in mind that few Safari users would likely have had any inkling that there was already an exception to that seemingly declarative setting, via the 35824 fix).

The Google ad network cookies in question should not have been placed through the Safari browsers of users with that "third-party cookie blocking" setting. Those cookies were in error, and Google is in the process of removing them.

But those cookies did not contain personal information, nobody was harmed, nothing was damaged, and there is no indication that this event was purposeful subterfuge of any kind by Google.

There is an important lesson to be drawn from all this.

My gut feeling is that we've passed beyond the era where it made sense to concentrate on Internet privacy controls and issues mainly in terms of specific technologies as we've done in the past.

As noted above, cookies are neither good nor bad, neither intrinsically righteous nor evil. Cookies, like the other local storage mechanisms that have now been implemented, are merely tools. And as with other tools, how they are used is under the control of the entities who deploy these complex functionalities.

Ultimately, we expect Web sites to just work. It is unrealistic in the extreme to expect most users to understand and manage the underlying cookie and related systems of their browsers in detail. As new methodologies come online, this will only become ever more true.

What we really need to be concentrating on are the fundamental issues of trust and transparency.

If we as users feel confident that individual firms are doing their best to be transparent about their policies and are handling our data in responsible manners, then putting our trust (and data) in the hands of those firms is a solid bet.

Does this mean that mistakes won't be made and errors won't ever occur with the firms to whom we delegate these responsibilities?

Of course not. We're all merely humans, and true perfection is not within our current realm, nor is it likely ever to be.

But to assume that every error involving extraordinarily complicated software systems is evidence of evil intent is not only inaccurate and inappropriate, but to my way of thinking essentially perverse.

Unfortunately, the political environment in which we live today is replete with character assassination and toxic "big lie" strategies. It is perhaps unavoidable that such perverted approaches would seep into our considerations of highly technical topics as well. We must resist this.

When there are technical challenges we should meet them, when there are technical problems we should solve them. The intersection of technology with social policies is deep and becoming ever more entrenched with every passing day.

The accusatory rhetoric that has wrecked much of our political system cannot be allowed to substitute for reasoned and logical analysis of technical concerns, or the risks to society will be catastrophic.

Whether we're talking about browser cookies or nuclear weapons, the same underlying truth applies.

That's what I believe, anyway.

--Lauren--


Posted by Lauren at 12:54 PM | Permalink


February 09, 2012

EPIC's "Google Privacy Lawsuit" Against FTC Doesn't Hold Water

In the wake of Google's announced privacy policy changes and consolidations, which I discussed in considerable detail within Google's Privacy Policy Changes: Revolution? Evolution? Or Confusion?, now comes word that EPIC (the Electronic Privacy Information Center) has filed suit against the FTC (Federal Trade Commission), asserting that the FTC is not enforcing the terms of its 2011 consent decree with Google.

EPIC has done a lot of great work in the past, but of late seems to find claimed fault with virtually everything Google does.

What's really a head-scratcher in the case of this new suit is that you don't need to be a lawyer to question its validity; you need only read over the relevant documents for yourself.

Nobody can reasonably claim that Google hasn't given plenty of notice about these changes. Between Google's associated blog postings, website notices, and email notifications on this issue, there arguably hasn't been so much global attention to an Internet-oriented policy topic for quite a long while.

The key focus of EPIC's lawsuit appears to be Google's plans to consolidate user data across various services associated with individual Google accounts.

But as I've previously noted, the consolidation of Google privacy policies can only reasonably be viewed as a positive for users, and an individual account is the logical unit for data consolidation as well, enhancing user services capabilities in significant ways.

Given that Google is not increasing the amount of data being collected or sharing personally identifiable user information with third parties, and since users can easily create multiple free Google accounts to separate their services usage if they really desire such compartmentalization, it's difficult to see what all the fuss is actually about.

In particular, the FTC consent decree with Google (relating to the launch of Google Buzz, a controversy that I've always felt was significantly overblown) includes this language:

“Third party” shall mean any individual or entity other than: (1) respondent; (2) a service provider of respondent that: (i) uses or receives covered information collected by or on behalf of respondent for and at the direction of the respondent and no other individual or entity, (ii) does not disclose the data, or any individually identifiable information derived from such data, to any individual or entity other than respondent, and (iii) does not use the data for any other purpose; or (3) any entity that uses covered information only as reasonably necessary: (i) to comply with applicable law, regulation, or legal process, (ii) to enforce respondent’s terms of use, or (iii) to detect, prevent, or mitigate fraud or security vulnerabilities.

EPIC appears to be claiming that the new Google privacy policy changes will somehow violate the third-party aspects of the consent decree.

This appears to be utterly erroneous. If I choose to use multiple Google services under a single Google account, I'm still just one party!

There's no "third party" involved if my Google searches are used to help tailor the ads I'm shown on YouTube, as well as on Google Search itself. It's all one account. It's me, myself, and I! Look in the mirror if you dare -- it's still the same person.

Google's privacy policy changes don't share my personal account data with other parties. They don't even share my data between separate Google accounts I can choose to use for different Google services if I wish.

You can reread the consent decree until you go cross-eyed, but EPIC's complaint still dissolves into the same sort of phantasm as a dream that fades from memory within moments of waking -- there's no real substance there at all.

I won't speculate about the motives behind the various parties spewing hateful hyperbole about all this, beyond saying that it's obvious that Google's competitors would love to see Google prevented from engaging in innovation whenever possible.

But from my standpoint, it's the users themselves who matter most. If Google were commingling personal data between separate Google user accounts, or providing personal data to actual third parties, there could indeed be cause for concern.

However, that's not what's happening, and users still have full control over how they use Google services -- with single accounts, multiple accounts, or for some services with no accounts at all.

And once again, the consolidation of more than 60 different privacy policies into just a few is a definite plus for users.

It is ultimately detrimental to the cause of genuine privacy concerns to view every change from the status quo as automatically negative. Such an approach tends to perpetuate the same sort of toxic environment that has enveloped so much of our public discourse, to no good end.

If nothing else, it would be extremely useful if we engaged in dialogues on these issues based on a foundation of facts, rather than emotional mischaracterizations.

Something to ponder perhaps, both regarding Internet issues, and in relation to the other aspects of our lives as well.

--Lauren--

Posted by Lauren at 12:20 PM | Permalink