February 27, 2010

Microsoft Admits Trying to Influence EU Regulators in Google Anti-Trust Reviews -- Says "So What?"

Greetings. Microsoft has now admitted trying to influence EU regulators involved in anti-trust reviews of Google, and asserts that there's nothing wrong with their doing so.

But there are numerous flaws in attempting to equate Microsoft's anti-trust woes with Google's current situation.

Chief among them is that Microsoft used repressive and anticompetitive tactics in its march to PC and browser domination.

Google's rise in market share has been tied to a basic concept -- they've simply provided better products that more people want to use. Being big or even dominant when you've grown by playing fair and by the rules isn't a crime. It's when anticompetitive behavior is involved that the alarms go off. Microsoft attempted to effectively lock competitors out of the market through draconian licensing agreements and other means. On the other hand, Google's competitors have always been -- and still are -- only a mouse click away for virtually any user anywhere in the world.

And despite some critics' claims to the contrary, it's clear that Google goes to great lengths to try to keep their organic search results as algorithmically "clean" and undistorted as possible. Is this process perfect? Of course not -- there's a continual stream of tweaks to the search ranking algorithms behind the scenes, but a laudable avoidance of modifying specific, individual search result rankings. As far as I can tell, the claims of unfair bias in Google natural search results are nothing but sour grapes.

--Lauren--

Update (February 28, 2010): This Wired article from a bit over a year ago -- "The Plot to Kill Google" -- is excellent additional reading.

Posted by Lauren at 12:04 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 20, 2010

Google Buzz Experiments

Greetings. Responses to The Google Buzz Launch -- and the Limits of Downing Dogfood have been numerous and varied. Obviously, these controversies aren't going to vanish anytime soon.

Over on an (up to now "invisible") Google account, I've been testing a variety of Buzz and associated (e.g. Google Reader, Twitter) interactions. I've now opened that account to public participation and related discussions.

The account can be accessed via this Google Profile and can also be "followed" (by logged-in users) via that same page.

--Lauren--

Posted by Lauren at 07:21 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 15, 2010

The Google Buzz Launch -- and the Limits of Downing Dogfood

Greetings. There's an old Hollywood adage suggesting that most of the time, "any publicity is good publicity." When it comes to the launch of Google Buzz, there's definitely some truth to that saying -- the widely discussed privacy issues associated with the launch have yielded the product a significant global awareness far outside the world of current Gmail users. And reports are that usage of Buzz is (sorry, I can't resist) buzzin' along at a very significant clip.

Still, the very public privacy controversies regarding Buzz over the week since its debut (hard to believe it's only been a week) are both fascinating and instructive.

In "Google Buzz" -- and the Risks of "Automatic Friends" I noted my own concerns about specific features of the original Buzz start-up experience defaults, and expressed the hope that Google would reconsider those defaults.

I wrote that piece on launch day after my own initial experiments with the product. Between then and now Google has announced two sets of significant changes to Buzz that do a good job of addressing the issues that I noted.

But as seems to be the case with anything involving Google these days, one comments publicly at one's own risk. After I was widely quoted as praising the first round of Google's Buzz changes and noting that "the thing hasn't been out a week; it's going to take some period to hash out," the volume of vitriolic "hate" e-mail that I received on the topic was both large and, in some cases, rather bizarre.

These missives fell into several categories. The "Google Conspiracy" set are always fun reading. In the case of Buzz, the theory seems to be that the initial default settings were part of a "secret plot" by Google to abuse users' e-mail contact lists and associated data. A glaring problem with that supposition is that there was nothing at all secret about the default followers policy that Buzz established. While many users may not have initially understood the full implications of the defaults, or alternatively (as in my case) may have felt that the defaults had some inherently risky characteristics or were problematic in other ways, the settings certainly weren't secret. It was clear from the outset what the model was for the "initial populating" of Buzz followers.

Another group of these correspondents complained that I shouldn't have praised Google for the changes they were making to Buzz, even though the changes were pretty much exactly what I had suggested would be useful. The implication of such "damned if you do and damned if you don't" logic is that unless a product is 100% correct right out of the starting gate, it deserves to be condemned to an inner circle of hell forever.

Frankly, I look at this from pretty much the opposite point of view. If you always play it totally safe in product design, for fear of making any mistakes, true innovation is slowed or in many cases even impossible. That Google erred in their initial design of the Buzz defaults is significant, but far more important to me is the extreme rapidity with which they publicly acknowledged these problems and have moved to fix them -- and word is that even more changes addressing various Buzz issues will be forthcoming very shortly.

But caustic communications within my inbox aside, one might still reasonably ask how Google apparently so significantly misread the likely reaction to the original Buzz defaults in the first place.

I don't have any inside information on this score, so like anyone else on the outside of Google I can only speculate. But it seems certain that Buzz was extensively tested within Google itself for a significant period before it was released to the public a week ago.

This sort of very wide (but still internal) testing of a product through actual use is commonly called "dogfooding" -- that is, "eating one's own dog food."

It's an excellent way to discover and hammer out technical deficiencies in a product, but can have significant limitations if the reaction of users within the "dogfooding" community leads to a less than fully accurate extrapolation to how the user population outside the confines of the firm itself will react.

The Google corporate culture is remarkably open on the inside, with a tremendous amount of information sharing among individuals and projects. It's easy to imagine how many enthusiastic, pre-public-launch Google users of Buzz might have inadvertently had something of a blind spot to the more "compartmented" nature of e-mail and "social messaging" communications that is much more the norm in the "outside world."

This highlights a key limitation of dogfooding, or even of testing involving non-corporate early adopters. If sample sets are not sufficiently large and especially broad in terms of different sorts of users in different kinds of situations, it's possible for internal enthusiasm to lead any engineering team to assumptions that may not necessarily be optimal for a released product facing a global user base.

Whether or not my speculation above resembles what actually occurred internally at Google regarding Buzz, it remains demonstrably true that the more a product's design anticipates and encompasses the widest practicable range of user concerns and sensibilities, the lower the probability of launch missteps.

But even when such missteps do occur, the ability to react quickly, openly, decisively, and effectively to address resulting concerns is paramount -- and Google's responses to the Buzz privacy controversies have been exemplary in exactly this regard.

--Lauren--

Posted by Lauren at 11:37 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 14, 2010

Spying on User Web Browsing Histories for Fun and Profit!

Greetings. A bit over a year ago, I reported here about a commercial firm using JavaScript tricks to pry into the site browsing history of unsuspecting Web users, and I discussed the serious negative implications of such spying.

Now comes a handy "do it yourself" guide detailing the kinds of obnoxious techniques involved, under the name "Sniff browser history for improved user experience" -- a quintessential example of how to portray (that is, spin) an obvious privacy invasion as if it were a user-friendly value proposition.

It's not terribly surprising that the author of the piece devotes only a couple of words to even the possibility that such techniques could be used for "evil" purposes.

But what's perhaps even more nauseating are the pro-privacy-invasion fan-boy comments on the article, mostly drooling over the possibilities.

While the browser history voyeurism technique described is not without some inherent limitations, it is more than powerful enough to be abhorrent to almost anyone with even a modicum of ethical sensibilities.
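
For readers who haven't encountered the underlying mechanics, here's a minimal sketch (in TypeScript) of the general ":visited" style-probing approach that such history sniffers employ. The probe list and marker color below are purely my own illustrative assumptions:

    // Inject a style rule so visited links render in a known marker color.
    const style = document.createElement("style");
    style.textContent = "a:visited { color: rgb(255, 0, 0) !important; }";
    document.head.appendChild(style);

    // URLs that the page operator wishes to test against your history.
    const probeUrls: string[] = [
      "http://some-bank.example/",
      "http://some-medical-site.example/",
    ];

    // Create a link for each probe URL and check whether the browser
    // applied the :visited marker color to it.
    function sniffHistory(urls: string[]): string[] {
      const visited: string[] = [];
      for (const url of urls) {
        const link = document.createElement("a");
        link.href = url;
        document.body.appendChild(link);
        if (window.getComputedStyle(link).color === "rgb(255, 0, 0)") {
          visited.push(url);
        }
        document.body.removeChild(link);
      }
      return visited;
    }

A page can silently run thousands of such probes in a fraction of a second, with no user interaction and no visible indication whatsoever.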

Turning off JavaScript is simply not practical for most Web users these days, given the major dependence on JavaScript and AJAX technologies at the heart of so many major (and less than major) Web sites.

But I can't find any ethical loophole for the use of such browser history surveillance techniques in the absence of affirmative and fully-informed opt-in permission being given by users for such intrusions.

I have no gripes with systems that collect browsing history information when this behavior is appropriately disclosed and explicitly agreed to by users in a voluntary manner (e.g., as is the case with various special-purpose toolbar products).

However, when browser history collection isn't disclosed and permission for that collection is not voluntarily granted, "sniffing" of user browser histories is the textbook definition of spying -- plain and simple -- regardless of whether or not the Web site operator claims that they're using the information collected only for "good" purposes.

For some Web users, the information that could be revealed by the application of such techniques could have health, safety, and even perhaps national security implications (think about the browser histories of law enforcement personnel, for example).

I'm not a lawyer, but I would assert that such spying should be illegal -- if it isn't already a civil or criminal infraction in various locales.

At the very least, I'd welcome the readership's suggestions as to legal processes (notifications?) and/or technical methods to fight back against anyone attempting to deploy these browser history spying abominations. But please keep in mind the limitations of script blocking plugins (that I described in my earlier blog posting), and the impracticality of turning off all JavaScript for most users.
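
One partial countermeasure does come to mind for Firefox users, offered only as an illustrative sketch rather than a complete defense: I believe current Firefox builds expose a hidden preference that disables ":visited" link styling entirely, which removes the telltale style difference that these sniffing scripts depend upon -- at the cost of losing visited-link coloring everywhere:

    about:config  ->  layout.css.visited_links_enabled = false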

Any ideas?

--Lauren--

Update: I should note that the "Browser History Sniffing" article referred to above was originally published two years ago, but has been making the rounds again, including on current syndication feeds. In any case, the issues discussed above are as valid now as they were a year or two back. Most people need JavaScript and aren't going to hassle with JavaScript or CSS blocking plugins. Rapid deletion of browsing histories renders them useless for most users -- I know that I don't want to give up the value I get from histories maintained over significant periods of time. But ultimately, the big issue is why people should need to jump through hoops to protect themselves from invasive practices that should not be acceptable or possible in the first place.

Posted by Lauren at 06:25 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 11, 2010

Who Owns Your PC? New Anti-Piracy Windows 7 Update "Phones Home" to Microsoft Every 90 Days

Greetings. Sometimes a seemingly small software update can usher in a whole new world. When Microsoft shortly pushes out a Windows 7 update with the reportedly innocuous title "Update for Microsoft Windows (KB971033)" -- it will be taking your Windows 7 system where it has never been before.

And it may not be a place where you want to go.

Imagine that you're sitting quietly in your living-room at your PC, perhaps watching YouTube. Suddenly, a pair of big, burly guys barge into your house and demand that you let them check your computer to make sure that it's "genuine" and not running pirated software. You protest that you bought it fair and square, but they're insistent -- so you give in and let them proceed.

Even though you insist that you bought your laptop from the retail computer store down the street many months ago, and didn't install any pirated software, the visitors declare that your computer "isn't genuine" according to their latest pirated systems lists, and they say that "while we'll let you keep using it, we've modified your system so that it will constantly nag in your face until you pay up for a legit system!" And they head out the door to drop in on the eBay-loving grandmother next door.

You then notice that the wallpaper on your PC has turned black, and these strange notifications keep popping up urging you to "come clean."

Ridiculous? Well, uh, actually no.

Microsoft most definitely has a valid interest in fighting the piracy of their products. It's a serious problem, with negative ramifications for Microsoft and its users.

But in my opinion, Microsoft is about to embark on a dramatic escalation of anti-piracy efforts that many consumers are likely to consider to be a serious and unwanted intrusion at the very least.

It's important for you to understand what Microsoft is going to do, what your options are, and why I am very concerned about their plans.

Back in June 2006, in a series of postings, I revealed how Microsoft was performing unannounced "phone home" operations over the Internet as part of their Windows Genuine Advantage authentication system for Windows XP. (The last in that series of postings describes Microsoft's reaction to the resulting controversy.) The surrounding circumstances even spawned a lawsuit against Microsoft, which coincidentally was recently dismissed by a judge.

But Microsoft has continued to push the anti-piracy envelope, now under the name Windows Activation Technologies (WAT).

This time around, to the company's credit (and many thanks to them for this!) Microsoft reached out to me starting several months ago for briefings and discussion about their plans for a major new WAT thrust -- on the condition, to which I agreed, that I not discuss those plans publicly until now.

The release of Windows 7 "Update for Microsoft Windows (KB971033)" will change the current activation and anti-piracy behavior of Windows 7 by triggering automatic "phone home" operations over the Internet to Microsoft servers, typically for now at intervals of around 90 days.

The purpose? To verify that you're not running a pirated copy of Windows, and to take various actions changing the behavior of your PC if the WAT system believes that you are not now properly authenticated and "genuine" -- even if up to that point in time it had been declaring you to be A-OK.

Note that I'm not talking about the one-time activation that you (or your PC manufacturer) performs on new Windows systems to authenticate them to Microsoft initially. I'm talking about a procedure that would "check in" your system with Microsoft at quarterly intervals, and that could take actions to significantly change your "user experience" whenever the authentication regime declares you to have fallen from grace.

These automatic queries will repeatedly -- apparently for as long as Windows is installed -- validate your Windows 7 system against Microsoft's latest database of pirated system signatures (currently including more than 70 activation exploits known to Microsoft).

If your system matches -- again even if up to that time (which could be months or even years since you obtained the system) it had been declared to be genuine -- then your system will be "downgraded" to "non-genuine" status until you take steps to obtain what Microsoft considers to be an authentic, validated, Windows 7 license. In some cases you might be able to get this for free if you can convince Microsoft that you were the victim of a scam -- but you'll have to show them proof. Otherwise, you'll need to pull out your wallet.

I'm told that the KB971033 update is scheduled to deploy to the manual downloading "Genuine Microsoft Software" site on February 16, and start pushing out automatically through the Windows Update environment on February 23.

Blog Update (5:05 PM): This blog entry originally listed the KB number without the leading 9, since that was the way it was provided to me verbally and confirmed by Microsoft. They have now notified me that "Update for Microsoft Windows (KB971033)" is the actual designation, so I have made the appropriate change to the KB number throughout this posting.

The update will reportedly be tagged simply as an "Important" update. This means that if you use the Windows Update system, the update will be installed to your Windows 7 PC based on whatever settings you currently have engaged for that level of update -- it will not otherwise ask for specific permission to proceed with installation.

If your Windows Update settings are such that you manually install updates, you can choose to decline this particular update and you can also uninstall it later after installation -- without any negative effects per se. But don't assume that this will always "turn back the clock" in terms of the update's effects. More on this below.

Also, if the 90-day interval WAT piracy checking system "calls" are unable to connect to the Microsoft servers (or even if they are manually blocked from connecting, e.g. by firewall policies) there will reportedly be no ill effects.

However -- and this is very important -- if the update is installed and the authentication system then (after connecting with the associated Microsoft authentication servers at any point) decides that your system is not genuine, the "downgrading" that occurs will not be reversible by uninstalling the update afterward.

The WAT authentication system also includes various other features, such as the ability to automatically replace authentication/license related code on PCs if it decides that the official code has been tampered with (Microsoft rather euphemistically calls this procedure "self heal").

I've mentioned that Windows 7 systems will be "downgraded" to "non-genuine" status if they're flagged as suspected pirates. What does this mean?

Essentially, they'll behave the same way they would if they had failed to be authenticated and activated initially within the grace period after purchase.

Downgraded systems will still function much as usual fundamentally, but there will be some very significant (and very annoying) changes if your system has been designated non-genuine.

The background wallpaper will change to black. You can set it back to whatever you want, but once an hour or so it will reset again to black.

Various "nag" notifications will appear at intervals to "remind" you that your system has been tagged as a likely pirate and offering you the opportunity to "come clean" -- becoming authorized and legitimate by buying a new Windows 7 license. Some of these nags will be windows that appear at boot or login time, others will appear frequently (perhaps every 20 minutes or so) as main screen windows and taskbar popup notices.

Systems that are considered to be non-genuine also have only limited access to other Microsoft updates of any kind (e.g., access to high priority security updates, but not anything else, may be permitted).

And of course, under the new WAT regime you run the risk of being downgraded into this position at any time during the life of your PC.

In response to my specific queries about how downgraded systems (particularly unattended systems) would behave vis-a-vis existing application environments, Microsoft has said that they have taken considerable effort to avoid having the downgrade "nag system" interfere with the actual running of other applications, including the stealing of window focus. It remains to be seen how well this aspect turns out in practice.

All of this brings us to a very basic question. Why would any PC owner -- honest or pirate -- voluntarily participate in such a continuing "phone home" authentication regime?

Obviously, knowledgeable pirates will avoid the whole thing like the plague any way that they can.

Microsoft's view, as explained to me and as primarily emphasized in their blog posting that will appear today announcing the WAT changes, is that honest Windows 7 users will want to know if their systems are running unauthentic copies of the operating system, since (as Microsoft asserts, and as is indeed the case) those systems have a significant likelihood of also containing dangerous viruses or other potentially damaging illicit software that "ride" onto the PC along with the unauthentic copy of the OS.

But even if we assume that there's a noteworthy risk of infections on systems running pirated copies of Windows 7, the approach that Microsoft is now taking doesn't seem to make sense even for honest consumers.

If Microsoft's main concern were really just notifying users about "contaminated" systems, they could do so without triggering the non-genuine downgrading process and demands that the user purchase a new license (demands that will be extremely confusing to many users).

As I originally discussed in How Innocents Can Be Penalized by Windows Genuine Advantage, it's far more common than many people realize for completely innocent users to be running perfectly usable -- but not formally authenticated -- copies of Windows Operating Systems through no fault whatever of their own.

OK, let's review where we stand.

The new Microsoft WAT regime relies upon a series of autonomous "cradle to grave" authentication verification connections to a central and ever-expanding Microsoft piracy signature database, even in the absence of major hardware changes or other significant configuration alterations that might otherwise cause the OS or local applications to query the user for explicit permission to reauthenticate.

Microsoft will trigger forced downgrading to non-genuine status if they believe a Windows 7 system is potentially pirated, based on "phone home" checks that will occur at (for now) 90-day intervals during the entire life of Windows 7 on a given PC -- even months or years after purchase.

That Microsoft has serious piracy problems, and has "limited" the PC downgrading process to black wallpaper, repeating nagging at users, and extremely constrained update access isn't the key point. Nor is the ostensibly "voluntary" nature of the update triggering these capabilities (I say ostensibly since almost certainly most users will have the update installed automatically and won't even realize what it means at the time).

The new Microsoft WAT update and its associated actions represent unacceptable intrusions into the usability of consumer products potentially long after the products have been purchased and have been previously declared to be genuine.

Microsoft is not entirely alone in such moves. For example, a major PC game manufacturer has apparently announced that their games will soon no longer run at all if you don't have an Internet connection to allow them to authenticate at each run.

Still, games and other applications are one thing, operating systems are something else altogether. And regardless of whether we're talking about games or Windows 7, it's unacceptable for consumers to be permanently shackled to manufacturers via lifetime authentication regimes -- particularly ones that can easily impact innocent parties -- that can degrade their ability to use the products that they've purchased in many cases months or even years earlier.

Fundamentally, for Microsoft to assert that they have the right to treat ordinary PC-using consumers in this manner -- declaring their systems to be non-genuine and downgrading them at any time -- is rather staggering.

Make no mistake about it, fighting software piracy is indeed important, but Microsoft seems to have lost touch with a vast swath of their loyal and honest users if the firm actually believes their new WAT anti-piracy monitoring system is an acceptable policy model.

My recommendations to persons who currently run or plan to run Windows 7 are simplicity itself.

I recommend that you strongly consider rejecting the manual installation of the Windows Activation Technologies update KB971033, and that you not permit Windows Update to install it (this will require that your PC not be configured in automatic update installation mode, which has other ramifications -- so you may wish to consult a knowledgeable associate if you're not familiar with Windows Update configuration issues).

And if at some point in the future you find that the update has been installed and your PC is still running normally, remove the update as soon as possible.
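
For those comfortable at a command prompt, here's one way -- offered as a sketch only, to be verified against your own configuration -- to check whether the update is present and, if so, to remove it. These commands assume the final KB designation that Microsoft confirmed to me, and should be run from an elevated command prompt:

    rem Check whether KB971033 is currently installed:
    wmic qfe where "HotFixID='KB971033'" get HotFixID,InstalledOn

    rem If it is listed, uninstall it:
    wusa /uninstall /kb:971033

But keep the caveat above firmly in mind: if your system has already been flagged as non-genuine, removing the update will not reverse that downgrade.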

While I certainly appreciate Microsoft's piracy problems, and the negative impact that these have both on the company and consumers, I believe that the approach represented by this kind of escalation on the part of Microsoft and others -- into what basically amounts to a perpetual anti-piracy surveillance regime embedded within already purchased consumer equipment -- is entirely unacceptable.

--Lauren--

Posted by Lauren at 09:01 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 09, 2010

"Google Buzz" -- and the Risks of "Automatic Friends"

Update (2/14/10): Google has already announced two sets of significant changes to Google Buzz in response to concerns such as those that I expressed in the posting below. I'm very pleased by the extremely rapid (all within less than a week) moves by Google to address these issues in a positive and direct manner. Since many readers have been asking me about this topic, I may have more to say on the subject in the near future.

Update (2/15/10): The Google Buzz Launch -- and the Limits of Downing Dogfood


Greetings. As you may have heard, Google has finally rolled out their integrated approach to social networking. Called Google Buzz (oddly, there's already a different sort of Yahoo! Buzz), this sort of service from Google was inevitable given the rise in social networking.

Whether or not the goal of Google Buzz (let's call it "Gbuzz" for now) is really to be a Twitter or Facebook "killer" as some observers have suggested, Google is doing a couple of key things very differently with Gbuzz -- one of them very positive, the other seemingly quite problematic.

First the good part. Following in Google's tradition of promoting open standards, Gbuzz has reportedly been created to be an open platform that will have API-based conduits for third-party apps. So all manner of interfaces can flower. Excellent.

Now for the not so excellent. Gbuzz, being tightly integrated with Gmail, apparently makes the implicit assumption that your frequent e-mail contacts should also automatically be declared as your "friends" for social update sharing purposes, and by default creates automatic "follow" lists on this basis.

Maybe this will work just fine for some people, but man, it might be just plain dangerous for others -- perhaps especially those persons who use a single Gmail account to communicate with both personal friends and business associates. Is routinely updating your business acquaintances with the same information as your personal contacts typically appropriate? Doubtful.

To be sure, you can manually drop specific Gbuzz "friends" from your list. Well, sort of. I didn't see obvious analogues in Gbuzz for Twitter's "block" or "lock" functions, and on Day Zero I already seem to have a number of mysterious "no profile" anonymous "followers" -- whom I can't seem to identify or delete in any way. Who are they? I don't know! Hmm.

Of primary concern of course is the risk that users will inappropriately share specific information in compromising, embarrassing, or perhaps even hazardous ways, by not being fully cognizant of whom they're actually sharing with at any particular time. The Google Reader/Google Chat sharing assumptions have already been known to cause some users problems, and the Gbuzz tie-in to Gmail would appear to expand the universe of potential similar issues extensively.

There are counter-arguments. Google's sharing options are off unless you activate them, and you're under no obligation to actually use Gbuzz no matter how much you use Gmail. And it could be argued that people who want to share should be diligent about pruning their friend lists -- especially automatically created friend lists!

But overall, my gut feeling is that, however much Google wanted to encourage social networking within their product mix, the default algorithm for friends selection in Google Buzz is wrong.

There should be a much more aggressive procedure to ensure that users have vetted each "automatic friend" that Gbuzz adds to sharing lists. Unless users specifically choose to waive such confirmations, their individual e-mail correspondents should not be added to friend lists without affirmative approval in each individual case.

As I've said many times before, defaults really do matter. I hope that Google will reconsider the defaults that apparently are currently implemented in Google Buzz.

--Lauren--

Posted by Lauren at 01:39 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 05, 2010

The FBI Wants Access to Your Web Browsing Records

Greetings. For years I've talked about the bizarre conflict between calls to rapidly delete or anonymize data that could be used for abusive tracking of Internet users, vs. calls from other quarters -- mostly in law enforcement -- for extended retention of such data.

Sometimes different divisions of the same governments are pulling on opposite ends of this particular issue.

So at the same time that Google, for example, has made excellent strides in limiting the retention periods for non-anonymized tracking data (such as IP addresses), we see pressures rising from police agencies pushing in exactly the opposite direction.

Now this conflict has become even more explicit, with word that the FBI has been pressuring ISPs to maintain two years of user Web browsing data -- something that -- to the ISPs' credit -- no major U.S. ISP is thought to be currently doing.

Similar pressures -- including calls for explicit laws to require such retention -- have also been spewing forth from other law-enforcement-related organizations for quite some time, with the usual claim that c-porn investigations (somehow this usually seems to be listed ahead of terrorism concerns) justify the creation of a massive Internet activity records surveillance regime.

Right now the focus appears to be on origin and destination IP addresses, which ISPs can easily capture on any direct connection (including https: encrypted connections), to the extent that proxies are not in use.

But a bit of mental exploration illuminates why the proponents of mass Internet data retention will never be satisfied with IP addresses alone.

Let's think about why.

First, most Web sites are actually "virtual hosts" -- meaning that hundreds, thousands, or even more individual Web sites may be served on the same destination IP addresses.

For surveillance records to be useful, it is certain that authorities would want to know exactly which sites, and in many cases ideally which specific URLs, were being accessed.

Unless deep packet inspection (DPI) were employed to spy on unencrypted traffic (or sophisticated man-in-the-middle techniques were attempted against encrypted traffic when practicable), the obvious means to determine specific site and URL information would be server-side logs.

That is, authorities would need to go to the operators of the Web servers in question and request or demand the logs that showed which sites had been accessed at particular times. These same logs would typically provide URL information as well.
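
To make this concrete, here's an invented but representative example of a single entry from a standard Web server access log -- note how much more it reveals than the bare source and destination IP addresses visible to an ISP:

    192.0.2.44 - - [05/Feb/2010:20:13:02 -0800] "GET /forums/health/depression-support HTTP/1.1" 200 15213 "http://www.example.org/forums/" "Mozilla/5.0 (Windows NT 6.1)"

The exact page requested, the time of the request, the referring page, and the browser in use are all right there, waiting to be matched against ISP subscriber records.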

Combine this with ISP-provided source and destination IP address data, and ISP mappings of which subscribers were assigned to particular dynamic IP addresses at any given point in time, and you have everything you need to reduce the privacy of typical Web browsing to the level of postcards on parade. So passing ISP data retention laws or otherwise strong-arming ISPs into maintaining the data of interest won't do the trick alone -- you need to force every public Web site to similarly maintain log data and make it available to authorities on demand.

But wait a minute. We know that simple IP addresses can't themselves be relied upon to pinpoint individuals, even in the same household. And wouldn't people who didn't want to be tracked learn to rely on proxies, public Internet access points in libraries and coffee shops and ...

Hmm. How to box in those freedom-loving would-be criminal types?

Perhaps that's where Microsoft's Craig Mundie, who as I noted a few days ago is pushing for an Internet "Driver's" License, can help achieve a totality of Internet surveillance nirvana.

Any sort of "Internet User License" concept would be fraught with many more technical and infrastructural complexities than the "simple" data retention requirements discussed above, and would also be subject to various workarounds by the savvy.

But some relatively definitive means to identify individuals -- as opposed to only identifying Internet connections themselves -- would seem to be an ultimate Internet surveillance requirement. Anonymous Internet usage would otherwise increasingly undermine the ability of retained Internet connection records to provide the necessary raw meat for the sorts of surveillance society activities that are being propagandized as necessary for society's survival.

Internet surveillance proponents will attempt to claim that -- at least for now -- all that they really want is the Internet equivalent of called telephone number records.

Don't you believe it. The Internet has become integral to virtually every aspect of our lives. The spread of Cloud Computing -- a technology with enormous positive potential if appropriately managed and protected -- will further wed us all to distant servers.

The Internet sites and URLs that we visit, and the associated data that we send and receive, can reveal everything from the day-to-day trivia of our lives to our deepest passions and fears. Our personal, economic, political, and virtually every other aspect of our existence can increasingly be directly or indirectly discerned from the pulsing of our broadband connections.

The ability of Internet users to confidently trust the organizations and instrumentation of the Internet -- everything from ISPs to Web services themselves -- is not only a matter of faith in those specific entities' own veracity, but also a question of knowing that those enterprises will not be corrupted, blackmailed, or otherwise forced into the role of surveillance operatives at the behest or demand of potentially well-meaning, but still overzealous, law enforcement agencies.

Crime, terrorism, and the other evils of society are dark enough specters without attempts to control them shunting us into a different sort of nightmare.

Benjamin Franklin's now oft-quoted admonition that, "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety" has never been more relevant.

In the calls for steps toward a Surveillance Internet, we can hear the echoes of past governments who promised their citizens law and order, and in the process marched them down the path of good intentions directly into figurative Hells on Earth.

We won't be fooled again.

Will we?

--Lauren--

Posted by Lauren at 09:01 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 04, 2010

Google (and Lauren) Meet NSA

Greetings. I woke up this morning to find my inbox flooded with concerned notes regarding a reported agreement being negotiated between Google and NSA -- the National Security Agency ([1] and [2]).

The general trend of the messages, mostly from the same people who routinely treat me to rather paranoid anti-Google tirades, was largely along the lines of, "Here's another reason not to trust big, bad Google with our data."

I have no information beyond what has been published publicly regarding either this reported agreement or the Chinese-based attacks that are apparently the direct catalyst for the exploration of such an arrangement.

But I can explain why I'm not particularly concerned about this "partnership," so long as Google is being sufficiently careful and compartmented -- which I strongly suspect they are.

Older generations of NSA operatives are no doubt somewhat bemused by the openness with which the agency is discussed these days. Years ago, the official existence of "No Such Agency" was purposely kept so publicly nebulous that conference attendees from the agency routinely wore name tags only identifying their organization as "Department of Defense."

My first direct contact with NSA occurred many moons ago. I was sitting at a rather rickety CRT display in the UCLA ARPANET computer room, hacking at Unix OS code. A coworker popped his head into the noisy room, and announced that "two guys from NSA have shown up and want to speak to you."

Hmm. A quick mental review didn't reveal any recent felonies that might be of particular interest to the pair, so I popped out into the quiet of the Boelter Hall basement hallway.

And sure enough, there awaited a couple of polite young men in dark suits holding notepads. Fascinating.

As it turned out, they had come to ask for software advice. At that point in time, before the widespread availability of terminal-independent programming libraries like "termcap" and "termlib," I was something of the point man for ports of a particular Unix application to different terminal environments.

The NSA team wanted to talk about that application and some of the related porting issues -- and we had a nice chat. I wondered at the time why they hadn't just called or sent an e-mail -- I was LAUREN@UCLA-SECURITY back then and easy enough to reach. But maybe it was like the "hovercraft" guy in the current Orbitz commercials, who flies around hand-delivering refund checks because, what the hell, "We have a hovercraft!"

Years later, I discovered that NSA had become interested in my experiments with Unix-based newswire data collection and indexing, but that's another story.

The above was a long way of saying that NSA is both a premier R&D institution and a signals intelligence (SIGINT) data collection and analysis organization.

That various serious abuses, both long past and quite recent (at least the ones that have come to public light), have occurred in the latter aspect of NSA is well documented -- James Bamford is the recommended starting point for interested readers new to the NSA sagas.

Yet it's undeniable that NSA represents the nation's most concentrated resource relating to cryptography and what now seems to be popularly called anti-cyberterrorism.

Controversies associated with NSA's involvements even in these regards have certainly been recurring facts of life -- NSA roles in the development of cryptosystems such as DES and AES are well-known examples. Recent over-enthusiasm by some members of Congress for proposals to establish direct NSA involvement in the day-to-day aspects of Internet security has justifiably raised significant privacy and other concerns.

But the fact still remains that the expertise represented by NSA in the computer security field is unparalleled in key contexts, and it is utterly reasonable that Google (and other technology firms) would consider carefully structured associations with NSA in the existing environment.

The devil is in the details, naturally. But Google knows that the continued patronage of their users is integrally associated with those users feeling confident that their data is safe from abuse.

I cannot visualize a circumstance under which Google would voluntarily agree to any partnership with NSA that could possibly marginalize or jeopardize that confidence. Of course -- and speaking only theoretically -- if Google were forced by governments to involuntarily cooperate with privacy-invasive schemes, we'd be faced with a whole different class of serious problems way outside the scope of the current discussion, and with far-reaching consequences for our democracy. But (based on all available evidence, one hopes) that's not where we are today.

It would however be extremely useful for Google to make as much information as possible publicly available regarding any association with NSA. At least the outlines of any data sharing arrangements should be announceable without negatively impacting operational effectiveness. A sustained lack of information in this regard tends to fuel the kinds of conspiracy-focused rumors that just love a vacuum.

NSA is perhaps a quintessential example of a government agency that exists as a double-edged sword. Properly directed and harnessed, its resources for our positive protection are vast. But if "running amok," NSA possesses at least equal potential for civil liberties abuses on a massive scale.

It makes perfect sense for Google -- like various other firms -- to work with NSA toward better understanding and prevention of cyberattacks, so long as sufficient NSA isolation from Google user data is guaranteed.

But to use the vernacular, when dancing with Godzilla, it's always a really good idea to plan out your steps very, very carefully in advance -- for you never, ever want to find yourself underfoot!

--Lauren--

Posted by Lauren at 10:42 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 02, 2010

A Family's Horror -- and the Role of Google Images

Greetings. I'm about to pose some difficult questions. I won't assert that I know the answers to them all or even suggest that succinct answers are possible. But the questions themselves cut to the heart of some of the most contentious and emotional ethical issues of the Internet today.

A California appeals court has just unanimously ruled that a lawsuit may move forward against the California Highway Patrol, related to horrific imagery of an 18-year-old girl decapitated in a traffic accident. The photos were allegedly forwarded by one or more on-scene CHP officers to another party, and then spread widely across the Internet.

The victim's family has been trying for years to hold the CHP responsible for the dissemination of these images, and to somehow reduce the impact and exploitation of these nightmarish photos and the associated hateful abuse that has spread across the Net. Many of the sites exploiting these images attempt to portray themselves as "educational" in nature, but in reality most are merely purveyors of what the film industry calls "torture porn" -- except that in this case they're dealing with the horrific death of a real person, not fictional characters and special effects.

Regular readers know that I'm firmly opposed to censorship and have praised Google's recent commitment to cease censorship of Google search results in China.

I have also suggested in the past that some sort of "dispute resolution" mechanism -- to deal with unusual or exceptional situations triggered by search engine results -- would be worthy of both consideration and debate. If you have a few minutes to spare, here is a pointer to some discussion of this issue.

So it's with some consternation that I consider the easy availability of the accident photos in question being facilitated via Google Images.

A simple search on the victim's name in Google Images yields seemingly endless copies of the exceedingly gruesome photos, even when Google SafeSearch is set to its most strict setting.

Let's be very clear. I'm not suggesting that the photos be banned. And indeed, Google is merely indexing and archiving imagery that is by definition actually posted and hosted at external sites not under Google's control.

But even given these facts, would it be fair to say that Google has no role to play in the exploitation and monetization of these images, and in the continuing grief that they cause the victim's parents and other family members?

Again, Google isn't the creator or poster of the photos in question. But Google is almost certainly the primary mechanism through which the vast majority of persons discover and locate these images.

There are some relatively simple ameliorative steps that I'd suggest in this specific case.

Google could take a more proactive stance to avoid having such images so openly displayed when not in completely unfiltered SafeSearch mode. My hunch is that flagging most of these specific accident photos as they are posted -- even on an ongoing basis (based on keywords and Google's advanced image analysis algorithms) -- would be relatively straightforward given Google's resources.
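
Google's actual image analysis systems are of course proprietary, but to illustrate that recognizing recirculated copies of a known image is a well-understood problem, here's a minimal sketch (in TypeScript) of one classic technique -- perceptual "average hashing" -- assuming the image has already been decoded and downscaled to an 8x8 grayscale array:

    // Compute a 64-bit perceptual "average hash" of an 8x8 grayscale image.
    // Near-duplicates (resized or recompressed copies) yield hashes that
    // differ from the original's in only a few bits.
    function averageHash(pixels: number[]): bigint {
      const mean = pixels.reduce((a, b) => a + b, 0) / pixels.length;
      let hash = 0n;
      for (const p of pixels) {
        hash = (hash << 1n) | (p > mean ? 1n : 0n);
      }
      return hash;
    }

    // Count differing bits between two hashes; a small distance suggests
    // a near-duplicate of an already-flagged image.
    function hammingDistance(a: bigint, b: bigint): number {
      let x = a ^ b;
      let bits = 0;
      while (x > 0n) {
        bits += Number(x & 1n);
        x >>= 1n;
      }
      return bits;
    }

A production system would use far more robust fingerprinting than this, but the point stands: once an image has been flagged, spotting its endlessly reposted copies is an engineering problem well within Google's reach.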

More broadly, this case brings into focus a class of issues representing extremely difficult ethical dilemmas that often aren't subject to improvement through engineering alone.

Censorship is not only dangerous but essentially impossible to completely enforce on the Internet. A single copy of a text or photo (or musical performance or feature film, for that matter) posted on the Web is likely to publicly survive in some form into technological perpetuity. That's the reality, like it or not.

On the other hand, it can be argued that Google and other aggregators of indexing information and links do bear some ethical responsibility to try -- within the bounds of common sense, free speech, and technical practicality -- to help avoid the widespread dissemination of exceptionally hurtful and damaging materials in unfiltered search result contexts.

In other words, it really should not be so easy to stumble across photos of a decapitated 18-year-old girl when Google Image search results are in a strict filtering mode.

At the macro level, to say that dealing with such issues is a dilemma presenting major scaling challenges is a significant understatement. But as I've earlier noted, there are a wide variety of situations where the algorithmic precision of search engine rankings can do real and completely unwarranted harm to actual people.

Which brings us to perhaps the most important question associated with this entire topic. From both technical and ethical standpoints, can we honestly say that it's unreasonable or impossible to research and deploy steps that would help prevent thoughtless acts conducted over the course of a few minutes -- like the alleged sending of those accident photos by CHP officers -- from endlessly dragging other persons through a living hell?

Not censorship. Not a ban. Not new laws.

Rather, just doing a better job at further extending ethical considerations to search, in a fusion of software engineering and humanism.

If we instead choose to insist that this cannot be accomplished, we're eerily invoking the lyrics of Tom Lehrer's comedic critique of German/U.S. rocketry pioneer Wernher von Braun: "'Once the rockets are up, who cares where they come down? That's not my department,' says Wernher von Braun."

As Lehrer sang them, many years ago, the words were very funny indeed.

In the real world of the Internet, these ethical issues are both difficult and serious -- but I believe subject to reasonable and effective resolution, given the will to do so.

I can think of no organization better positioned and suited than Google to be in the vanguard of this important area. I trust that they are up to the challenge.

--Lauren--

Posted by Lauren at 09:17 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


February 01, 2010

Microsoft's Police State Vision? Exec Calls for Internet "Driver's Licenses"

Greetings. About a week ago, in Google and the Battle for the Soul of the Internet, I noted that:

Even here in the U.S., one of the most common Internet-related questions that I receive is also one of the most deeply disturbing: Why can't the U.S. require an Internet "driver's license" so that there would be no way (ostensibly) to do anything anonymously on the Net?

After I patiently explain why that would be a horrendous idea, based on basic principles of free speech as applied to the reality of the Internet, most people who approach me with the "driver's license" concept seem satisfied with my take on the topic. But the fact that the question keeps coming up so frequently shows the depth of misplaced fears driven, ironically, by disinformation and the lack of accurate information.

So when someone who really should know better starts to push this sort of incredibly dangerous concept, it's time to bump up to orange alert at a minimum, and the trigger is no less than Craig Mundie, chief research and strategy officer for Microsoft.

At the World Economic Forum in Davos two days ago, Mundie explicitly called for an "Internet Driver's License": "If you want to drive a car you have to have a license to say that you are capable of driving a car, the car has to pass a test to say it is fit to drive and you have to have insurance."

When applied to the Internet, this is the kind of logic that must gladden the heart of China's rulers, where Microsoft has already announced their continuing, happy compliance with the country's human-rights-abusive censorship regime.

Dictators present and past would all appreciate the value of such a license -- let's call it an "IDL" -- by its ability to potentially provide all manner of benefits to current or would-be police states.

After all, a license implies a goal of absolute identification and zero anonymity -- extremely valuable when trying to track down dissidents and others whose free speech a regime finds undesirable. And while the reality of Internet technology suggests that such identity regimes would be vulnerable to technological bypass and fascinating "joe job" identity-diversion schemes, criminal penalties for their use could be kept sufficiently draconian to assure that most of the population would kowtow compliantly.

I used the term "police state" in the text and title above, and I don't throw this concept around loosely.

The Internet has become integral to the most private and personal aspects of our lives -- health, commerce, and entertainment to name just a few on an ever expanding list. While there are clearly situations on the Internet where we want and/or need to be appropriately identified, there are many more where identification is not only unnecessary but could be incredibly intrusive and subject to enormous abuse.

And I might add, it is also inevitable that serious crooks would find ways around any Internet identification systems -- one obvious technique would be to divert blame to innocent parties through manipulation and theft of associated IDL identification credentials.

It was perhaps inevitable that the same "Hide! Here come the terrorists!" scare tactics used to promote easily thwarted naked airport scanners and domestic wiretapping operations -- not to mention other PATRIOT Act and Homeland Security abuses -- are now being repurposed in furtherance of gaining an iron grip on the Internet, the communications technology that enables the truly free speech so terrifying to various governments around the world.

It's true that some persons advocating police state IDL concepts are not themselves in any way inherently evil -- they can for example be well-meaning but incredibly short-sighted.

However, I would be less than candid if I didn't admit that I'm disappointed, though not terribly surprised -- especially in light of Microsoft's explicit continuing support of Chinese censorship against human rights -- to hear a top Microsoft executive pushing a concept that is basic to making the Internet Police State a reality.

In the final analysis, evil is as evil does.

--Lauren--

Posted by Lauren at 03:57 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein