June 28, 2011

My Interview Re FTC Antitrust vs. Google & Supreme Court Video Games Decision ("C2C AM" Radio Last Night)

I had a great hour last night back on Coast to Coast AM radio, discussing both the FTC's just-announced antitrust investigation of Google and the Supreme Court's recent decision striking down California's ban on the sale of "violent" video games to minors, plus some additional topics.

The audio from that interview is available at:

Supreme Court Video Games Ruling & FTC Antitrust vs. Google ("Coast to Coast AM" Radio - 6/27/11)
(YouTube [Audio Only] / ~33 minutes)

The individual show segments are available as:

Supreme Court Video Games Decision

FTC Antitrust vs. Google

Listener Calls

The Wired article Coast to Coast AM Is No Wack Job is a useful backgrounder if you're not already familiar with the show.

Thanks.

--Lauren--

Posted by Lauren at 01:01 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 26, 2011

Google Ads, Antitrust, and Sour Grapes

A couple of days ago, in "J" is for Jealousy: FTC Investigating Google, I suggested that the recently announced FTC antitrust investigation of Google is being driven largely by jealous competitors, and that plenty of competition -- trivial to quickly access -- is available if Google is not your choice.

Today in part two on this topic, let's look at the issue that generated a number of angry "But what about Google Ads? That's where the money is!" retorts in my inbox.

Google's ad serving infrastructure is indeed by far the primary revenue source for the firm. Google pioneered the entire concept of automated Web ad auctions, placements, and associated search and keyword based ecosystems.

So let's explore some of the "popular" accusatory complaints regarding Google and advertising.

--- Complaint: "Google favors its own products in search results" ---

This actually breaks down into two areas -- paid ("Ad") results and natural ("organic") results.

When it comes to Ad placements, it seems completely reasonable that Google would wish to promote its other products and services. The last time I walked into a Home Depot or CVS, their promotions were all for their own array of goods and other offerings, not for competitors such as Lowe's or Walgreens!

So an ad text box at the top of specific, relevant search results noting that, for example, Google Maps is available, doesn't strike me as inappropriate or unfair, so long as other well-placed ad slots are available, and natural search results are honest and fair -- and Google makes an enormous effort to assure the legitimacy of their organic results, even in the face of continuous external "black hat" SEO (Search Engine Optimization) efforts by other firms to manipulate them.

And as far as natural results are concerned, how can we blame Google if their products organically rise to the top?

For example, I just did a simple, no Web History, not-logged-in query for "maps" on Google Search. The top natural result is Google Maps, followed by Yahoo! Maps and MapQuest, with no paid ad above at all. If I do the same search on Google News, the top result at the moment is a political story about North Carolina, with a paid Bing Maps ad above the natural results.

Now let's go over to Microsoft's Bing and do the same search. The organic results: Google Maps on top, then Yahoo! Maps, then a Bing Maps image of Santa Clarita, California (not where I am!), then MapQuest.

Notably, Google Maps is (at this moment) the top natural result for both Google Search and Bing Search. Since Bing seems unlikely to favor Google Maps for any nefarious reason, the conclusion seems clear that for the rankings right now, Google Maps comes out on top, even as determined by Google's main search competitor.

Google, Bing, and other major search engines work diligently to assure the veracity of their organic results. To not do so would be enormously risky and self-destructive. Claims that natural, organic results are inappropriately favoring Google (or Bing for that matter) are just patently unreasonable and untrue.

--- Complaint: "Google collects too much information from their ads and keeps it forever" ---

The nature of Web and Internet technology dictates that servers are provided with significant data related to user connections and activities. Google's services cover a wide range, and so Google does receive a great deal of data.

Of course, your ISP has access to every unencrypted byte that you send or receive -- commonly unsecured file transfers, P2P activities, Voice over IP calls, and everything else -- in most cases including knowledge of every Web site and every URL that you visit.

The few giant, dominant U.S. ISPs have access to vastly more comprehensive data on most Internet users than any single other firm, including Google.

Ultimately the issue is, do you trust any given entity to handle your data appropriately? Finding ISP policies on deep packet inspection (DPI) and related data retention issues can be a challenge.

On the other hand, Google lays out very clearly how long they retain various kinds of data, and (in direct challenge to the "they keep everything forever" meme) their schedules for data anonymization and deletion, which generally seem to strike a good balance between protecting users and allowing for reasonable use of data for security, quality assurance, and R&D purposes.

If you want something to worry about, spend some time thinking about governments' efforts around the world, including here in the U.S., to mandate non-anonymized data retention for "on demand" access by law enforcement and other agencies.

For that matter, it's certain large ISPs, not Google, that have been found in the past to be providing user data to various government organizations on a "nod and a wink" basis, while Google has openly battled overly broad and legally suspect government data demands.

Now, if someone is going to simply assert that Google is outright lying about how they handle data (which would be an incredibly stupid thing for Google to do, and Google people aren't stupid), I'll gladly point you at conspiracy-oriented Web sites that you might enjoy, explaining how the moon landings were faked and the concept of transistors was stolen from a crashed UFO. Happy paranoia.

--- Complaint: "Google Has a Web Ad Monopoly" ---

Google serves a lot of ads on a lot of sites. But a monopoly? Uh, no. In fact, even a casual look around the Web, including major sites of all sorts, shows an incredible array of dedicated and shared ad availabilities and ad networks that are totally unrelated to Google.

Google doesn't have a gun to anyone's head, forcing them to buy or use Google ads. The fact that so many Web sites and parties wishing to place ads have chosen to use the Google ad networks is a function of the perceived value of those ads and conscious decisions to not similarly patronize other ad systems and networks. If ad buyers chose to focus their advertising dollars elsewhere, Google competitors would grow even larger.

In other words, advertisers perceive the Google ad systems as providing the most "bang for the buck" and as being the most desirable, and specifically choose to buy Google ads tied to various Web sites and/or Google search results, instead of buying particular ads from the many available Google competitors who can also provide excellent advertising opportunities on a vast array of sites.

Then, because so many advertisers have made the same choice -- because everyone naturally wants to be top dog -- we hear loud "sour grapes" complaints since obviously not every advertiser can have their listings and ads at the apex of results for any given Google search.

Just as users can instantly switch between the Google and Bing search services, advertisers can similarly "vote with their dollars" and use non-Google ad services. Those Google competitors run enormous numbers of ads and they'll be happy to work with you. If you're concerned about Google having too large a chunk of the ad market, then support those alternatives.

I can't help but sense an undercurrent of greed in many of the complaints about Google that have led toward the current FTC investigation.

Some Google competitors are upset because users and advertisers have simply found Google's search, ads, email, and other services to be superior. Perfect? Of course not. On the scale that Google operates, even a tiny percentage of users having problems is going to be noticeable, and Google still needs very significant improvements in its user communications and support structures.

But to an extent unparalleled in the history of human knowledge and commerce, the ability of persons to easily and quickly choose among competing services, literally with a few quick finger movements on a keyboard and mouse, is a major part of what makes the Internet such a marvel.

The vast majority of Google users of all stripes are extremely happy with its services, and those who aren't can switch with ease.

The jealousies and sour grapes of Google competitors, other adversaries, and in many cases greedy advertisers appear to have been the primary triggers behind the FTC's new investigation of Google.

That this has occurred in today's toxic political environment is discouraging, but unfortunately not at all surprising. The weaponry of economic and political destruction is all too easily wielded now, with little concern for potential collateral damages.

In the case of FTC vs. Google, we will see if saner heads prevail in the end.

--Lauren--

Posted by Lauren at 11:40 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 25, 2011

DNS + DANE = Dumb, Dumber, Disaster - Or - How to Wreck Secure Internet Communications

In response to some comments I made earlier today regarding the limitations of DNSSEC, several people asked for my thoughts on a proposed extension to DNS called "DANE" (DNS-based Authentication of Named Entities), which effectively has DNSSEC as a prerequisite.

The idea of DANE is to use a "secure" DNSSEC environment to publish and validate the digital certificates (or certificate hashes) required for secure host-to-host communications -- the certificates that enable what is commonly called "SSL/TLS/https:" data transfers.
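(For the technically curious, here is a minimal sketch of what a DANE-style check boils down to, written in Python with the dnspython library and a purely hypothetical domain: fetch the "TLSA" certificate-association record that DANE would have a site publish in DNS, then compare it against the certificate the server actually presents. The record parameters shown are just one common combination, not the only possibility.)

```python
# Minimal, illustrative DANE-style check -- NOT production code.
# Assumptions: the dnspython package is installed, and the hypothetical
# host "example.com" publishes a TLSA record with usage=3 (DANE-EE),
# selector=0 (full certificate), matching type=1 (SHA-256 of the DER cert).
import hashlib
import ssl

import dns.resolver

host, port = "example.com", 443   # hypothetical service, for illustration only

# 1) Look up the TLSA record that DANE stores in DNS for this service.
#    In real DANE this answer must itself be validated via DNSSEC --
#    which is precisely the dependency being criticized here.
answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")

# 2) Grab the certificate the server actually presents, and hash its DER form.
pem_cert = ssl.get_server_certificate((host, port))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
cert_sha256 = hashlib.sha256(der_cert).digest()

# 3) Compare the live certificate against the DNS-published association.
for tlsa in answers:
    if (tlsa.usage, tlsa.selector, tlsa.mtype) == (3, 0, 1) and tlsa.cert == cert_sha256:
        print("Server certificate matches the TLSA record published in DNS.")
        break
else:
    print("No matching TLSA record -- a DANE validation would fail here.")
```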

Oh yeah, DANE is just a, uh, "dandy" idea - IF your goals are the following:

1) Make virtually all common Internet secure communications dependent on the structurally obsolete DNS/DNSSEC model, thereby further entrenching the domain-industrial complex and the enrichment of its minions, by giving even more power to ICANN, registrars, and registries, etc.

2) Assure that the world's secure communications infrastructure (PKI) is easily and directly vulnerable to the same sorts of government overreaching and abuses that have characterized U.S. takedowns of domains around the world -- including vast numbers of innocent domains -- usually without significant due process, consultation, or adherence to the rights of either domestic or international domain owners.

So yes, if you enjoy watching the shenanigans of the current "DNS Mafia" and government malfeasance directed at the Domain Name System both domestically and internationally, and you want to see them anointed with more riches and puissance, you're gonna just love DANE.

Sign up now. Don't forget the cyanide-laced Kool-Aid for later.

--Lauren--

Posted by Lauren at 10:49 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 24, 2011

"J" is for Jealousy: FTC Investigating Google

Blog Update (June 26, 2011): Google Ads, Antitrust, and Sour Grapes



In a move anticipated for quite some time, the U.S. Federal Trade Commission (FTC) has opened an "antitrust" investigation of Google, as noted on the Official Google Blog this morning.

Media coverage of this event has been fascinating, especially since text along the lines of "what Google's opponents and competitors have been hoping for" seems to be featured prominently in many reports.

Which yields a question. What is driving antitrust concerns in this case?

Comparisons with major technology antitrust actions of the past don't seem to provide many useful answers.

When AT&T was broken up decades ago (now rapidly recreating its old power, but that's another story), Ma Bell was clearly a monopolist, and engaged in various unsavory behaviors in an attempt to preserve its position. AT&T owned the phones, the lines, the conduits, the tunnels, the buildings, the whole shebang. Alternatives for most people were simply nonexistent, and when alternatives did exist, they were extraordinarily expensive to access.

Antitrust actions against Microsoft many years later -- the firm most mentioned now in comparison to Google when discussing such investigations -- were focused on clearly anticompetitive behaviors by Microsoft, who used Windows' integration with PC hardware by manufacturers, and a tight coupling with its Internet Explorer browser, to almost totally dominate the PC marketplace in a manner that was both technically difficult and economically impractical for most users to escape.

These examples demonstrate a key aspect of traditional antitrust targets -- you need to be both big and bad. Organic growth alone, in the absence of anticompetitive behaviors, should not be enough to trigger serious antitrust penalties.

Of course, if you're a competitor who has failed to innovate fast enough to keep up with the market leader in the eyes of consumers, you may grasp at any excuse to try to drag down your perceived corporate foe.

It's impossible to ignore the fact that such jealousies by Google competitors and other adversaries appear to have played major roles, directly or indirectly, in the FTC's newly announced action.

For years now, we've seen both direct and "astroturfed" attacks on Google that have been noticeable for their exaggerations and outright misrepresentations more than anything else. Some have just been lies, plain and simple.

A fundamental problem with attacking Google in this way (outside of poor ethics on the part of many attackers) is the ease with which Internet users can switch to Google competitors at any time.

In the heyday of Ma Bell, even if competitors had existed, the expense of changing hardware, new cabling and lines, and so forth, would have been extreme.

The vast majority of Microsoft's Windows users were not in a position to rip the Microsoft OS or Internet Explorer browser out of their PCs and install an entirely new OS, even if they had realized that alternatives existed. And since they had usually already paid for Windows -- either directly or as an amount "hidden" in the overall cost of their computers -- they'd be flushing money down the john as well.

In contrast, most Google users pay nothing to use Google services. If they don't want to use Google Search, access to Microsoft's Bing is just a URL and a click away. Don't like Gmail? You can easily switch to Yahoo. Looking for an alternative to Google Street View? You can be viewing Bing Maps in less time than it takes to read this sentence. And Google has "data liberation" teams that work specifically to make it easy to export your data from Google if you want to take it elsewhere.

The anticompetitive, "lock in" characteristics that we normally associate with antitrust investigations and actions simply aren't present with Google.

So we end up back with "J" is for Jealousy -- with competitors who have been pushing the government to help them make up for their own inability to build systems and satisfy Internet users to the degree that Google has.

There are many people and organizations in this country that need and deserve the government's help right now -- for food, housing, health care, you name it.

But in an age when so many genuinely need help, it's difficult to reconcile attempts by Google's competitors to obtain what amounts to "corporate welfare" from the federal government in the guise of an antitrust investigation -- it just doesn't seem right.

Hopefully, the FTC will ultimately realize this as well.

--Lauren--

Blog Update (June 26, 2011): Google Ads, Antitrust, and Sour Grapes

Posted by Lauren at 01:53 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 19, 2011

The Shame of the Internet: ICANN Votes to Massively Enrich the Domain-Industrial Complex

As expected, ICANN has voted overwhelmingly to approve their disgraceful plan for a vast increase in generic top-level domains (gTLDs).

Some observers are expecting hundreds of millions of dollars to be spent quickly in the resulting environment, thanks to the associated "gold rush" and "buy protection for your brand" mentalities being explicitly promoted.

I suspect that is a lowball estimate. I believe we may see billions of dollars being wasted in ICANN's new gigantic gTLD "domain name space" -- mostly from firms hoodwinked into thinking that new domain names will be their paths to Internet riches, and from firms trying to protect their names in this vastly expanded space, ripe for abuses.

This massive money flow will funnel overwhelmingly to the relatively few entities -- mainly registries, registrars, and ICANN itself -- at the top of the "domain-industrial complex" pyramid.

The negative impacts of this fiasco on ordinary consumers and Internet users will ultimately become all too clear, as the resulting effects of massively increased cybersquatting, spamming, and phishing take hold.

But apart from that, with the world still in the grips of an economic crisis that threatens to become desperately worse at any moment, the ethically vacuous nature of this entire plan is obvious.

Could all or part of that money just perhaps be used in better ways than for the creation and maintenance of an artificial "must buy whether you want it or not" form of "domain names" product -- that does absolutely nothing to advance or solve the many crucial technical, policy, blocking, neutrality, censorship, and free speech issues that are at the forefront of the Internet today -- a "product" that may actually exacerbate blocking and censorship?

Has the horrific economic saga of the last few years taught us nothing? Is there no sense of ethical or moral outrage among those persons who are truly concerned about creating the best possible future for the entire Internet and Internet community, not just for a comparatively few "domain exploitation" tycoons and would-be tycoons?

Do we care enough to consider alternative approaches? Or will we, as usual, sit by and watch perhaps mankind's most important communications tool in history be further subverted for the benefit of the few?

We shall see.

--Lauren--

Posted by Lauren at 11:57 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 18, 2011

Social Media and the Face (Recognition) of a Riot

I'll admit it, I've never been a sports fan. Many, many moons ago, back in the 60s when Sandy Koufax was with the Dodgers here in L.A., the World Series actually managed to attract a wee bit of my attention. In 1984, when the Olympics came to town, I went out to observe the spectacle of the global media spotlight pass directly in front of me down the center of familiar streets.

That's about it.

So it's difficult for me to fathom the rationale for sports riots, especially when you're wrecking your own home town. Even more remarkable is the fact that these riots can occur whether local teams lose or win major games!

My suspicion is that much of the time the riots themselves are simply convenient excuses for looting and other mayhem, but I've never studied the topic, so perhaps there are deep psychological reasons for such abominable mob behavior, of which I'm unaware.

The recent riot in Vancouver, Canada following a Stanley Cup loss has now triggered a number of controversies relating to social media and facial recognition systems.

Word is that law enforcement personnel are using social media like Twitter and Facebook to try to identify rioters, and now we've heard that the Insurance Corporation of B.C. is offering the use of its facial recognition software to try to match looters against a database of driver's license and ID card photos.

Recently, in Weiner, Whiners, and Social: Public Means Public!, I suggested that attempting to restrict access to data that is already publicly online just doesn't usually make sense. In More on Google "Search by Image" and Facial Recognition Realities, I addressed some other related issues.

When it comes to which photos or data are reasonable fair game for authorities to use in situations such as that in Vancouver, it can help to break the matter down into several categories.

Photos and such that are already public seem like (if you'll excuse the sports metaphor) a slam dunk. This is clearly the case when the parties involved posted the images themselves. From a functional standpoint, if someone else posts relevant photos publicly, the bottom line seems pretty much the same. Public is public.

The legal situation seems hazier when photos are restricted to particular "friend" groups and similar circles. If authorities interested in seeing such photos request to become a "friend" for that reason alone and misrepresent themselves, the legal issues could become quite significant, and possible violations of sites' Terms of Service agreements would also come into play. On the other hand, since any "friend" with access to the photos could turn around and re-post them publicly (whether legally or not), what was once restricted could easily undergo a status change that might be difficult to fight in a practical sense once it's out there.

As always when it comes to social media, the best rule of thumb is not to post materials -- even to a restricted group -- that you'd be uncomfortable having go public if someone in your group purposely or accidentally leaked it.

And let's face it, sometimes there are photos that probably should not exist in the first place. If you think you're going to be concerned some day by that party shot of you tied naked to the bed next to a bottle of Woolite, perhaps the camera shouldn't have been brought out at all that night.

The controversy over the use of driver's license/ID databases in conjunction with facial recognition systems in Vancouver would seem to revolve almost entirely around the specific laws in place controlling access to and use of that data for broad identification scanning purposes. The taking and filing of these images are not "optional" in any normal sense for the population of Vancouver, and the images would hopefully have been gathered within a detailed context of legitimate uses.

As facial recognition technology improves, and authorities seek to employ these systems in much the same way they would treat crime scene fingerprints and fingerprint databases, it is crucial that all rules be strictly followed, rights respected, and citizens be made aware ahead of time if their facial imagery has morphed into the same law enforcement league as fingerprints or DNA.

Facial recognition systems haven't reached the level of accuracy generally associated with "physical contact" biometrics usually used by authorities today -- at least not yet. But some advanced facial recognition systems and related technologies are already considerably more accurate than most people realize, and they will be getting even better -- much better.

This entire spectrum of systems, including facial recognition, will be engendering an array of complexities that go far beyond the sports riots in Vancouver. We can be sure that these technologies -- all of which have legitimate, worthwhile uses -- will not be successfully suppressed. So we really do need to learn not only how to live with them, but how to use them wisely and appropriately. And the sooner that we can come to terms with that, the better.

--Lauren--

Posted by Lauren at 07:24 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 16, 2011

Why Data Anonymization is Often Preferable to Data Deletion

Earlier today in some other venues, I sent out a brief pointer to a new report from the Ontario, Canada Information & Privacy Commissioner, a study that explains how -- contrary to some recent memes that have received a lot of attention -- it is possible to anonymize data in ways that are not conducive to "re-identification" abuses.

Several readers have asked me why I consider this report to be important, recommended reading. "Why not just delete the data and be done with it?" they're essentially asking.

While it may seem neat and clean to just quickly delete data that has the theoretical potential to be misused, that really is far too simplistic an approach.

A primary way that we learn is by studying our own past. This applies in many aspects of life -- with the sort of data under discussion being only one example.

Web activity log data can be crucial to the forensic analysis of system errors, failures, illicit access events (and attempts) -- all of which themselves may have significant privacy-related implications. If we don't have enough detailed information to study, particularly in terms of event sequencing and interactions over time, solving such problems and protecting against future such events can be extremely difficult, in some cases perhaps impossible.

In the health field, longitudinal (long-term) studies need ways to analyze data in myriad forms and combinations, but obviously, we also want to protect patient privacy appropriately.

Search quality -- finding the things that we want on the Web -- is a rapidly evolving science and art, which would be hobbled in major ways if it were not possible to study the kinds of searches and search patterns in which users engage. Such a "data starved" state of affairs would be to the detriment of search service users in short order.

And these are just a few examples of why quickly disposing of data is in many cases impractical, undesirable, or both.

Fundamentally, to approach this area reasonably we need to consider retained data "life cycles" in context.

There are some situations -- such as an anonymous tip line -- where to operate legitimately no data regarding caller identities typically should be maintained at all.

But in most cases involving conventional Web services, the need to maintain completely intact data (e.g. server log records) progressively decreases with the passage of time, which suggests that an appropriate approach is a defined process of gradually anonymizing various data elements via suitable techniques and algorithms, while still maintaining enough structurally detailed intact and "hashed" data fields to permit continuing analysis and study for as long as possible.
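To make that life-cycle idea concrete, here is a minimal sketch -- plain Python, with entirely hypothetical field names and retention ages -- of the sort of staged transformation I'm describing: entries keep full detail briefly, then have identifying fields truncated or salted-and-hashed, so that usage patterns remain analyzable long after the identities themselves are gone.

```python
# Illustrative sketch of staged log anonymization (hypothetical fields/schedule).
import hashlib
import ipaddress

SALT = b"rotate-me-periodically"  # assumption: a salt rotated on some schedule

def truncate_ip(ip_str):
    """Zero the low bits of an address so it identifies a network, not a user."""
    ip = ipaddress.ip_address(ip_str)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{ip_str}/{prefix}", strict=False)
    return str(net.network_address)

def hash_id(value):
    """One-way, salted hash: stable enough for pattern analysis, useless for naming users."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def anonymize(entry, age_days):
    """Progressively strip identifying detail as a log entry ages."""
    out = dict(entry)
    if age_days > 30:                             # stage 1: drop direct identifiers
        out["ip"] = truncate_ip(out["ip"])
        out["user_id"] = hash_id(out["user_id"])
    if age_days > 180:                            # stage 2: keep only coarse structure
        out.pop("query", None)                    # discard free-text query entirely
        out["timestamp"] = out["timestamp"][:10]  # keep the date, drop time of day
    return out

entry = {"timestamp": "2011-06-16T14:22:31", "ip": "203.0.113.77",
         "user_id": "someuser", "query": "anonymization report"}
print(anonymize(entry, age_days=200))
```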

This is in fact the way that many firms' data life-cycle retention policies do operate.

Having appropriate policies in place to deal with these issues is crucial of course, and needs to extend to longer-term backup and archival aspects as well.

Note, however, that various governments around the world, including increasingly the U.S., have a rather "schizoid" view of these issues, simultaneously pressing for companies to delete various user data, but also to retain much other data (in fully identifiable, non-anonymized forms) to be delivered on demand to government agencies for retrospective surveillance and analysis by law enforcement and intelligence operations.

So we see that, as usual, these are complicated matters, indeed.

But it does seem clear that there are many situations where appropriate, effective data anonymization is not only extremely useful for services and users alike, but obviously superior to simplistic calls for the rapid deletion of data in its entirety.

With the increasing evidence that reports of anonymization's "death" have been (as Mark Twain would have said) "greatly exaggerated," we can continue to move forward toward the best technical and policy approaches for handling retained data -- approaches that maximize that data's potential for improving our lives while simultaneously minimizing the risks of it being abused.

--Lauren--

Posted by Lauren at 04:04 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 15, 2011

Can YouTube Save Democracy?

The stage was something out of a bad dream -- a gaudy combination of the center ring at a circus and the mothership from Close Encounters of the Third Kind. I kept expecting a small clown car to come whizzing on stage at any moment, and for a score of tiny gray aliens to rapidly emerge.

The seven participants arrayed along the platform spouted talking points from carefully memorized scripts, with nary a genuine argument between them, while their interrogator lobbed queries of national and global importance: "Elvis or Johnny Cash?" "Coke or Pepsi?" "Leno or Conan?" "Deep dish or thin crust?" At one point, the discourse deteriorated to an apparent contest between who could boast of having the most children (foster or otherwise).

And somewhere, the late Don Hewitt, who directed the first Kennedy/Nixon debate in 1960, was probably spinning in his grave.

CNN's GOP Presidential Debate last Monday evening was a study in the worst sort of coupling between politics and show business, an entertainment venue masquerading as a news program, Paddy Chayefsky's 1976 Network satire materialized as a sickening reality.

Of course, with seven presidential aspirants competing for air time during the show, the opportunities for real debate were safely suppressed from the word go. And after all, CNN (who spent the weekend prior to the program hawking the expense and complexity of the staging, even showing the construction repeatedly in time lapse) wanted to get the most entertainment-impact bang for their bucks.

Post-debate pundits suggested that none of the participants had too badly screwed up, especially given the low expectations in place. Nobody started stammering in confusion, drooling was kept to a minimum, and fortuitously for the would-be presidents, nobody was asked the airspeed velocity of an unladen swallow -- or about favorite colors, for that matter (how did CNN debate moderator John King miss that last question, I wonder, given his other penetrating inquiries?)

In the end, as expected, the "debate" was more carnival sideshow than pre-presidential performance. The ratings were reasonably rad, but the value was virtually vacuous.

It seems ironic that at this point in the 21st century, we're still mainly relying on this archaic formula for learning about presidential candidates, especially this early in an election cycle where too many cooks on camera really do seriously spoil the broth.

Most candidates have now learned the value of services like YouTube to widely disseminate their own purpose-built "commercials" to potential voters.

Live versions of these services are now becoming mainstream. Devices like Google TV (and others) can easily stream such Internet-based programming to the same screens where CNN and other conventional channels would traditionally be viewed in households.

It would make enormous sense for us to devote considerable effort toward leveraging YouTube and other live Internet streaming systems toward the production of many more, but individually less "crowded," genuine debates, with politicians' feet held at least a bit more to the fire by the real-time feedback of viewers.

This would not be a panacea by any means. Most politicians can't resist the urge to try to turn any camera or mic into an opportunity to parrot their party lines. But there exists at least the real possibility that the elimination of many time and expense issues associated with traditional television would provide far more opportunities for a variety of truly substantive Internet-based "get to know the candidates" debates and other formats, with far less pressure for the sorts of theatrics that turned the CNN "debate" stage into a performance that the casual viewer might have mistaken for a Saturday Night Live sketch.

Our need for actual, meaningful knowledge regarding those persons who seek to lead this country is too important to be left in the hands of "style over substance" productions as exemplified by Monday's sordid CNN spectacle.

And who knows, with the range of opportunities that would be opened through the ever broadening use of YouTube toward the goal of a serious and engaged democratic process, there might even be time for those in-depth interviews where we could ultimately explore such critical queries as preferred pizza crusts, beloved colors, and even the velocity of those unladen swallows -- be they Democratic, Republican, African, or European!

There is still time to save democracy from the creatures in the clown cars.

--Lauren--

Posted by Lauren at 09:13 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 13, 2011

Web "Privacy Themes" Proposal: Reactions and More Info

A couple of days ago, in Web Privacy Is Obsolete! So Now What?, I proposed a framework for the simplification and optimization of Web user privacy preferences on Web sites -- a "Privacy Themes" structure (or more formally, "User Privacy Preference Themes" - UPPTs).

The concept of Privacy Themes is basically to create user-oriented "higher order" groupings and intelligent preference linkage "rules" for the myriad and complex privacy-related settings that increasingly are part and parcel of Web site use today.

The goal of this structure is to minimize users' misunderstanding of privacy preference settings, maximize users' satisfaction with privacy-related defaults and settings choices, and generally to broadly improve the user experience with Web sites within these contexts.

In all honesty, I brought up Privacy Themes in that posting with a bit of trepidation. I've been working on details related to this concept for quite some time, and as I noted in the posting, my many detailed notes on this are not yet in shape for publication, nor am I in a position right now to spend the kind of time needed to bring them up to a standard that I'd consider appropriate for publication.

I mention this because I was surprised by the high level of interest regarding my Saturday posting, and the large number of people asking for details regarding Privacy Themes, wanting to join a discussion mailing list (which does not at this time exist) and so on.

To everyone who wrote me about Privacy Themes, please consider this to be my apology for not responding to you all individually. My hope is that this approach may be of some use to the Internet community, and I'd like to "get it out there" as soon as possible in a useful form.

But given the current situation, I simply cannot deal with this appropriately right now in terms of posting all of the materials, or getting into the kinds of detailed technical and policy discussions related to this that I'd really love to do if other issues didn't have priority at the moment in my life.

I will attempt to individually answer specific questions related to Privacy Themes as circumstances permit. I would like to publicly post additional related materials and start an associated discussion mailing list, but these will have to wait for a while, given the status quo.

Thank you so much for your interest in this proposal. Take care, all.

--Lauren--

Posted by Lauren at 10:42 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 11, 2011

Web Privacy Is Obsolete! So Now What?

Blog Update (June 13, 2011): Web "Privacy Themes" Proposal: Reactions and More Info



"It's all too damn complicated!"

I can't begin to count the vast number of times that people -- and not just non-techies -- have made this comment to me regarding privacy on Web sites (with the word "damn" frequently replaced with significantly more colorful invectives).

Such bitter reactions are understandable. Most folks just want to get about their business of accessing Web sites and services, without feeling that a prerequisite for safe use is a prophylactic graduate course in privacy law -- notwithstanding sites that do make determined efforts to present privacy-related data (e.g. via "dashboards" and other formats) in a comprehensive manner.

Even when users fully understand the terminologies and principles involved, the often tortuous and labyrinthine privacy preference settings can be the "salt in the wound" that causes many persons to throw up their hands in despair.

Faced with such situations, a common reaction is to either just accept the default privacy preferences as is, or, depending on personal proclivities, abandon the involved sites altogether.

Neither of these "all or nothing" reactions is a good one. Users who accept defaults that they later consider to be too "lax" regarding privacy are likely to be quite upset. Users who refuse to even use a site in the first place may be depriving themselves of services that they actually would have found valuable, perhaps in major ways.

Yet Internet privacy issues are complex by definition, and will continue to become increasingly convoluted as newer technologies -- location-based services, face recognition systems, and who knows what else -- come broadly online.

This leads to users often having settings that do not accurately represent the privacy preferences that they had assumed were actually in place.

Recently, in Do-Not-Track, Doctor Who, and a Constellation of Confusion, I suggested that an accurate assessment of Web site privacy parameters actually entailed a multidimensional "constellation" of issues, and that most current ways of looking at Web privacy were actually far too simplistic.

But given that privacy settings today are already frequently far too complex and ephemeral from the users' standpoint, and subject to additions, removals, reorganizations, and other confusions with little or no advance notice to users, how can we possibly consider the necessary additional privacy aspects and interactions that will be key to a reasoned and balanced approach to privacy concerns moving forward?

Even viewed from the standpoint of today's status quo in this area, it's time to admit that the methods we're providing users to control their privacy preferences at most Web sites have become woefully inadequate and obsolete.

Worse yet, the sorts of solutions being touted by various government and other entities -- such as simplistic Do-Not-Track systems -- are virtually guaranteed to take the current situation and make it far worse in many ways.

Attempting to mandate such Do-Not-Track mechanisms to deal with privacy concerns is akin to destroying a beehive with a nuclear bomb. Not only will there be enormous and spreading collateral damage, but an entire range of useful and important attributes associated with behavioral targeting and other technologies will be indiscriminately obliterated in the process, to the ultimate detriment of Web users.

We can do much better.

As a starting point, we need to come to grips with the fact that facing users with a barrage of complex and often interrelated opt-in, opt-out, and other privacy preference settings will typically do more harm than good. As we've seen, users will tend to "tune out" options with too much complexity, with the strong potential for both users and services being dissatisfied with the results down the line.

But at least under the hood of Web services, the complex, multidimensional constellation of detailed settings will need to exist, to meet an increasing list of technical, legal, and policy requirements.

Is there a practical way to provide users with a more useful and accessible means of specifying their privacy preferences in most cases, while shielding them from the increasingly complex array of internal privacy-related settings, especially as these are augmented and change in other ways over time?

An approach that I feel is worth considering involves what I call User Privacy Preference Themes (UPPTs).

The idea is fundamentally straightforward.

Most of us tend to fall into a relatively small set of categories regarding our personal privacy concerns. Some of us are willing to broadly share information, including for example location data -- but only to our friends or other associates. Other persons are open to even broader sharing beyond such circles. And some persons would prefer to share as little data as possible, and want to stay as anonymous as is practicable.

I believe it is possible to create a "mapping" between these and other comparatively generic "personal privacy sensibility sets" and broad "privacy preference themes" -- themes that can themselves be used internally to select many detailed privacy settings, based on the attributes of each individual theme.

In other words, if we know that someone has declared themselves to be a user of the "glad to share info with friends" theme, this knowledge can be employed to reasonably anticipate and control the settings of a large number of individual privacy-related parameters on a site for that user, and to make a reasonable judgment as to how this person would likely want their settings configured for new features that may later be deployed.

The same sort of process would hold true for users selecting other privacy preference themes as well.
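A rough sketch may help make the mechanism clearer. The snippet below -- ordinary Python, with entirely hypothetical theme and setting names -- shows one way a single theme choice could be expanded into defaults for whatever detailed settings a site happens to offer, including settings added after the user made that choice.

```python
# Illustrative sketch of a theme-to-settings mapping (all names hypothetical).

# Each theme is just a policy function: given a detailed setting's metadata,
# it returns a sensible default for users who chose that theme.
THEMES = {
    "share_with_friends": lambda s: "friends" if s["shareable"] else "off",
    "share_broadly":      lambda s: "public"  if s["shareable"] else "on",
    "minimal_sharing":    lambda s: "off",
}

# Detailed, site-specific settings -- including one added after launch.
SETTINGS = {
    "location_sharing":     {"shareable": True},
    "ad_personalization":   {"shareable": False},
    "face_tag_suggestions": {"shareable": True},   # new feature, deployed later
}

def defaults_for(theme_name):
    """Expand a user's single theme choice into per-setting defaults."""
    theme = THEMES[theme_name]
    return {name: theme(meta) for name, meta in SETTINGS.items()}

# A user who picked the "glad to share with friends" theme gets defaults that
# stay in sync with that choice even as new settings appear, while remaining
# free to override any individual value afterward.
user_settings = defaults_for("share_with_friends")
user_settings["face_tag_suggestions"] = "off"      # explicit per-setting override
print(user_settings)
```

The point isn't the particular data structure, of course, but that a single, comprehensible user choice can drive an arbitrarily large and growing constellation of detailed settings underneath.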

Best practices would still necessitate that sites clearly notify users when significant privacy preference options have been extended with new features or otherwise altered, and users would naturally still have access to (and control of) all detailed privacy settings on demand.

But by starting from the baseline of a user's privacy preference theme choice -- their UPPT -- and using that as a guide for future individualized defaults as new privacy-related technologies augment the existing environments, users are likely to be far more satisfied. Their settings associated with these new capabilities will already likely be "in sync" with their historical preferences related to data sharing, behavioral targeting, and the many other aspects of sites that can be important both to users and to the functioning viability of Web services themselves.

Users stand to gain mightily from such an approach. User privacy preference themes could provide a means to help assure that individual privacy-related settings are optimally configured not only to protect data and functions as each specific user expects, but also to enable users' maximal engagement with those aspects of sites that they have chosen to access.

Unlike a complex array of detailed privacy settings that default the same way for everyone, or the "feature obliteration" doomsday approach of Do-Not-Track, individualized UPPTs could provide a framework for a highly customized approach to privacy preferences, capable of dealing with extremely complex preference constellations, without requiring users to manually analyze and manipulate the detailed settings incorporated within these environments, unless they prefer to do so.

Obviously, practical implementation of this concept would likely not be trivial -- but I believe that this approach is a workable one with potentially major benefits for both Web users and services. I have a pile of additional details and thoughts on this that I'd be happy to share, though currently they are not in a suitable form for public posting.

We need to bite the bullet, and admit that while privacy issues are critical to the Web, our traditional approaches to dealing with this area are increasingly frayed and tattered, entangling users in a confusing mess rather than helping them.

Nor is cutting off our nose to spite our face, in the manner of Do-Not-Track, the best way to help users navigate privacy issues without potentially crippling many of the very services that they most wish to use.

Singing the same old songs regarding Web privacy may feel reassuring, but no longer is a practical path. Perhaps some new "themes" will help to get us back into tune with the best interests of Internet users and of the Web at large.

--Lauren--

Blog Update (June 13, 2011): Web "Privacy Themes" Proposal: Reactions and More Info

Posted by Lauren at 12:33 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 09, 2011

Apple Caves on DUI Apps: Free Speech Suffers, but Nobody Is Likely Safer

I can certainly see this one coming. This is one of those postings that guarantees an angry email barrage in short order. But for the moment the servers are still all running, the queues are in pretty good shape, and the hamsters are running full speed on the generator wheels. And these days I'm in even less of a mood to mince words than ever.

You'll recall that Congress has been figuratively beating up on Apple and Google lately, this time regarding smartphone apps that allow users to report or be warned regarding DUI (Driving Under Influence) checkpoints (that is, roadblocks).

When this issue initially came up recently, with a gaggle of U.S. Senators writing RIM, Apple, and Google asking them to ban such applications, RIM servilely complied immediately.

Google and Apple both refused, noting that while they would make efforts to remove illegal apps, they did not view DUI notification apps as illegal. And in fact, they are not illegal. Period.

Today comes word that Apple has apparently caved on this matter, and has changed its notoriously arbitrary App Store Review Guidelines to ban drunk driving/DUI applications, and will review the existing ones, presumably in preparation to retroactively ban them as well.

A chorus of politicians and tech columnists immediately sprang forth today to sing the praises of Apple for this strike against a designated evil -- a strike that comes squarely at the expense of free speech.

For that is what we're really talking about. It is not illegal to report DUI roadblocks. It is not illegal to tell people where such police activity is occurring -- in the U.S., anyway. It is unlikely that any attempt to make such reporting illegal would easily withstand court scrutiny. Nor would a ban on such information be practical even if it was successfully enacted and then approved by the courts.

Attention in this sphere will now inevitably turn toward Google and Android apps. Since Google wisely does not operate a restrictive, approval-based App Store (in contrast to Apple), and since Android users can easily create, share, and "sideload" apps without using the Android app store at all, Google cannot effectively block apps before installation, and even their use of the Android app "kill switch" -- only activated in extreme cases to date -- can be circumvented by users in various ways.

But unlike many observers, I don't condemn this state of affairs regarding Android. Rather, I applaud it. When I buy a device I feel justified in demanding that I be allowed to run any legal application on it that I choose. For all practical purposes, Android provides this crucial capability. I can write my own Android apps, I can download them directly from Web sites. I take responsibility for my own use of my technological property.

This is of course not in keeping with the current "politically correct" ideology, that users of technology must be controlled in ever increasing detail by manufacturers, carriers, ISPs, and ultimately by government. "Users cannot be trusted," says this mantra -- they must be constrained to only use technology in the manners and circumstances that government ultimately deems fit to bless.

While the DUI apps case indeed has elements that invoke aspects of the upcoming battle over PROTECT IP and government-mandated search engine censorship, at this stage demands for the removal of these apps are more in the nefarious "nod and wink" zone.

That is, given that the government cannot order Google, Apple, or others to ban such apps providing legal information, Congress is trying to apply pressure through the "chokepoints" of major Internet firms, via a kind of "friendly persuasion" that would have seemed perhaps familiar to Al Capone (in this case, of course always with the unspoken threat that failure to comply might result in more intense Congressional scrutiny of these firms in other ways).

All too often today, law enforcement is attempting to block us from the legal noting, reporting, or recording of openly visible enforcement activities in public places -- note the sometimes violent reactions to citizens simply attempting to document police activities using camcorders or phones.

Efforts to prevent the public from reporting or accessing information regarding "sobriety checkpoint" roadblocks on public thoroughfares are yet another example of law enforcement and political overreaching.

Two ironies in all this immediately present themselves. First, there is no evidence I know of documenting that the use of smartphone DUI apps increases drinking, drunk driving, or accidents in any manner. Anecdotal evidence suggests that when many persons know that there are DUI checkpoints on their route, they either don't drink or find a designated driver who isn't drinking -- both highly positive outcomes for everyone involved.

But the other irony in all this is that even if smartphone apps providing this information were somehow 100% effectively banned, the availability of such data would be hardly reduced at all. The same data is available on vast numbers of conventional Web sites, via automated email, SMS text messages, and other means. Anyone with a bit of skill can throw together a script to leverage standard GPS data with such sites to provide very much the same capabilities as dedicated apps.
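(To illustrate how low that bar really is, here's a minimal sketch in plain Python -- with made-up coordinates and no real data feed -- of the distance calculation that is essentially the only "technical" piece such a script would need beyond reading the already-public postings.)

```python
# Illustrative sketch only: the "hard part" of such a script is just geometry.
# The coordinates below are fictional; a real script would simply read locations
# from whatever public Web page, email list, or SMS feed already publishes them.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two GPS points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

reported_locations = [(34.0522, -118.2437), (34.1015, -118.3387)]  # fictional
my_lat, my_lon = 34.0700, -118.2600                                 # fictional

for lat, lon in reported_locations:
    d = distance_km(my_lat, my_lon, lat, lon)
    if d < 5:
        print(f"Publicly reported location about {d:.1f} km away")
```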

This really isn't about the scourge of drunk driving, a horror that nobody supports. Rather, we're talking not only about fundamental principles of free speech, but also about the inability of specific restrictions on speech to actually achieve their ostensible goals.

Given the realities of this situation, it is difficult to view the Congressional pressures being asserted as much more in the end than political posturing, even if we assume that the motives of the Senators involved are purely honorable ones.

Drunk driving has enormous costs for society, both in terms of lives and money.

But I would argue that attempts to control free speech, particularly efforts to prevent the dissemination of information regarding the open activities of law enforcement and other government officials in public places, ultimately carry enormously greater potential risks and costs to society, especially when such information restrictions cannot possibly achieve their stated goals.

--Lauren--

Posted by Lauren at 11:19 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 07, 2011

Internet High Tech Sells Out to AT&T/T-Mobile Merger - But Two Majors Are Missing

The New York Times is reporting that a litany of high tech and other firms filed letters with the FCC yesterday, in support of AT&T's proposed assimilation of T-Mobile, in a merger that would quickly lead to what would effectively be a mobile services duopoly in the United States.

The list of merger supporters includes Facebook, Microsoft and their minion Yahoo, Oracle, Research in Motion, and more, including a number of venture capital firms such as Sequoia Partners.

Last March, in AT&T's T-Mobile Merger Ploy: Rewarding the Worst, I explained why the proposed merger was a terrible deal for consumers.

By supporting the merger, these firms have shown their true colors as far as caring about consumers is concerned. Take note of it.

But wait a minute. Something appears amiss within the carefully coordinated confines of the "Bring back Ma Bell" merger love fest.

Standing out by their absence, neither Google nor Apple appears to have yet joined the CYA rush to sycophantically suckle the teat of the nearly omnipotent AT&T that would emerge if the merger is approved.

Google confirmed to me today that they have not taken a formal position on the merger. I was unable to immediately reach appropriate persons at Apple regarding their formal take on this matter.

That Apple and Google have not yet joined the AT&T/T-Mobile merger parade is particularly fascinating when we consider Apple's close iPhone-based relationship with AT&T, and Google's Android-related association with T-Mobile.

Given that Google and Apple will have to work with AT&T in the future, I would not necessarily expect either of these firms to publicly oppose the merger per se.

But it speaks volumes that neither of these key Internet enterprises have taken public positions in favor of the merger at this time, in distinct contrast to Microsoft, Facebook, and the other entities who are enthusiastically supporting such a drastically anti-consumer coupling.

Obviously, situations and positions can change as events evolve. But from where I'm sitting right now, it seems pretty apparent who is on the side of mobile services consumers, versus who is willing to eagerly sell those consumers down the river.

The firms not joining the list of AT&T/T-Mobile merger supporters appear to have taken full measure of the many ways that the merger would hurt ordinary mobile and Internet users.

The companies actively supporting the merger, as far as this issue is concerned anyway, appear to be full of something else entirely.

--Lauren--

Posted by Lauren at 09:58 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 06, 2011

Social Media + Stupid: The Weiner Warning

Greetings. Back in early 2003, I used one of the columns I was writing for Wired to discuss issues associated with pornography involving children on the Internet -- in particular the criminalization of persons who view such materials. The impetus for the essay was the then recent arrests of Peter Townshend (The Who) and Paul Reubens (Pee-wee Herman) on child pornography charges.

I assumed in advance that the column in question (the title of which was chosen by my editor, not by me) would be controversial -- and indeed it was.

Townshend's case was particularly interesting, since he insisted that he was accessing the associated sites as part of autobiographical research involving his own abuse as a child.

In any case, a focus of that column -- not in any way to defend the horror of kiddie porn -- was to explore how "ease of access" on the Internet might cause persons to actually act on impulses that they probably never would have indulged in the brick and mortar world -- not an excuse for behavior, but rather an explanation and warning.

While the situation with Rep. Anthony Weiner apparently doesn't involve children, and has many differences from the cases mentioned above, there is one key similarity. Weiner appears to have allowed himself to combine the ease of "social" contact on the Web, with what can only be called his own stupid, reckless, and self-destructive behavior.

Already, I'm hearing calls for broad controls and monitoring of social media, ostensibly triggered by Weiner's revelations. While likely to be easy fodder for some politicians, such demands must be rejected.

By any reasonable analysis of this situation, and of social media more generally, we must place responsibility squarely at the feet of the individuals who instigate activities such as Weiner's. Associated demands for social monitoring restrictions and eavesdropping would on the other hand do grave harm to privacy and free speech.

Still, as I implied in that old column, it is also important for us all to understand the practical implications in the "virtual world" of the Net, particularly as we often feel that we know people well on the Web -- when in actuality we're only communicating through a relative soda straw, compared with the vastly larger totality of actual human beings.

I'm not a psychologist, but I first noticed the phenomenon of "fantasizing" regarding remote network users via email decades ago -- long before the popularization of the term "social media" -- during the earliest years of ARPANET. Then, as now, the primary interface for communications was text, and (perhaps fortunately for us back then) the concept of sending photographs via the Net was itself largely a fantasy in the early days.

But it seemed clear even then that the Net provided an ideal "growth medium" for compartmenting our lives into "virtual" vs. "real" segments, and that any tendency toward less inhibition, more exhibitionism, rapid anger, and even depression, might be amplified and exacerbated via some Net communications.

With the rise of social media as we know it today, and with major portions of the world's population now engaging in related activities sometimes for many hours daily, it is more crucial than ever that we understand -- and indeed also help our children to understand -- the responsibilities associated with using social media, and how best to avoid making the kinds of mistakes that Weiner, and many others, have made to their detriment.

Attempts to blame such situations on the technology or openness of social media -- or of the Web in general -- are misguided, and in fact are likely to often be accompanied by ulterior motives aimed at restricting free speech and imposing government Internet controls in various guises.

To his credit, Rep. Weiner has explicitly blamed neither Twitter nor the Internet for his current dilemma -- but other parties are already attempting to use his situation to their own advantage, toward their goals of imposing their own wills on the Net and its users.

Just a reminder that when it comes to the Web, our actions have consequences, and ultimately that blaming the Internet when we behave stupidly makes about as much sense as blaming the weatherman for a rainy day.

--Lauren--

Posted by Lauren at 09:21 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 05, 2011

Why Search Matters - and Fighting Internet Censorship with Technology

I've been talking about censorship and free speech quite a bit lately, especially in the context of link criminalization and government mandated censorship of search engine results (e.g. targeting Google and others) as envisioned by PROTECT IP legislation.

Censorship easily rises to the top of information-related civil rights concerns, since it impacts your ability to even realize how much information is being hidden from you in the first place. Remember "double-secret probation" from Animal House? Censorship is something like that, only with far more serious ramifications.

In response to my related comments in Blinded by the Light: The Internet Enemy Within and referenced articles, one of my regular readers took me to task for my assessment of censorship risks associated with PROTECT IP in particular.

His premise is that PROTECT IP is much like prohibitions against "yelling fire in a crowded theater" or actions taken against pornography involving children, and so is justifiable and reasonable legislation.

I'd actually been waiting for someone to bring up these examples. The former of course has been a traditional rationale for speech controls for many decades; the latter has more recently become a convenient "hook" for proposing all manner of restrictions on speech.

It's interesting though to note that both of these cases directly involve situations where human life and health are immediately at stake. If speech restrictions are to be tolerated under any circumstances, these are the sorts of situations that would come immediately to mind.

Yet all too often, we see that these are merely the jumping-off points for a vast array of other restrictions, built "brick by brick" through ever expanding rationales.

So we're now faced with a veritable explosion of cases where governments are attempting to impose censorship, especially on the Internet, as ever more common events are declared to be "censorship worthy" in one way or another.

In the U.S., the economic concerns of giant entertainment conglomerates would be elevated to the level of government mandated link and search engine censorship, by the PROTECT IP legislative thrust.

In Thailand, an American citizen has been arrested for a four-year-old link from his blog to a book critical of Thailand's king. In India, the government is moving to block Web sites that furnish child "sex selection" information. Both Thailand and India are rapidly moving toward expanded Internet censorship regimes, with what most observers would call "political" speech firmly in the crosshairs.

Europe may be even worse in some ways. In Spain, a push for a "right to be forgotten" would dictate the removal of search engine listings seemingly pretty much on demand, an inane concept that I discussed in some detail last March in Deleting History: Why Governments Demand Google Censor the Truth.

The list goes on and on.

Which brings us back to the U.S. and PROTECT IP. It's difficult to see how -- if Congress succeeds in invoking search engine censorship to protect the profits of (for example) the Disney empire -- Congress could then say "no" to censorship for a broad range of other topics that most people would consider to be of more importance than money at the "Mouse House."

After all, Congress has tried before to impose broad Internet classification and censorship regimes on a "Think of the children!" basis -- such as the Child Online Protection Act. With changing court compositions, there's every reason to assume that Congress will keep on trying to impose Internet speech restrictions. PROTECT IP is just the beginning.

The focus on government censorship of search engines is critical, because search engines have become the key tool for our access to -- and understanding of -- the ever-growing mass of information on the Internet.

If we can't find relevant information, if we don't even know that it exists, it might as well not exist in the first place as far as most of us would be concerned.

Back in January, author Malcolm Gladwell suggested that we really didn't need continuing improvements in search technology, seeming to imply that our current ability to access information is "good enough" for all purposes.

Nothing could be further from the truth. Current search technology is indeed very good, but the vastly expanding volume of information on the Net, in some cases coming from sources not even dreamt of ten or twenty years ago, will always be in need of new and improved ways to look at Internet data, to process and rank it, and to make it available to searchers in the most useful possible forms. The high quality of today's search tech isn't a sign that search has finished evolving; rather, it is a harbinger of important, additional ways of looking at knowledge, methods that in many cases have yet to be invented.

So search engines are central and crucial -- and that's why governments around the world are now racing to find ways to dictate how search engines operate, with a likely result being an expanding "black hole" of topics that will ultimately be declared verboten.

Search engines such as Google must obey the law. They can fight in court against government or private demands that they view as overly broad or otherwise inappropriate, but ultimately they must operate within national government rulings if they're going to stay in any given country.

A practical alternative of course is to withdraw certain services from the countries in question -- as occurred (to Google's credit) with Google in China. But ultimately, countries that force such a state of affairs usually end up damaging their own citizens, under the false guise of protecting them.

There may be other approaches that search engines can employ that will mollify some government concerns short of accepting censorship edicts.

But enough writing is on the wall already that we should be actively planning for means to help assure that overreaching government censorship plans -- aimed in particular at search engines -- do not go unopposed.

As I've suggested previously, I believe we should be actively considering how best to leverage the inherently distributed, international nature of the Internet to help assure that crucial knowledge cannot be effectively buried by any national government's link censorship edicts to major search engines or other major sites.
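
To make that notion slightly more concrete, here is a minimal, purely illustrative sketch (in Python) of one small piece of such an approach: a content-addressed record store, where each link record is keyed by a hash of its own contents, so that any cooperating peer holding a copy can verify and serve it independently of any centralized, censorable index. The "PeerStore" class and the record fields shown are hypothetical, invented only for this illustration; they do not describe any existing system.

    # Illustrative sketch only: a content-addressed "knowledge backup" record store.
    # Records are keyed by the SHA-256 hash of their canonical JSON form, so any
    # peer holding a copy can verify its integrity and serve it even if a central
    # search index has been ordered to delist the link. All names here are
    # hypothetical and for demonstration purposes only.

    import hashlib
    import json


    def record_key(record):
        """Derive a stable key: SHA-256 of the record's canonical JSON form."""
        canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


    class PeerStore:
        """One peer's local slice of a distributed link repository (in memory)."""

        def __init__(self):
            self._records = {}

        def publish(self, record):
            key = record_key(record)
            self._records[key] = record
            return key

        def fetch(self, key):
            record = self._records.get(key)
            # Integrity check: the key must match the record's own hash.
            if record is not None and record_key(record) == key:
                return record
            return None


    if __name__ == "__main__":
        peer_a = PeerStore()
        peer_b = PeerStore()
        record = {"url": "http://example.org/delisted-report",
                  "title": "Hypothetical report removed from a national search index"}
        key = peer_a.publish(record)
        peer_b.publish(record)  # replicated to a peer in another jurisdiction
        print(key)
        print(peer_b.fetch(key))

The point of keying records by their own hash is that replication across jurisdictions becomes trivial and tamper-evident; no single government order directed at any one operator can make a record unfindable by those who hold its key.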

Would it be possible for such a distributed "knowledge backup" infrastructure to be abused? Could information location data that is genuinely harmful by most standards find its way into such distributed repositories?

Yes. But we know that crooks and other evildoers of all stripes will always find ways to access "forbidden" information in one way or another.

PROTECT IP and its international ilk ultimately do not threaten the real bad guys, but rather set the stage for much broader restrictions on the knowledge accessible to law-abiding citizens.

It is access to information by honest persons -- who make up the vast majority of the world's population -- that is most at risk of being targeted by censorship in the long run -- collaterally at first, but later much more directly.

Government ordered censorship of links and search engines is the "hydrogen bomb" of government control over the Internet, and so of information and knowledge in general.

But unlike real nuclear weapons, the deployment of censorship as anti-knowledge armament will occur little by little -- visible at first only as relatively small puffs of smoke, which will later combine into enormous and encompassing mushroom clouds of control.

"Censor to protect the children."

"Censor to protect the profits."

"Censor to avoid criticism."

"Censor to bury whistleblowers."

"Censor to preserve the status quo."

"Censor to honor our glorious leaders."

"Censor to protect our Fatherland."

"Censor to honor Caesar."

"You can cage the singer, but not the song."
       -- Harry Belafonte (1988)

--Lauren--

Posted by Lauren at 06:51 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


June 01, 2011

Weiner, Whiners, and Social: Public Means Public!

I had hoped that it would not be necessary to spend time discussing the Twitter controversy over a photo that may (or may not) be of a Congressman's underwear-clad crotch.

However, people keep asking me about this, and about another story in the news right now, regarding a PhD student at the University of Amsterdam, whose nose is apparently all out of joint over his ability to create a database of (we're told) 35 million public Google Profiles.

It's difficult for me to imagine a topic in which I'd have less inherent interest than Rep. Weiner's photography/Twitter habits in any likely context. And the collecting of already public Google Profiles is equally yawn inducing from my standpoint.

Yet there is an underlying theme. Both of these stories have been sucking considerable oxygen out of the current news cycles, and they also remind me of the usually rather nutty and misguided protests regarding Google Street View, about which I've written many times (I won't clog up the text here with associated links today; you know how to find them).

Fundamentally, there is a complex dynamic between "privacy rights" as most broadly defined, and other key elements of civil rights, including free speech.

I am of course a strong supporter of communications free from government or other eavesdropping. I believe that if persons wish to keep the private details of their lives away from public view, in most cases that's completely reasonable.

Intellectually though, and at the gut level as well, I find it quite distressing to see attempts to use unrealistic and often purposely distorted "privacy concerns" as excuses to unreasonably muzzle free speech and related activities.

We see people bemoaning the fact that it's possible to analyze public Twitter feeds retrospectively. The ability to collect public Google profiles -- created voluntarily by Google users and with all information contained therein under their personal control -- is suddenly a global story. The taking of photos from public thoroughfares, of the same imagery that anyone can see from those same vantage points, triggers bizarre protests of indignation.

And I might add to this list, the collection of geolocation data from open, unencrypted Wi-Fi access points, that is irrationally treated in some quarters as a civil or even criminal offense.

All of these are examples of ersatz -- that is, essentially false -- privacy issues. Some persons raise them out of genuine though in most cases misguided concerns; at other times they are invoked disingenuously for political or commercial advantage.

Again, this is not to cast aspersions of any kind on genuine privacy matters, and the protection of genuinely private data, categories of great interest to me for decades. Nor am I arguing that there shouldn't be intense deliberations over if and how various data should be made public en masse in the first place, especially with a proactive view toward whether such data, once easily available, might be abused.

But let's get real. Sounding alarms over people or firms gathering and using data that is already out there -- easily accessible and easily viewable -- is nonsense.

In fact, creating artificial rules that try to restrict the gathering or use of such data after the fact of its large scale availability can be extremely counterproductive and do serious damage. Such concocted, mock "restrictions" can easily give people a false sense of confidence that their already public data is somehow going to be "protected" by such rules, shifting responsibility away from making reasoned decisions about what data they really wish to make public in the first place.

With so many critical, genuine privacy issues in the forefront today, on both the domestic and international stages, we shouldn't be wasting any of our time or energies on attempts to create sham privacy controversies relating to already public information.

Otherwise, we may find ourselves inadvertently allowing the enemies of freedom to distort and mutate genuine concerns about privacy into twisted and perverted doppelgangers to be used as powerful weapons against free speech itself.

It would be ironic indeed if we permit as important an area as privacy policy to be hijacked to the detriment of fundamental civil rights.

But it could happen -- something to discuss with your family, friends, and colleagues perhaps, that is ... while you still can.

--Lauren--

Posted by Lauren at 09:14 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein