May 30, 2014

EU's "Right to Have The Streisand Effect" Goes Live

Since I've at various times over the years expressed both my concerns about and disgust with the "right to be forgotten" concept -- e.g., "The 'Right to Be Forgotten': A Threat We Dare Not Forget" -- I'm not going to rehash that discussion here and now. But a look at the ironic situation the EU censorship bureaucrats have created for themselves today, via the recent EU court ruling on this matter, is both amusing and instructive.

Google now has an "application" form up for EU residents who want to apply for search results removal. Using this form definitely does not guarantee that results will be removed, particularly if there is any public interest in those results.

But here's the best part. Results will only be removed for the EU country localized versions of Google. They will *not* (naturally, since thankfully the EU doesn't rule the world!) be removed from the main google.com site itself.

Additionally, when results are removed from EU versions, the associated results pages will reportedly contain a notice to EU users that results were deleted (similar to the way copyright takedowns are handled now), and "Chilling Effects"-type reports will also reportedly be made.

The implications of this gladden my "right to be forgotten" hating heart. If you're an EU user searching for Joe Blow, and the EU has forced removal of a search result related to him on, say, google.fr, the warning notice informing you that results have been removed for that search gives you an immediate cue that you might want to head over to google.com to see what the EU censorship bureaucrats deemed unfit for your eyes. In essence, it's a built-in Streisand Effect, courtesy of the EU itself! Before this, you might not even have noticed the result in question among the other results for that search.

Not only that, but other search queries that happen to return the pages blocked for EU searches on that name will still apparently appear, even in the EU.

And of course, curious EU searchers who want to escape the local EU censorship regimes have various ways to reach the main google.com, as do other users in censoring countries around the world: google.com homepage access links, use of google.com/ncr (No Country Redirect), or in more extreme cases proxies and VPNs.

Censorship in the Internet age is a hopeless endeavor, as the EU is about to discover.

Get your popcorn ready.

Be seeing you.

--Lauren--
(I'm a consultant to Google. I'm speaking for myself, not for them.)

Posted by Lauren at 10:12 AM | Permalink


February 22, 2014

No, I Don't Trust You! -- One of the Most Alarming Internet Proposals I've Ever Seen

If you care about Internet security, especially what we call "end-to-end" security free from easy snooping by ISPs, carriers, or other intermediaries, heads up! You'll want to pay attention to this.

You'd think that with so many concerns these days about whether the likes of AT&T, Verizon, and other telecom companies can be trusted not to turn our data over to third parties whom we haven't authorized, a plan to formalize a mechanism for ISP and other "man-in-the-middle" snooping would be laughed off the Net.

But apparently the authors of IETF (Internet Engineering Task Force) Internet-Draft "Explicit Trusted Proxy in HTTP/2.0" (14 Feb 2014) haven't gotten the message.

What they propose for the new HTTP/2.0 protocol is nothing short of officially sanctioned snooping.

Of course, they don't phrase it exactly that way.

You see, one of the "problems" with SSL/TLS connections (e.g. https:) -- from the standpoint of the dominant carriers anyway -- is that the connections are, well, fairly secure from snooping in transit (assuming your implementation is correct ... right?)

But some carriers would really like to be able to see that data in the clear -- unencrypted. This would allow them to do fancy caching (essentially, saving copies of data at intermediate points) and introduce other "efficiencies" that they can't do when your data is encrypted from your client to the desired servers (or from servers to client).

When data is unencrypted, "proxy servers" are a routine mechanism for caching and passing on such data. But conventional proxy servers won't work with data that has been encrypted end-to-end, say with SSL.

So this dandy proposal offers a dandy solution: "Trusted proxies" -- or, to be more straightforward in the terminology, "man-in-the-middle attack" proxies. Oh what fun.

The technical details get very complicated very quickly, but what it all amounts to is simple enough. The proposal expects Internet users to provide "informed consent" that they "trust" intermediate sites (e.g. Verizon, AT&T, etc.) to decode their encrypted data, process it in some manner for "presumably" innocent purposes, re-encrypt it, then pass the re-encrypted data along to its original destination.
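The two-hop structure described above can be modeled in a few lines. This is purely a toy sketch of the concept -- the XOR keystream "cipher" below is a stand-in for TLS, not real cryptography, and all the key names and messages are invented for illustration. It shows the essential point: with separate client-proxy and proxy-server channels, the proxy necessarily holds your plaintext.

```python
# Toy model of the "trusted proxy" scheme: two separately encrypted hops
# instead of one end-to-end channel. NOT real crypto -- illustration only.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

msg = b"card number 4111 1111 1111 1111"

# End-to-end: one key shared only by client and server.
e2e_key = b"client-server-secret"
wire = keystream_xor(e2e_key, msg)
# A proxy relaying `wire` sees only ciphertext.

# "Trusted proxy": two keys -- and the proxy holds both of them.
hop1_key = b"client-proxy-secret"
hop2_key = b"proxy-server-secret"
to_proxy = keystream_xor(hop1_key, msg)
seen_by_proxy = keystream_xor(hop1_key, to_proxy)  # proxy decrypts...
to_server = keystream_xor(hop2_key, seen_by_proxy) # ...then re-encrypts
assert seen_by_proxy == msg  # the proxy read the plaintext in transit
```

The caching and "efficiencies" happen at exactly the point where `seen_by_proxy` exists in the clear -- which is the whole problem.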

Chomping at the bit to sign up for this baby? No? Good for you!

Ironically, in the early days of cell phone data, when full capability mobile browsers weren't yet available, it was common practice to "proxy" so-called "secure" connections in this manner. A great deal of effort went into closing this security hole by enabling true end-to-end mobile crypto.

Now it appears to be full steam ahead, back to something even worse than the bad old days!

Of course, the authors of this proposal are not oblivious to the fact that there might be a bit of resistance to this "Trust us" concept. So, for example, the proposal includes the assumption of mechanisms for users to opt-in or opt-out of these "trusted proxy" schemes.

But it's easy to be extremely dubious about what this would mean in the real world. Can we really be assured that a carrier going through all the trouble of setting up these proxies would always be willing to serve users who refuse to agree to the proxies being used, and allow those users to completely bypass the proxies? Count me as skeptical.

And the assumption that users can even be expected to make truly informed decisions about this seems highly problematic from the get-go. We might be forgiven for suspecting that the carriers are banking on the vast majority of users simply accepting the "Trust us -- we're your friendly man-in-the-middle" default, and not even thinking about the reality that their data is being decrypted in transit by third parties.

In fact, the fallacies deeply entrenched in this proposal are encapsulated within a paragraph tucked in near the draft's end:

"Users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc. Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information."

Who are they kidding? It's been a long enough slog just to get to the point where significant numbers of users check for basic SSL status before conducting sensitive transactions. Now they're supposed to become security/certificate experts as well?

Insanity.

I'm sorry gang, no matter how much lipstick you smear on this particular pig -- it's still a pig.

The concept of "trusted proxies" as proposed is inherently untrustworthy, especially in this post-Snowden era.

And that's a fact that you really can trust.

--Lauren--
I'm a consultant to Google. My postings are speaking only for myself, not for them.

- - -

Addendum (24 February 2014): Since the posting of the text above, I've seen some commentary (in at least one case seemingly "angry" commentary!) suggesting that I was claiming the ability of ISPs to "crack" the security of existing SSL connections for the "Trusted Proxies" under discussion. That was not my assertion.

I didn't try to get into technical details, but obviously we're assuming that your typical ISP doesn't have the will or ability to interfere in such a manner with properly implemented traditional SSL. That's still a significant task even for the powerful intelligence agencies around the world (we believe at the moment, anyway).

But what the proposal does push is the concept of a kind of half-baked "fake" security that would be to the benefit of dominant ISPs and carriers but not to most users -- and there's nothing more dangerous in this context than thinking you're end-to-end secure when you're really not.

In essence it's a kind of sucker bait. Average users could easily believe they were "kinda sorta" doing traditional SSL but they really wouldn't be, 'cause the ISP would have access to their unencrypted data in the clear. And as the proposal itself suggests, it would take significant knowledge for users to understand the ramifications of this -- and most users won't have that knowledge.

It's a confusing and confounding concept -- and an unwise proposal -- that would be nothing but trouble for the Internet community and should be rejected.

- - -

Posted by Lauren at 08:24 PM | Permalink


January 16, 2014

Warning: Network Solutions' Moronic Alert Email That Masquerades as a Phishing Attack

Normally, the less said about domain registrar Network Solutions (NSI), the better. But events this morning seem worthy of particular mention.

Within my inbox were two messages purporting to be from Network Solutions, one after the other. They were identical except for coded differences in one of the embedded URLs. They demanded that I simply "click here" to confirm my WHOIS information due to "New Regulations" -- they warned that if I didn't comply, I'd still own my domains, but my websites would stop working.

These messages had a variety of the hallmarks of malware attack phishing. They contained an ominous warning. They demanded a click. They contained no references to my actual NSI accounts or domains. They had odd capitalization. And they appeared to have been worded by an underachieving sixth grader.
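Those hallmarks can be framed as a simple scoring heuristic. What follows is a toy sketch, not a real anti-phishing filter; the signal names, patterns, and thresholds are my own invented illustrations of the red flags listed above.

```python
# Toy phishing heuristic: count red flags of the sort described above.
# Patterns and scoring are illustrative assumptions, not a real filter.
import re

RED_FLAGS = {
    "urgent_warning": re.compile(r"(?i)\b(immediately|stop working|suspend)"),
    "click_demand":   re.compile(r"(?i)\bclick here\b"),
    "odd_caps":       re.compile(r"\b(?:[A-Z][a-z]+ )+Regulations\b"),
}

def phishing_score(body: str, mentions_account: bool) -> int:
    """Count matching red flags; generic messages (no account/domain
    references) pick up one extra point."""
    score = sum(1 for rx in RED_FLAGS.values() if rx.search(body))
    if not mentions_account:
        score += 1
    return score

sample = "Due to New Regulations you must click here or your sites stop working."
print(phishing_score(sample, mentions_account=False))  # -> 4
```

A message that trips every one of these signals -- as NSI's did -- deserves to be treated as hostile until proven otherwise.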

Normally I would have simply deleted these apparently bogus messages without much thought. But I didn't this time for one reason -- just a few days ago, I had undergone the tortuous process to unlock two of my last domains still with NSI, in preparation for moving them to a sane registrar. The timing was suspicious.

So I investigated these messages in more depth. And remarkably, I determined that they were seemingly legit.

A quick Google Search revealed extremely scarce discussion of key strings from the emails. That can be interpreted as either good news or bad news, depending on your point of view. But this did lead me to an apparent NSI Facebook page where someone was currently asking about this, and a curt reply from NSI saying that the alerts were real.

The key reply URL in the emails (at least at first glance) pointed to:

whoisaccuracy-portal.networksolutions.com

followed by a long coded string that varied with each email. Typing this in manually led me to a register.com page that simply complained of invalid input (keep in mind that NSI, register.com, and rcom.com are the same domain entities). Alexa also seemed to suggest that the URL was legitimate, though receiving a miniscule percentage of NSI-related hits.

Inspection of message headers, particularly the key top MTA ingress header, showed that the message did indeed gateway to my servers from register.com.
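That header check can be sketched with Python's standard `email` module: parse the topmost "Received:" header -- the one stamped by your own ingress MTA, and so the hardest for a sender to forge -- and see which host handed the message over. The sample message below is fabricated for illustration; real header formats vary by MTA.

```python
# Sketch: find which host delivered a message to your own mail server.
# The raw message here is a fabricated example, not NSI's actual email.
from email import message_from_string

raw = """\
Received: from outbound.register.com (outbound.register.com [10.0.0.1])
\tby mail.example.net with ESMTP; Thu, 16 Jan 2014 09:55:01 -0800
From: notices@networksolutions.com
Subject: WHOIS Verification
To: lauren@example.net

Please verify your WHOIS information.
"""

msg = message_from_string(raw)
# get_all() returns Received headers top-down; the first was added last,
# by the server closest to you.
top_received = msg.get_all("Received")[0]
handed_off_by = top_received.split()[1]  # token after "from"
print(handed_off_by)  # -> outbound.register.com
```

Lower "Received:" headers are trivially forgeable by the sender; only the hop your own server recorded is worth much as evidence.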

Given all this, I decided to click the links from a reasonably isolated system. Each time, the register.com page simply noted that my email address had been verified.

It is my supposition at this point that these two emails were probably part of a WHOIS accuracy statistical sampling survey or something similar, likely triggered by my actions to move two domains away from NSI.

And it is my considered opinion that the implementation of this process qualifies as idiotic and borderline criminal in terms of gross incompetency.

But then again, we're talking about Network Solutions.

So while we've now been warned, we shouldn't be at all surprised.

- - -

UPDATE: Within a few minutes of my sending a tweet with a link to this blog posting, I received this tweet back from NSI:

"Thx for your fdbk, Lauren! The email format has changed, but requirements are still the same."

-- and referencing a 2010 NSI blog posting about ICANN requirements. I've had domains since 1986, and I've never received a message like these before. I find it utterly bizarre that apparently after at least three years NSI is now (still?) using such an inexcusably inept and dangerous format for these notifications! C'mon guys, get with the program!

- - -
--Lauren--
Disclaimer: I'm a consultant to Google. My postings are speaking only for myself, not for them.

Posted by Lauren at 10:09 AM | Permalink


December 29, 2013

Unintended Consequences: How NSA Revelations May Lead to Even More Surveillance

It’s oh so traditional to make end of year predictions, and never let it be said that I don’t have at least some respect for some traditions, at least some of the time. And if there’s any topic in the spotlight for predictions at this juncture, it’s gotta be where the continuing bouncing bounty of leaked NSA documents is leading us.

This is a controversial topic, to be sure. When I recently mentioned my plans for this essay to a prominent Internet activist who has been quite vocal about these issues, they urged me not to make these predictions at all -- suggesting that they wouldn’t be helpful.

But I’m very much a member of the "actions have consequences" school of analysis, and I strongly feel that we need to be looking beyond the headlines, tweets, and clicks, to what the likely real world results from this maelstrom might actually be.

Before we gaze into the somewhat cloudy crystal ball or stir the pungent tea leaves, a few preliminary stipulations seem in order.

First, this is a discussion of what I feel are strong probabilities of what is likely to happen -- not that they are certain to occur, of course. And the fact of these predictions doesn’t mean that you -- or I -- are going to be happy about these outcomes if they should actually occur. I know I won’t take any joy from them at all.

Of course it’s impossible to proceed without at least mentioning whistleblower/leaker (pick one or both) Edward Snowden, though I agree with those who note that this story of global surveillance shouldn’t be about him. Personally, I see no reason to believe that he had anything but good intentions by his own reckoning, though his modus operandi, combined with a significant degree of likely naivete, has led both him and the rest of us off in directions that he perhaps did not fully anticipate. Time will tell.

Ironically for longtime observers of NSA and other intelligence agencies, and those of us who warned early about the abuses being ensconced in the PATRIOT and Homeland Security Acts -- and were accused of being unpatriotic in return -- scarcely anything in the "revelations" to date is a real surprise at all. Nor are reports of intelligence agencies weakening encryption systems anything new -- concerns about NSA influence over the Data Encryption Standard (DES) reach back about four decades.

Perhaps the biggest genuine surprise has been NSA’s shoddy security practices. But we can be sure that NSA and other agencies around the world are hard at work to try to make sure there won’t be any more Snowdens. (Sidenote: An interesting question is whether or not there has already been the equivalent of a Snowden within repressive, censoring, and brutal domestic intelligence regimes such as those operated by Russia and China. One suspects that if such a person were discovered in those countries, they’d simply be marched out, summarily shot through the head, and we’d never hear about them at all -- conveniently avoiding bad publicity of the sort now drowning NSA.)

Nor will I here address in detail the rising "witch hunt" atmosphere accusing both firms and individuals of complicity in NSA operations -- and demanding a range of immediate penalties -- while simultaneously refusing to accept the proposition that the accused (guilty or innocent) deserve due process and a chance to defend themselves -- whether or not such opportunities are legally mandated in any given case. "Guilt by association" and demanding "proof of negatives" are the practices of the dark side, not of enlightened critics of surveillance abuses.

Finally, there’s the elephant in the room. Everything we’re discussing, the millions of words and heartfelt arguments about surveillance and civil liberties, are likely to be entirely academic in the event of a significant new terrorist attack on U.S. soil. Even a "small" nuke or dirty bomb in a city center, even if relatively few people were killed and little significant damage done, would almost certainly create a headlong rush by politicians to flush our remaining civil liberties down the toilet so fast that we’d (to borrow a recurring sci-fi meme) soon be standing in line to be fitted with remote controlled, steel explosive pain collars.

- - -

When we look at the likely results from the controversies surrounding NSA and other intelligence agencies (beyond the economic benefits to the media sites who have been doling out various associated documents bit by bit for highest drama and maximal clicks), we can immediately divide the analysis into the two categories of foreign and domestic intelligence.

The analysis for the former -- foreign intelligence -- is remarkably simple. For all the handwringing and political dissembling, don’t expect any significant changes in the foreign intelligence realm anywhere in the world as a result of these controversies.

The reason is clear. Foreign surveillance ops -- conducted by essentially every country with the means and opportunity to do so -- are pervasive, and despite Snowden, still largely hidden from view. Since there are no effective international laws addressing this area (nor is it clear how there ever could be for secret programs!), there is simply no mechanism or path for significant reforms, whether visible or invisible, real or faked, truth or lies.

Foreign intelligence gathering reaches back to the dawn of civilization, has been conducted globally everywhere, and long predates technologies like the Internet, telephone, and telegraph. The ancient Egyptians, Romans, and Greeks were masters of the art. No doubt it was well developed long before then, as clusters of early humans were concerned about what enemy (and ostensibly friendly) other clusters were up to.

Even more to the point, no countries will be amenable to unilaterally withdrawing in this sphere -- the perceived risks (both real and political) are simply too great. And it’s almost impossible to postulate some sort of global multilateral agreement on reducing surveillance that could actually be proven and verified, pretty much by definition when it comes to secret programs.

What this ends up meaning is that in an international context especially, you really do want to encrypt your data links with the best encryption you can obtain or develop, just on general principles if nothing else. The goal here is to limit the scope of opportunistic, mass surveillance, not highly targeted surveillance. In practice, there are almost always ways to surveil specific targets, even if it involves a "black bag" job to install goodies on a target’s computer. Communications endpoints are especially vulnerable. Nor would it be prudent even to try to stop all targeted surveillance. The sad fact of the world today is that there are genuinely evil people who specifically and deliberately want to kill civilians on a mass scale, and targeted surveillance can (and does) play an important role in stopping them.
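On the "best encryption you can obtain" point: for most applications that means using your platform's TLS stack with strict settings rather than rolling anything yourself. Here is a minimal sketch using Python's standard library -- nothing below touches the network; it only builds and verifies a strict client-side policy.

```python
# Sketch: a strict client-side TLS policy via Python's stdlib ssl module.
# No network activity here -- this only configures and inspects a context.
import ssl

ctx = ssl.create_default_context()            # CA verification on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Defaults worth confirming rather than assuming:
assert ctx.verify_mode == ssl.CERT_REQUIRED   # reject unverifiable certificates
assert ctx.check_hostname is True             # reject mismatched hostnames
```

A context like this would then be passed to, e.g., `ctx.wrap_socket(...)` when actually connecting; the point is that certificate and hostname verification stay on, so opportunistic interception at least can't masquerade silently as the real endpoint.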

But pervasive encryption can make mass surveillance -- which will virtually always mostly involve the communications of innocent parties -- so time consuming and expensive as to significantly limit its utility and practicability, and it’s mass surveillance where the potential for abuse should most concern us.

- - -

It’s in the scope of domestic intelligence that we can see the most likelihood of change. Unfortunately, much smart money is now going on the bet that in the long run the result of all these revelations will actually be more domestic surveillance (under various changing names and labels), not less!

How could this be? How could this happen?

There are various clues from around the world.

For example, just weeks ago, and shortly after a high level French ex-intelligence official was quoted as saying essentially that "we don’t resent NSA, we simply envy them!" France passed legislation legalizing a vast range of repressive domestic surveillance practices.

News stories immediately proclaimed this to be an enormous expansion of French spying. But observers in the know noted that in reality this kind of surveillance had been going on by the French government for a very long time -- the new legislation simply made it explicitly legal.

And therein is the key. Counterintuitively perhaps, once these programs are made visible they become vastly easier to expand under one justification or another, because you no longer have to worry so much about the very existence of the programs being exposed.

Here in the U.S., it’s the NSA telephone "metadata" program that has received the most attention in the domestic context. And there’s yet another irony here -- this is the very same data that telephone companies have traditionally collected of their own volition since the dawn of itemized call billing. And while retention periods have varied widely (more on that in a bit), that data has long been considered to be the property of the telcos open for their commercial exploitation in various ways (at least until relatively recently, in some cases even available to third parties for marketing purposes).

The NSA metadata program has now accumulated conflicting court decisions, declaring it both legal and illegal, both an abomination and absolutely crucial. This strongly suggests that the Supreme Court will need to take on this issue.

But the landscape of the program is likely to change drastically before any such decision, and those persons placing their bets on the Supremes to strike down the program might be in for a disappointment. The court traditionally shows great deference to the executive branch on national security matters. Nor is the court likely to be enthusiastic at the prospect of being lambasted if they kill the program and then a subsequent terrorist attack is (rightly or wrongly) blamed on the absence of the program itself.

However, the justices stand a pretty good chance of not even having to deal with the program in its current form, because something actually worse, and even easier for them to justify, appears to be rolling into view as the tea leaves align.

The NSA metadata program has become the proverbial hot potato. And like a hot potato, it’s unlikely to simply vanish. Rather, somebody is going to end up holding the smoldering spud.

Even before the recent NSA Commission report made its recommendations, it seemed clear that administration sentiment had shifted toward making this metadata the responsibility of the telephone and cable companies -- AT&T, Verizon, Comcast, Charter, Time Warner Cable and so on. The commission in fact also specifically recommended this -- or the use of some other "third party" organization for the purpose.

Notably, none of the major stakeholders seem to be seriously talking about no longer collecting the data at all.

This actually should not be surprising. As mentioned above, this is exactly the sort of data that has long been collected commercially anyway. And a key justification for the NSA program -- echoed by that very recent court decision -- is that (supposedly) we don’t have an expectation of privacy for our call metadata being held in such commercial third party contexts.

So, the handwriting appears increasingly clear. Pressure will rise to move the responsibility for holding this data corpus from NSA per se, back to the carriers or perhaps some ersatz independent org, but the data will still be collected. And despite calls for more limited access by NSA and other agencies, one can safely assume that whatever access they say they really, truly need for national security, they’re going to get -- one way or another. There’s simply no obvious way that there will be a real return to any actual, meaningful, truly individualized search warrant requirement (no matter how any changes are ostensibly framed to the public).

It’s this focus on "privatizing" this kind of government mandated data collection that is of especial concern.

Because while the data retention policies of Big Telecom vary widely today both by company and across a range of services (telephone and text message metadata, text message content, and so on), we can bet our bottom dollars that any move toward privatization will come complete with mandated retention periods that in many cases will exceed the time that the data is retained today.

Even more importantly, these telecom companies will almost certainly be prohibited from deciding to hold the data for shorter periods, but likely will be permitted to hold it longer if they choose, still available pretty much on demand to the government.

The truth is that this sort of government mandated telecom data retention regime has long been the wet dream of government agencies in the U.S. and around the world -- a major push in this direction has been taking place in the EU for quite some time (despite the dissembling by Europe’s leaders regarding surveillance -- the hypocrisy is palpable).

It is also not surprising that the thought of Big Telecom having control over even more of our data sends a cold chill down many observers’ spines. You’ll recall these are the same firms arguing that they have a first amendment right to exploit, control, filter, and limit Internet data as they see fit (and may shortly have this anti-net-neutrality view confirmed by an upcoming court decision).

And unlike government agencies, which at least in theory are subject to significant regulation, Big Telecom has so far been pretty successful in arguing (in the face of a weak FCC) that they are the lords and masters of Internet access, beyond the reach of most meaningful regulations.

I don’t know about you, but personally, I’ve never had any negative dealings with NSA. But I’ve been screwed over by AT&T and Verizon numerous times, as have millions of other customers and vast numbers of municipalities who have been subject to these firms’ manipulations and outright lies. To put it bluntly, and as painful as this is to say, many observers trust AT&T and Verizon far less even than NSA, and consider Big Telecom being the custodian of our data as an even more nightmarish outcome than the data being under government control, at least potentially more subject to oversight.

Of course, the best of all worlds would be not holding onto telco metadata in the first place. But if you really think that’s going to happen, I’d like to talk to you about the potential purchase of a New York City bridge spanning the East River.

So please excuse me if I can’t work up any enthusiasm for those firms or some "new third party" simply providing a new bucket into which the metadata will pour in droves.

But it gets worse.

Once these visible government mandated data retention programs are in place, the urge to expand them will be nearly irresistible.

Already, a prominent member of the NSA Commission has publicly suggested that such retention should expand to include email -- another item long on the various agencies’ wish lists around the world (again including in the EU).

And if Big Telecom goes along (whether enthusiastically or not, voluntarily or not), pressure for expanding government-ordered data retention mandates into other sectors and players also seems very likely in the long run.

- - -

This then may be the ultimate irony in this surveillance saga. Despite the current flood of protests, recriminations, and embarrassments -- and even a bit of legal jeopardy -- intelligence services around the world (including especially NSA) may come to find that Edward Snowden’s actions, by pushing into the sunlight the programs whose very existence had long been dim, dark, or denied, turn out over time to be the greatest boost to domestic surveillance since the invention of the transistor.

By creating pressures for a publicly acknowledged, commercially operated, "privatized" but government mandated data collection and retention regime, the ease with which new categories of long-sought data could be added to this realm -- especially in the wake of a terrorist attack that could be used as an ostensible justification -- seems significant to say the least.

Without having to worry so much about surreptitious programs being discovered, the government can concentrate on making its public case for the mandated retention of ever more forms of data -- which is already typically being collected in the course of business -- while vastly reducing or eliminating firms’ flexibility to delete and destroy such data on a more rapid and privacy-friendly schedule.

This would be a true privacy tragedy.

As I noted at the start, this outcome is not necessarily already burned into the timeline. But listen closely and read between the lines of statements by the NSA Commission, politicians, and the surveillance spooks themselves -- the foundations for this outcome are already being laid.

At least from the standpoint of the global surveillance community, being able to claim privacy-friendly reforms while actually expanding surveillance in the open under other labels would be a holy grail of 21st century spying.

The way matters appear to stand right now, it would likely be extremely unwise to discount the probabilities of this actually occurring in some form.

All the best to you and yours for 2014!

Be seeing you.

--Lauren--
Disclaimer: I'm a consultant to Google. My postings are speaking only for myself, not for them.

Posted by Lauren at 02:06 PM | Permalink


December 14, 2013

Twitter’s Rapid Reversal -- and the Rising Dilemma of “Public” vs. “Publicized”

By now you are probably well aware of what must have been one of the fastest major policy reversals in the history of social networking: a few evenings ago, Twitter changed their definition of user blocking, and, in the face of an enormous outcry that became an international news story within a matter of hours, announced that they were for now reverting to their original methodology.

There are a couple of rather obvious lessons here, and at least one not so obvious lesson that is likely at least as important.

First, at Internet speeds, it's possible to lose years' worth of user love in a heartbeat, if you're seen as suddenly changing the rules in a manner that your customers (users are customers, right?) broadly feel to be antithetical to their interests.

Secondly, when you screw up, don't prolong the agony -- get in front of the issue as fast as possible. When a mea culpa is in order, get the sword out, make your proclamation, and correct the situation as quickly as you can. Twitter's rapid response to a sudden crisis (albeit of their own making) was indeed both wise and prudent.

But this leaves us with a gaping question -- how did Twitter so grossly miscalculate the likely reactions to their policy change, and so massively underestimate their impact?

I suspect that part of the answer involves understanding and appreciating the issues of "public" vs. "publicized" (allow me to coin the term "PvP" for short) in social networks -- a category of concerns only now really coming into focus and discourse.

"Public is public" -- you've likely seen me say this many times. It is foolhardy to assume that a public posting will be seen only in the context in which it was originally made, or to pretend that a public statement can somehow be effectively erased after the fact. The so-called "right to be forgotten" -- that suggests trying to censor information that has already been extant on the Web, either from websites, search engines, or both, is entirely impractical and potentially vastly dangerous to fundamental free speech rights and more. Such anti-speech laws must be vigorously opposed.

Yet there is a difference between sending a posting out to the members of a mailing list, or your followers on a social network -- vs. having the same message blaring out of loudspeakers for all to hear on every street corner of the planet.

The difference relates fundamentally to "discoverability" -- how likely it is that any given posting will be seen or found beyond the context in which it was originally made. In other words, simply because a posting is publicly available to find in a search engine and to read via a public URL, doesn't equate to purposely "publicizing" that material beyond its original posting context.

There are many facets to this dilemma, and various aspects of them apply to most social networks today, including Twitter, Facebook, Google+, and others.

The aspect perhaps most relevant to the recent Twitter reversal relates to an increasingly popular concept in social networking, amounting to the idea that it's acceptable -- even desirable -- to present logged-in users and non-logged-in observers with completely disparate views of the same underlying discussion threads, streams, or user status states.

So, for example, logged-in user A may see a different "public" stream of discussion than that observed by logged-in user B, and non-logged-in C may see something different in other ways -- in some cases actually even more complete than what A or B might be seeing, depending on possible user blocking relationships between B and A.

This takes us to the heart of the Twitter controversy. Under their now aborted changes, blocking a user would only have prevented the blocking party from seeing what the blocked party was saying -- the blocked party could have continued publicly harassing the blocker, with such remarks seen by everyone who would have seen them previously, except the party who triggered the ersatz block itself. This situation was quite reasonably described by critics as roughly equivalent to being offered a blindfold and earplugs to deal with someone covering your home with graffiti -- or, to characterize it a different way, "Lie back and think of England" [go ahead, look it up!] ...
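The difference between the two models can be sketched as a toy visibility filter. This is a hypothetical simplification for illustration only -- the function and policy names are my own, not Twitter's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def visible_stream(posts, viewer, blocks, policy):
    """Return the posts a given viewer sees in a public stream.

    posts:  list of Post objects, in stream order.
    viewer: a username, or None for a logged-out observer.
    blocks: set of (blocker, blocked) username pairs.
    policy: "block" -- original model: each party's posts are hidden
                       from the other.
            "mute"  -- the aborted model: only the blocker's own view
                       is filtered; the blocked party sees everything.
    """
    if viewer is None:
        # Logged-out observers see the complete public stream --
        # potentially *more* than either party to a block sees.
        return list(posts)
    result = []
    for post in posts:
        if (viewer, post.author) in blocks:
            continue  # viewer blocked/muted this author: hidden in both models
        if policy == "block" and (post.author, viewer) in blocks:
            continue  # original model also hides the blocker from the blocked
        result.append(post)
    return result
```

Run against a two-post stream where alice has blocked bob, the "mute" policy leaves alice's posts fully visible to bob, while the "block" policy hides them -- which is precisely the distinction critics seized on.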

Supporters of Twitter's changes argued that blocking a user never really stopped them from seeing your public postings anyway, and that not informing someone that they were blocked would help prevent possible reprisals from the blocked party (on the theory that they wouldn't catch on to the fact that they were blocked).

But most any battered woman is likely to tell you that assuming harassers are stupid, or that ignoring them is a solution, is utterly and perhaps lethally wrong.

It is true however that those public postings would still be visible if you went out looking for them. But the inability to directly associate with them, within the context of the specific streams and threads themselves, is still highly significant once blocking has been enabled.

There's a slippery slope aspect to all this as well in the broader social networking context.

Once you accept the proposition that it's OK to not inform someone that they've been blocked, and to present them with a version of a stream or thread that is actually more limited than that seen by other users or even the public, it becomes much more acceptable to spread this mindset into other areas.

User A may not realize that the comments they posted on page B are only visible to A, and not to anyone else (unless A logs out and inspects the page from that vantage point) -- or that comments written by A and queued for moderation, or rejected by moderators, may still appear to logged-in A as if they are publicly viewable, even though they are not yet (or never may be).
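This "author-only visibility" pattern can be sketched in a few lines. Again, this is a hypothetical illustration of the general technique (sometimes called shadow moderation), not any particular site's code:

```python
def comment_view(comments, viewer):
    """Return the comments a viewer sees on a page.

    Each comment is a dict with 'author', 'text', and 'status' keys,
    where status is one of 'public', 'pending', or 'rejected'.
    Pending and rejected comments are shown only to their own author,
    who therefore cannot tell -- without logging out and inspecting
    the page as an outsider -- that no one else sees them.
    """
    return [c for c in comments
            if c["status"] == "public" or c["author"] == viewer]
```

For a page with one public and one pending comment from user A, user A sees both, while every other viewer (logged-in or not) sees only the public one.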

The practicality of such approaches when attempting to manage large social networks seems clear enough, but the resulting dilemmas are arguably ethically dubious at best.

These conflicts become even more noteworthy as the concept of "public" spreads into third-party contexts beyond the scope of original postings, a situation I alluded to above.

When a posting made publicly to a set of followers becomes routinely visible and highlighted to affinity and interest groups who are not largely congruent with that original posting audience, the impact of the posting itself can change in a fundamentally qualitative manner, both unexpected and unwelcome from the standpoint of the posting party.

In essence, the public posting has now been publicized in a place and manner not in accordance with the poster's original intentions.

By analogy, if you went looking for a job by posting your information publicly on a tech job website, you probably wouldn't want that information crossposted to a publicly available sex magazine (or perhaps you would, but the point is that you likely don't want the job website to perform that crossposting without your explicit permission.)

As you can see, the entire "public vs. publicized" arena is nontrivial to grasp in its scope, and I can really only scrape the surface here today.

But the next time you hear discussions about public information on the Web, particularly in relation to social networks -- the next time you hear someone say "public is public" (including me!) -- I urge you to consider the fascinatingly complex maze of twisty passages that resides between public and publicized, and how best we may find our way through them without the use of magic wands or magic words. [Insert "XYZZY" joke here? Naw ...]

Be seeing you.

--Lauren--
Disclaimer: I'm a consultant to Google. My postings are speaking only for myself, not for them.

Posted by Lauren at 11:45 AM | Permalink


