October 08, 2015

Research Request: Seeking Facebook or Other "Real Name" Identity Policy Abuse Stories

Facebook's "real name" identity policy -- and potentially similar policies at other social media services -- continues to raise the specter of individuals being targeted and harmed as a result of these policies, not just online but physically as well, especially persons who are already vulnerable in various ways.

While we know of some specific examples where those so affected have been in a position to speak out about these abuses, it seems certain that vastly more individuals have been similarly negatively impacted, but have been understandably unable or unwilling to discuss their situations publicly.

I am attempting to gather as much information as possible about the scope of these issues as they have affected these individuals.

If you feel that you have personally been abused by Facebook or other Internet systems with "real name" identity requirements, I'd greatly appreciate your telling me as much about your situation as you feel comfortable doing. If you know of other persons so affected, please pass this request on to them if you feel that doing so would be appropriate.

Regardless of whether you identify yourself to me or not, the details of what you tell me will remain completely confidential unless you specifically indicate differently, and I will otherwise only use this data to develop aggregate statistics for summary public analysis and reports.

I would appreciate anything relevant to these issues that you can share with me via email at:


Thank you very much. Take care, all.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 04:10 PM | Permalink

Why Facebook's Dangerous "Real Names" Policy Is Like the NRA and Guns

This posting isn't about the monsters of the NRA, their minions, and their blood-soaked hands. But it is about facing reality, not ignoring data, and about harm caused to real people, especially those who are already marginalized in our societies. As we'll see, in these respects there is a disquieting parallel between Facebook and the National Rifle Association, which once glimpsed can be very difficult to put out of one's mind.

I've talked many times before about the dilemmas associated with social media "real name" identity regimes, which attempt to require that users be identified via their actual "real world" names rather than by nicknames, pseudonyms, or in various anonymous or pseudo-anonymous forms.

At present, Facebook is the globe's primary enforcer of a social media "real names" ecosystem. And despite a mountain of evidence that this policy does immense harm to individuals, they have held steadfastly to this model. Google+ initially launched with a real names policy as well, but one of Google's strengths is realizing when something isn't working and then adjusting course as indicated -- and Google+ no longer requires real names.

Facebook's intransigence, though, is reminiscent of -- oh, for example -- being faced with overwhelming evidence that as gun availability increases, gun violence increases, and then proposing even more guns as a solution to gun violence.

Facebook can claim that real names don't hurt people, and the NRA can claim that more guns are safer than fewer guns, but only sycophants will buy such bull in either case.

The original ostensible justifications for real names requirements have been pretty thoroughly shredded over the last few years.

It seems pretty clear that Facebook has hoped all along to leverage its "one name, real identity" model into Facebook becoming a kind of universal identity hub that users would broadly employ both online and, in many cases, offline as well. Facebook's founder and CEO Mark Zuckerberg famously said, "Having two identities for yourself is an example of a lack of integrity." This view is a necessary component of Facebook's ongoing hopes for real name monetization across the board.

Facebook's "universal identity" model thankfully hasn't really panned out for them so far, but they have certainly moved to try to push their real names methodology into other spheres nonetheless.

One obvious example is the Facebook commenting system, widely used on third-party sites and requiring users to login with their Facebook (real name) identities to post comments. A supposed rationale for this requirement was to reduce comment trolling and other comment abuse.

However, it quickly became clear that Facebook "real name" comments are a lose-lose proposition for everyone but Facebook.

There's no evidence that forcing people to post comments using their real identities reduces comment abuse at all. In fact, many trolls revel in the "honor" of their abusive trash being so identified.

Meanwhile, thoughtful users in sensitive situations have been unable to post what could have been useful and informative comments, since Facebook's system insists on linking their work and personal postings to the same publicly viewable identity -- making it problematic to comment negatively about an employer, or to admit that your child has HIV, or that you live a frequently stigmatized lifestyle, for example. In some cases, the repercussions can be potentially life-threatening.

On top of all that, failures of these real name commenting systems give major third-party firms a convenient excuse to shut down comments entirely across their sites, rather than making the effort to moderate comments effectively.

And much like the NRA's data-ignoring propaganda, the deeper you go with Facebook the more ludicrous everything gets.

Facebook's system for users to report other users for suspected "identity violations" would seem not particularly out of place in old East Germany under the Stasi -- "Show us your papers!"

Users target other users with falsified account identity violation claims, causing accounts to be closed until the targeted, innocent users can jump through hoops to prove themselves "pure" again to Facebook's identity gods. Many such impacted users are emotionally wrecked by this kind of completely unnecessary and unjustifiable abuse.

There are other related issues as well. In a new public letter, a large consortium of public interest groups is asking Facebook to change or ideally end its real names policies, and suggests that in some parts of the world such policies may actually be illegal.

Yet this really isn't all about Facebook, even though Facebook is unarguably the "800-pound gorilla" in the online identity room.

In a world where (for better or worse) our Internet access and content increasingly funnels through a relative handful of large firms, and governments around the world are rapidly embracing censorship, it's more important than ever that individuals not be stuffed into "one size fits all" identity regimes.

We must not permit online anonymity and pseudo-anonymity -- both crucial aspects of legitimate free speech -- to become effectively banned or criminalized.

Mistakes made in these policy realms today could significantly and perhaps permanently ruin key aspects of the Internet going forward, and these are matters that must be dealt with logically and based on data, not emotions.

To do otherwise is basically like playing Russian roulette with the potentially unlimited wonders of the Net itself. And while that might enrich the gun merchants who don't care whose brains the bullets splatter, for the rest of us it would be a very sad outcome indeed.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 10:17 AM | Permalink

October 07, 2015

Europe's Big, Big, Big Lie About Data Privacy

By now you may have heard about a European court's new decision against the so-called data "Safe Harbour" (over here we'd spell it "Safe Harbor") framework, involving where various Internet data for various users is physically stored.

You can easily search for the details that explain what this could affect, what it potentially means technically and legally, and generally why this dangerous decision is a matter of so much concern in so many quarters.

But here today I'm going to concentrate on what most of those articles you'll find won't be talking about -- what's actually, really, pushing the EU and various other countries around the world to demand that user data be kept in their own countries.

And you can bet your bottom dollar, euro, or ruble, it's not for the reasons they claim.

We have to begin with an unpleasant truth.

All countries spy. All of them. Every single one. No exceptions. They always have spied, they always will spy. Humans have been spying on each other since the caves.

And demands for "data localization" in reality have virtually nothing to do with privacy, and virtually everything to do with countries wanting to be sure that they can always spy on their own citizens and other residents.

Generally (but not always) intelligence and law enforcement services around the world draw some sort of (often muddy) line between domestic spying and spying on the activities of other countries. The rules and laws any given nation uses in-country can be different from their "beyond their borders" spying laws. In some countries, domestic spying is simply considered a normal police function, and in some nations the dividing line between law enforcement and intelligence agencies is nearly or completely nonexistent.

Even when regulations related to surveillance exist in an individual country, they are often officially ignored in many contexts, with nebulous "national security" concerns taking precedence.

Again it's important to emphasize: All countries spy. They spy to the maximal extent of their technical and financial abilities.

It has not been uncommon for nations to consider spying outside their borders to be a completely open game, not subject to any effective rules or limits. After all, those you're spying on out there aren't even your citizens!

But this is not to say that domestic spying isn't a major component of many countries' intelligence apparatus -- and entrenched domestic surveillance regimes in some countries outside the U.S. make Edward Snowden's "revelations" about the NSA look like a drop in the bucket.

Ironically, Snowden's newly adopted home under the kindly influence of Czar Putin is one of the world's worst offenders in terms of domestic surveillance. China is another.

And coming up close behind is Europe.

The clues as to why Europe is now in this pitiful pantheon can be discerned clearly if you pay attention to what EU politicians and other EU officials have been saying publicly, even if we ignore the known revelations about their own spying activities.

Terrorism. It's on almost all their lips, almost all the time.

And this drives not only horrendous concepts like the EU (and now other countries) attempting to impose global censorship via "Right To Be Forgotten" (RTBF) regimes, but their demands for ever greater surveillance capabilities. Their rising tide of ostensible panic over strong encryption systems also plays into this same "rule by fear" mindset.

Which brings us back around to "safe harbour" and data localization.

The real reason you have countries demanding that the data of their citizens and other residents be stored in their own countries is to simplify access to that data by authorities in those countries, that is, for spying on their own people.

Notably, while U.S. authorities are indeed making a lot of noise trying to condemn strong encryption systems, you don't see serious calls for U.S. residents' data to be stored only on U.S. servers.

So what's the deal with the EU, and Russia, and various other countries about data localization? Clearly, having the servers in-country doesn't increase privacy -- it merely provides easier physical access to those servers and their associated networking infrastructures for law enforcement and intelligence operations.

True privacy protection isn't based on where data is located, but on the privacy policies and technologies of the firms maintaining that data, no matter where it physically resides.

So in many ways it's the EU/Russian politicos' worst data nightmare to have user data stored by companies like Google that won't just hand it over on any weak pretext, that are implementing ever stronger encryption systems, and that have incredibly strict rules and protections regarding access to user data -- in particular regarding the legal processes required for access to that data by governments or other outside parties.

I'll note here once again that NSA or other U.S. intelligence agencies never had the ability to go rummaging around in Google servers as some of the early out-of-context clickbait claims of Snowden were inaccurately touted to imply. I've seen how Google handles these areas internally. I know many of the Googlers responsible for these systems and processes. If you don't want to believe Google or myself on this, that's your prerogative -- but you'd be wrong.

But in many other countries, law enforcement or intelligence services can get physical access to servers in those nations without any significant legal process at all -- just a nod and a wink, if that much.

That, dear friends, is what's actually going on. That's what exposes the big, big, big lie of data localization demands.

It's not about privacy. It's exactly the opposite. It's all about spying on your own people. It's about censorship. It's about control.

And like it or not, that is the sad and sordid truth.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 11:12 AM | Permalink

October 06, 2015

Google's New "Now on Tap" Brings Powerful Features -- and Interesting Privacy Issues

Google's new Android M ("Marshmallow") release includes a much-anticipated capability called "Now on Tap" (NoT), and I'm suddenly receiving a lot of privacy-related queries about it. Let's see if we can clear some of this up a bit.

Essentially, "NoT" permits you to ask Google to provide more information related to the current screen you're viewing in an Android app -- even a non-Google app. It's similar in some respects to Google's "Now" cards, which -- if those functions are enabled -- can provide more information based on your web browsing history, searches, and data explicitly shared with Google by apps.

NoT however takes another big step -- it actually can "read" what's on your screen being displayed by an Android app, and provide you with additional information based on that data.

Obviously to do this, data about what you're looking at needs to be sent to Google for analysis.

The way this is being done -- and this is very early discussion here based on the information I have at hand -- seems to be quite well thought out.

For example, you have to opt-in to NoT in the first place. Once you're in, data from your current screen is only sent to Google for NoT processing when you long-press the Home button on your device.

My current understanding is that both text and screenshots are sent for analysis by default if you have NoT enabled -- logical given today's display methodologies. In fact, buried in the settings you can apparently choose alternate providers of such services, and whether you want to send text only or text plus screenshots.

So clearly a lot of deep thinking went into this. And make no mistake about it, NoT is very important to Google, since it provides them with a way to participate in the otherwise largely "walled garden" ecosystem of non-Google apps.

Still, there are some issues here that will be of special importance to anyone who works with sensitive data, particularly if you're constrained by specific legal requirements (e.g., attorney-client privilege, HIPAA, etc.).

Some of this is similar to considerations when using Google's optional "data saver" functions for Android and Chrome, which route most of your non-SSL data through Google servers to provide data compression functionalities.

Fundamentally, there are some users in some professions who simply cannot risk -- from a legal standpoint if nothing else -- sending data to third parties accidentally or otherwise unexpectedly.

In the case of NoT, for example, my understanding is that when a user long-presses Home to send, the entire current view is sent to Google, which can include data that is not currently visible to the user (e.g., material you'd need to scroll down to view). This could, for example, include financial information, personal and business emails, and so on. And while NoT reportedly includes a function that allows app developers to prevent its use (in financial apps, let's say), this requires that the app developers actually compile in this function.
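As a sketch of what such a developer opt-out could look like, Android M's publicly documented mechanisms include marking a window as secure (which keeps its contents out of screenshots, including assist captures) and overriding the assist callbacks to withhold data. The class name below is hypothetical, and this is an illustrative sketch of those public APIs, not a description of how any particular app actually handles NoT:

```java
// Illustrative sketch for an Android M (API 23) Activity that limits
// what the assist layer (e.g., Now on Tap) can capture.
import android.app.Activity;
import android.app.assist.AssistContent;
import android.os.Bundle;
import android.view.WindowManager;

public class SensitiveActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // FLAG_SECURE keeps this window out of screenshots,
        // including the screenshot captured for assist processing.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                             WindowManager.LayoutParams.FLAG_SECURE);
    }

    @Override
    public void onProvideAssistContent(AssistContent outContent) {
        // Intentionally provide no structured content to the assistant.
    }

    @Override
    public void onProvideAssistData(Bundle data) {
        // Intentionally leave the assist data bundle empty.
    }
}
```

The catch, as noted above, is that protection of this kind only exists if each individual app developer bothers to build it in.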

Another issue is that -- at the moment -- I am unclear as to all of the privacy policy aspects that apply to NoT -- how long is related data retained, what are all the ways it can be used now or perhaps later, etc. I don't expect problems in this area, but I am trying to get much more detailed information than I currently have seen in this context.

So overall, my quickie executive summary at this point suggests that Now on Tap will be a very useful feature for vast numbers of users -- but it won't be appropriate for everyone. Especially in corporate environments or other situations where sending data to third parties would be considered inappropriate irrespective of those third-parties' privacy policies, due consideration of these various capabilities and issues would be strongly recommended.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 04:36 PM | Permalink

September 30, 2015

How ISPs Will Royally Sucker the Internet, Thanks to Ad Blocking

Largely lost in the current controversies about users blocking ads from websites is a dirty little secret -- users are about to be played for suckers by the dominant ISPs around the world, and ad blocking will be the "camel's nose under the tent" that makes these ISPs' ultimate wet dreams of total control over Internet content come true at last.

There have been a number of clues already, with one particularly notable new one today.

The big red flashing warning light is the fact that in some cases it's possible for firms to buy their way past ad blockers -- proving demonstrably that what's really going on is that these ad blocking firms want a piece of the advertising pie -- while all the time they wax poetic propaganda about how much they hate -- simply hate! -- all those ads.

But these guys are just clowns compared to the big boys -- the dominant ISPs around the world.

And those ISPs have for so very long wanted their slices of that same pie. They want the money coming, going, in and out -- as SBC's CEO Edward Whitacre noted back in 2005 during their takeover of AT&T, when he famously asked "Why should [Internet sites] be allowed to use [my] pipes for free?" -- conveniently ignoring the fact that his subscribers were already paying him for Internet access to websites.

Now -- today -- ISPs sense that it's finally time to plunge their fangs into the Net's jugular, to really get the blood gushing out into deep scarlet pools of money.

Mobile operator Digicel announced today that they intend to block advertising (except for some local advertisers) on their networks across the South Pacific and Caribbean, unless -- you guessed it -- websites pay them to let their ads through.

And while their claimed targets are Google, Facebook, Yahoo, and the other major players, you know it will never stop there -- ultimately, millions of small businesses and other small websites that depend on those ads, many of them one-person operations and often not even commercial, will be decimated.

Germany's Deutsche Telekom is known to have been "toying" with the same concept, and you can be sure that many other ISPs are as well. They're not interested in "protecting" users from ads -- they're all about control and extorting money from both sides of the game -- their subscribers and the sites those subscribers need to access.

Where this all likely leads is unfortunately very clear. No crystal ball required.

Some sites will block ISPs who try this game. Broad use of SSL will limit some of these ISPs' more rudimentary efforts to manipulate the data flows between sites and subscribers. Technology will advance quickly to move ads "inline" to content servers, making them much more difficult to effectively block.

But right now, firms such as Israeli startup Shine Technologies are moving aggressively to promote carrier level blocking systems to feed ISP greed.

Yet this isn't the worst of it. Because once ISPs have a taste of the control, power, and money -- money -- money that comes with micromanagement of their subscribers' Internet access and usage, the next step is obvious, especially in countries where strong net neutrality protections are not in place or are at risk of being repealed by the next administration.

Perhaps you remember a joke ad that was floating around some years ago, showing a purported price list for a future ISP -- with different prices depending on which Internet sites you wanted to access. Pay X dollars more a month to your ISP if you want to be permitted to reach Google. Pay Y dollars more a month for Facebook access. Another Z dollars a month for permission from your ISP to connect to Netflix. And so on.

It seemed pretty funny at the time.

It's not so funny now -- because it's the next logical step after ISP attempts at ad blocking. And in fact, blocking entire sites is usually technically far easier than trying to block only the ads associated with particular sites -- most users won't know about workarounds like proxies and VPNs, and the ISPs can try to block those as well.
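The asymmetry just described can be sketched in a few lines: blocking a whole site is a single exact hostname lookup, while blocking only its ads means chasing ever-changing URL patterns. The hostnames and rules below are hypothetical, purely for illustration -- no real ISP's blocklist or ad filter is being described:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

public class BlockingSketch {
    // Hypothetical site-level blocklist: one exact-match lookup,
    // trivial for an ISP to apply at scale.
    static final Set<String> BLOCKED_HOSTS = Set.of("example-video.com");

    // Hypothetical ad-path rules: fragile pattern matching that must
    // chase ever-changing ad URLs -- an arms race.
    static final List<String> AD_URL_PATTERNS =
            Arrays.asList("/ads/", "ad.js", "banner");

    static boolean siteBlocked(String host) {
        return BLOCKED_HOSTS.contains(host);
    }

    static boolean looksLikeAd(String urlPath) {
        return AD_URL_PATTERNS.stream().anyMatch(urlPath::contains);
    }

    public static void main(String[] args) {
        System.out.println(siteBlocked("example-video.com"));
        System.out.println(looksLikeAd("/content/article1"));
        System.out.println(looksLikeAd("/ads/banner123.js"));
    }
}
```

The point of the sketch is simply that whole-site blocking requires far less machinery than ad filtering -- which is why, once the blocking infrastructure exists, extending it from ads to entire sites is the path of least resistance.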

These are the kinds of nightmarish outcomes we can look forward to as a consequence of tampering with the Internet's original end-to-end model, especially at the ISP level.

It's a road to even more riches for the dominant ISPs, ever higher prices for their subscribers, and the ruin of vast numbers of websites, especially smaller ones with limited income sources.

It's the path to an Internet that closely resembles the vast wasteland that is cable TV today. And it's no coincidence that the dominant ISPs, frantic over fears of their control being subverted by so-called cable TV "cord cutters" moving to the Internet alone, now hope to remake the Internet itself in the image of cable TV's most hideous, anti-consumer attributes.

Nope, you don't need a Tarot deck or a Ouija board to see the future of the Internet these days, if the current patterns remain on their present course.

Whether or not our Internet actually remains on this grievous path is, of course, ultimately in our hands.

But are we up to the challenge? Or are we suckers, after all?

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 03:47 PM | Permalink

September 28, 2015

Law Enforcement's Love/Hate Relationship with Cloud Auto Backup

There's a story going around today regarding an individual who was arrested and charged with assaulting a police officer when authorities arrived over a noise complaint. But cellphone video recorded by the arrestee convinced a judge that police had assaulted him, not the other way around. What's particularly unusual in this case is that the arrestee's cellphone had "mysteriously" vanished at the police station before any video was discovered.

So how was the exonerating video ultimately resurrected? It turns out it had been saved to Google servers via the phone's enabled auto backup system. So the phone's physical disappearance did not prevent the video from being preserved to help avert a serious miscarriage of justice.

Lawyers and law enforcement personnel around the world are probably considering this story carefully tonight, and they're likely to realize that such automatic backup capabilities may be double-edged swords.

On one hand, abusive cops can't depend on destroying evidence by making cellphones disappear or be "accidentally" crushed under a boot. Evidence favorable to the defendant might still be up on cloud servers, ready to reappear at any time.

But this also means that we can likely expect to see increasing numbers of subpoenas from law enforcement, lawyers, government agencies, and other interested parties wanting to go on fishing expeditions through suspects' cloud accounts, in hopes of finding incriminating photographic or video evidence that might have been auto-backed up without the suspects' knowledge or realization.

While few would argue that guilty suspects should go free, there is more at stake here.

The mere possibility of such fishing expeditions may dissuade many persons from enabling photo/video auto backup systems in the first place -- not because they plan to commit crimes, but based on relatively vague privacy concerns. Even if the vast majority of honest persons have no realistic chance of being targeted by the government for such a cloud search, the emotional factor is likely to be real for many innocent persons nonetheless.

And of course, if you've turned off auto backup due to such concerns, video or other data that might otherwise have saved the day at some point in the future may not be available at all.

Adding to the complexities of this calculus is the fact that most uploaded videos or photos on these advanced systems are not subject to the kind of strong end-to-end encryption that has been the focus of ongoing controversies regarding proposed "back door" access to encrypted user data by authorities.

Obviously, for photos or videos to be processed in the typical manner by service providers, they will be stored in the clear -- not encrypted -- at various stages of the service ecosystem, at least temporarily.

What this all amounts to is that we're on the cusp of a brave new world when it comes to photos and videos automatically being protected in the cloud, and sometimes being unexpectedly available as a result.

The issues involved will be complicated both technically and legally, and we have only really begun to consider their ramifications, especially in relationship to escalating demands by authorities for access to user data of all kinds in many contexts.

Fasten your seatbelts.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 08:05 PM | Permalink

September 19, 2015

You'll Probably Hate this Posting about Ad Blockers and Ad Blocking

This is a discussion that I really wish didn't need to take place at all.

But we're here, and while understanding how we got to this point is obviously crucially important, mapping the way forward is even more of a priority.

By now you may know that I've taken a rather hard -- and in some quarters quite unpopular -- stance against ad blockers and ad blocking.

Luckily, I'm not running a popularity contest. So I want to briefly explain some aspects of my reasoning on this.

I'm not claiming any brilliant philosophical insights, but I do perhaps bring two aspects to the table of some value. One is historical perspective, thanks to having been hanging around the Net pretty much since its beginning.

The second aspect is the continual flood of unsolicited email (and sometimes phone!) queries that I receive about Google, broader Internet issues, and other tech-related topics. This provides me with an enormous amount of data concerning Internet users' thinking and worries. It's all self-selected of course -- so cannot be used for statistically valid extrapolations -- but it does cover the gamut in useful ways.

The ad blocking crisis -- and I do believe we are now on the cusp of a true crisis in this regard -- has been long coming.

There's no denying that in many ways Web ads have flown out of control. People used to complain about relatively lightweight static banner ads. But the rise of large, pre-loaded (and perhaps the worst sin of all, autoplaying with audio) full-motion video ads was the straw that broke the camel's back for many users. Browser developers have moved rapidly to provide their own mechanisms to prevent those from suddenly blaring out of your speakers unexpectedly, but there's no denying the existence of an "arms race" in ads, particularly from less savory sources.

But we get into trouble rapidly if we try to treat all ads and all ad networks as inherently evil, and the collateral damage to the forces of "goodness and niceness" (as Maxwell Smart used to say) can be devastating.

Because all ad networks and all ads are definitely NOT created equal.

And despite the statements of many ad blocking proponents who claim to only be concerned about "bad" and "misbehaving" ads, or slower page load speeds, or ad-enabled malware, my view is that in most cases these claims -- and the circumstances that flow from them -- are both cavalier and hypocritical.

Email I've been receiving on this topic over the last few days has broken down mainly into two categories.

First, there are the small websites -- often run by one person, or a husband and wife -- that operate on essentially a "hobby" basis and are terrified of losing even the relatively small amount they receive from ads, which helps keep their heads above water and their websites on the air.

Ad blocking proponents by and large are taking a remarkably evil attitude toward such sites, saying things like "if you can't find other ways to make money go out of business" -- or much stronger language.

Outside of the fact that many of these sites aren't even businesses in the first place, just informational and/or fun hobby sites, the reality is that replacement income models for the existing ad regimes do not exist for most of these websites in a practical sense.

Rupert Murdoch and other giant media conglomerates will find ways to adjust and survive, but for the little guys the situation is much more bleak.

Paywalled subscription models are utterly impractical for most of them -- the uptake would be minuscule. Micropayment systems have been a parade of failures, and none exist today with sufficient reach to be of any value at all in these circumstances, even assuming enough people would bother signing up to pay through them in the first place -- a highly doubtful proposition.

This "let the little sites die" attitude on the part of so many ad blocker fans seems especially odd given that many of these same people and groups have long paid at least lip service to the concept of diversity on the Web. They've complained that the "big guys" have all the advantages -- even as blocking advocates push a technology that would inevitably funnel an even larger percentage of Net revenues to the media giants as small sites are starved out of existence.

Nor do these proponents seem to care about Internet users who do not have the disposable income to pay actual cash to access sites that they formerly got for free via ads.

Remarkably, even as they complain about "walled gardens" and "in-app purchase abuse," blocking proponents advocate a blocking regime that could wreck the key aspect of the Net -- open websites themselves -- which have been the most dependable avenue of open information since the dawn of the Web.

And claims that some new revenue mechanism will come along to save small sites sound to my ears like promising a cure for the patient after they're dead -- "nothing to worry about, right?" Wrong!

Apple's new iOS 9 ad blocking push threatens to be the inflection point that transforms ad blocking from a relatively niche application class to much more of a default situation.

And let's be clear about this. While Apple's actions have been widely characterized as an assault against Google, they can also be viewed as even more of an assault on the entire Internet and the ability to access information openly without sites having to pay Apple for the privilege of reaching users.

Already, "Wired" has published an article that today can only be viewed if you have an iPhone running iOS 9!

Which brings us to the second category of relevant email I've been receiving lately -- messages from the ad blocking proponents themselves -- many of whom insist that they are technically competent and only would block "bad" ads -- not ads that they personally found to be acceptable and pure of heart.

I don't believe most of them, because in so many cases there's an implicit (or even explicit!) subtext that they feel somehow "privileged" and above the fray, deserving of getting everything they want for free. And yes, many of these ad block proponents are launching into "information wants to be free" tirades that cover reading websites while blocking ads, stealing music and movies, and all the rest.

These ad blocking groupies also tend to make propagandist, false statements about the tracking and ad targeting models associated with ad networks, failing to note that the reputable networks maintain user anonymity in their systems, don't sell user data to third parties, and are vastly more protective of user data than your friendly bank or credit card company, which often happily sells fully-identified -- not anonymous! -- data to third parties in enormous quantities.

But let's leave these "technically competent" ad blocker fans aside for the moment.

Because as ad blocking rapidly goes mainstream and is even installed by default, the majority of users are never going to change the ad blocker settings to let "good" ads through.

What's more, you can be sure that the most popular ad blockers will be the ones that attempt to block ALL ads, just as a cable TV channel with commercials would quickly be abandoned for a channel with the same programming without commercials.

Already we've seen the author of a blocking app that had, over the course of several days, become the most popular application in the Apple App Store admirably withdraw it, expressing what we could call "developer's remorse" over the collateral damage his app could do. But plenty of blocking apps written by far less ethical authors were ready and waiting to take up the slack.

For sites without other income possibilities, there are a number of ways they can try to fight back, all of them unfortunate.

They can try to block users who are using ad blockers. Some sites are already doing this (including some major sites for some materials). They could dramatically slow down page speeds for users with blockers.

They could start running sleazy paid "native advertising" -- fake articles that are actually paid placements and would be unblockable by conventional ad blockers, causing users to effectively trade ads that they know are ads for ads that they probably won't realize are ads at all.

My guess though is that associated pleas to users to turn off ad blockers will fall on deaf ears. Most people won't bother, but will still express endless indignation as their personal favorite small sites gradually wink out of existence, along with most of the Web's diversity.

I don't have a magic wand solution to any of this, but I will openly admit that the pervasive hypocrisy I hear from some of the most vocal proponents of ad blocking strikes me as deeply selfish and ugly.

Yesterday I created a new Google+ community to discuss these issues, and hopefully to find the start of a path toward workable and practical solutions. It's at:


and you're most welcome to join the discussion there.

In the meantime, please keep in mind that the ads you block may very well be paying one way or another for the content that you and many other people most care about.

Remember, we still don't know how to put Humpty Dumpty back together again.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 11:05 AM | Permalink

September 18, 2015

New G+ Community: Ad Blocking Policy Discussions

The widespread use of Web ad blocking technologies carries immense implications for the future of the Web in particular and the Internet in general. While alternative funding models exist for some (especially larger, corporate media) sites, many smaller and/or independent sites do not have alternatives to advertising for even paying their basic bills, risking an enormous loss of diversity on the Web.

Let's discuss the issues:



Posted by Lauren at 02:15 PM | Permalink

September 11, 2015

Why We Positively, Absolutely, Can't Trust the Government with Encryption

By now you're hopefully aware that the U.S. federal government is engaged in a major effort to pressure technology firms like Google and Apple to provide "backdoors" into encryption systems (particularly for mobile devices) that are increasingly designed so that the firms themselves cannot even decrypt the data without cooperation from the devices' owners. Simultaneously, there are efforts to pressure Congress into mandating such backdoors if the firms refuse to voluntarily cooperate.

Despite the fact that essentially every reputable security, encryption, and privacy expert agrees that it is technically impossible to design such a backdoor that would not massively increase the potential for black-hat hacking -- and so dramatically decrease the security of these systems -- law enforcement continues to imply that if you don't see things their way -- well, perhaps you're not a loyal American.

This was very nearly stated explicitly by the FBI and CIA directors at the Intelligence and National Security Summit in Washington yesterday, where the men bemoaned negative public opinion, "deep cynicism," and "venom" directed at the backdoor access plans -- with CIA Director John Brennan suggesting that persons promulgating these views "may be fueled by our adversaries."

Mr. Brennan's remark is reminiscent of President Richard Nixon's paranoid delusions that antiwar Vietnam protesters were all the puppets of ghostly Communist agents.

Well, Mr. Brennan, let me help set you straight regarding your comment, which I believe many of us in the technology community find to be extremely misguided and offensive.

We don't have any foreign masters. We simply don't trust you.

And it's not just you. Almost everywhere we look at the intersection of technology and any agencies involved even peripherally in law enforcement activities, there's a long list of lies, errors, mismanagement, screw-ups, and abuses galore.

It's an ironic situation to be sure, given that the technology displaying these very words at this very moment can trace its ancestry to a Department of Defense computer networking project.

But the sad truth is that at every level of government, no matter whether Democrats or Republicans are in power, it's generally the same story.

It starts at the local level, with municipalities lying to citizens about red light cameras, license plate readers, cellphone interceptors, and other police surveillance systems.

At the state level it moves up to abuse and foul-ups of DMV databases and more.

And at the federal level the list is almost too long to even begin.

The recently revealed Office of Personnel Management hack exposed the personal data -- including sensitive security clearance applications and related forms -- of perhaps four million people or more. A 29-year-old contractor waltzes out of NSA with a thumb drive filled with reams of the agency's most sensitive documents.

No -- Mr. CIA Director and Mr. FBI Director -- you're not going to sell us your foreign influence bogeymen this time.

We simply believe that government agencies lack the honesty and competence to be entrusted with keys to our own encryption -- the security of which is rapidly becoming a fundamental requirement of our day-to-day lives.

Frankly, even if there were a magic wand that could create that impossible backdoor system in a seemingly secure and safe manner -- we still wouldn't and couldn't trust you not to find avenues to abuse it.

This is overall a very unfortunate state of affairs, because yes, we know that encryption may be leveraged for evil in very serious ways.

But you still can't get blood out of a stone.

The technical reality is that the kinds of encryption backdoors you want cannot be made secure and would themselves represent horrific security risks.

Perhaps someday you'll find ways to earn back our trust. But all the trust in the world won't change the technical realities that make encryption backdoors a non-starter.

And the sooner you understand these truths, the better it will be for us all.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 06:58 PM | Permalink

August 22, 2015

Why "Godwin's Law" Doesn't Apply to Donald Trump

Let's get this straight once and for all: Comparisons between Adolf Hitler and Donald Trump do not invoke Godwin's Law.

Godwin's Law applies to discussions where Nazi analogies make no sense. Comparing a strict physical education teacher with Hitler, for example, is an obvious invocation of Godwin's Law.

However, Godwin's Law explicitly does not apply when actual Nazi parallels are in play.

In the case of Donald Trump, we have a grandiose buffoon spouting outright lies and hate speech, triggering racial violence, demanding the deportation of eleven million plus people including American citizens and the retroactive stripping of citizenship, and attracting crowds who shout "white power" and hand out literature lauding that "Trump will do to the dirty Hispanics what Hitler did to the dirty Jews."

The parallels are obvious and on-point.

Godwin is not in scope.

Nazism and 1930s Germany very much are.



Posted by Lauren at 09:36 AM | Permalink

August 20, 2015

EU Demands Google Forget "The Right To Be Forgotten"

Brussels, Belgium (ZAP) - The European Union today issued a preliminary order requiring that Google and all other search engines and similar services remove all search results related to the EU "Right To Be Forgotten" (RTBF).

"We've been deliberating on this issue for a very long time," noted Winston Charrington, Minister of the European Union World Censorship Directorate. "We've come to the conclusion that only by mandating the complete and total global elimination of all references to RTBF can we avoid unnecessary consternation and controversies regarding those aspects of published history -- that RTBF requires be deleted from search indexes around the planet. In other words, if you don't even realize that censorship is occurring, how could you ever be upset about it? Doubleplusgood!"

Leaders and politicians from around the world were quick to praise the EU's action. Russian president Vladimir Putin issued a statement noting the EU was acting in the best historical traditions of Mother Russia. Chinese leaders offered to provide the EU with "Great Firewall" censorship technology at no charge, "in the furtherance of helping our brothers and sisters in Europe join our information control people's paradise."

GOP presidential candidate Donald Trump immediately called FOX News to say that the EU's actions are a crude start but adding that, "When I'm president you're going to have a really wonderful censorship system here in the USA. It's going to make those Russian and European systems look like stupid, ugly women. You're going to forget there ever were mass arrests and deportations here. I know how to do censorship. You're going to love the Trump censorship system!"

An EU spokesperson noted that upon finalization of this global RTBF censorship order, all search and other references to articles, stories, or other materials describing this order, including this posting, would be retroactively deleted.

Google was unavailable for comment.

- - -

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 05:22 PM | Permalink

August 14, 2015

Why the "Right To Be Forgotten" is the Worst Kind of Censorship

Let's start from a foundational premise on which we hopefully can all agree.

Our abilities to interpret and understand the world around us are predicated on the availability of information.

In the far past, that information was usually entirely based on what we could sense directly or were told by others. Later, written and then printed materials vastly expanded our information reach, both in terms of space and distance, and in terms of time and history.

Today it's inarguable that the Internet is the key to our access to information, and in the absence of a global catastrophe, it seems slated to expand rapidly in that role.

I define censorship as attempts by governments to control the dissemination of information by third parties, usually backed up with civil and/or criminal sanctions.

Censorship (and attempts at censorship) have likely existed back to the dawn of civilization, and it's been a preferred tool of control by rulers and governments ever since.

In the early days of broad Internet expansion, there was a popular -- though I would assert rather naive -- view that the coming of the Net would sweep away national governments and bring about a utopia of open communications.

Of course that's not exactly what happened.

While domestic governments were generally slow on the uptake to understand the power of the Internet, once it really showed up on their radars many moved rapidly to muzzle that power with traditional censorship regimes, with China and Russia leading the way.

It is often said -- I've said it myself -- that it's nearly impossible to completely censor information on the Net, that the ease of mirroring and the variety of bypass mechanisms available make total blockades enormously difficult.

This is true. But there are provisos to that truth that aren't stated as frequently.

One of these is that even if you can't completely censor particular information, governments can often make it such a hassle -- or so personally dangerous -- to pursue accessing that information as to effectively terrify all but the bravest (or in some cases perhaps the most foolhardy) of their population into submission.

So perhaps you can use a VPN to get at the webpage the government doesn't want you to see. But if you're caught at it, are you willing to risk having your entire family arrested, perhaps beaten, and then spending the next 20 years shackled in a dungeon cell?

The traditional techniques of government oppression have definitely maintained much of their power, even in the Internet age.

But at least in most of these cases you know that the forbidden information exists. You are aware of what the government is trying to block from you.

Which brings us to the second proviso from the truth about censorship.

In true Orwellian fashion, even better than blocking people from information is preventing them from ever realizing that the forbidden information exists in the first place.

And this is where the so-called "Right To Be Forgotten" (RTBF) comes into play.

The key premise of RTBF is that if you can prevent your population from realizing that particular data exists on the Web -- even if they could easily access it given such knowledge -- you've achieved censorship Valhalla.

This is why RTBF focuses its death ray on search engines. Governments realize the typical impracticality of excising all copies of information from all possible Internet sources. So they instead order the burning of the search results "index cards" in a deeply disingenuous attempt to fool their populations into not realizing the associated materials exist at all.

Supporters of RTBF concepts bizarrely attempt to claim that RTBF is not actually censorship, since usually the materials at issue still exist somewhere out on the Web.

But this is deeply cynical and, yes, evil. It's like saying that a child has been locked into a safe, and all you need to save them is to guess the combination.

RTBF proponents also prefer to frame their arguments in terms of early European Union RTBF efforts involving "ordinary" individuals.

But already we're seeing the steepness of the slippery slope of their RTBF.

The EU has already made it clear that not only do they want to censor the Net within their own borders, they want to be global censorship czars. They've said that search engine results they've banned should be removed from global indexes, not just from the localized versions that the vast majority of their population uses.

In a particularly outlandish twist, there have even been EU suggestions that search engines be required by law to specifically identify EU citizens as they travel, so that EU censorship edicts can be applied to them no matter where on the globe they may access the Net.

This isn't simply theoretical. France has already demanded that Google apply French RTBF takedowns around the planet, giving France the ability to control what users everywhere else in the world can see. Google is quite appropriately resisting this horrific edict.

And that's just in Europe.

Elsewhere, democratic and totalitarian governments alike are lining up to try to impose their own RTBF censorship on the entire Earth. Putin's Russia has already passed such legislation, even broader and more dangerous than the awful EU variety.

Putin as a global censor? Chinese leaders as global censors?

It's bad enough when Western democracies fall into this trap, but the rush to a lowest common denominator of "acceptable" information that would be triggered by totalitarian leaders exerting such power would be nightmarishly breathtaking to behold.

There is no practical way to proverbially "dip your toe" into RTBF censorship, without ending up quickly and totally submerged and drowning. It's like being "a little bit" pregnant, or setting a match to a piece of flash paper.

Making it crystal clear to our legislatures and political leaders that we will not accept these censorship regimes is absolutely crucial to our civil liberties -- in fact, even to our knowledge going forward of what civil liberties actually are!

This will be an enormously difficult battle, because censorship is very much the natural ally of governments and of politicians.

But if we lose this battle, this war on our basic freedoms, it's very possible that someday -- perhaps not in the very distant future at all -- even these very words you're reading right now may be impossible to ever find again.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 01:19 PM | Permalink

August 09, 2015

It's Time to Go Nuclear Against DMCA Abuse

OK boys and girls. That's the last straw. The straw that broke the camel's back. The jumping of the shark.

It's the end of the line for playing nice regarding entertainment company abuse of the Digital Millennium Copyright Act (DMCA).

The DMCA is like the weather. Everyone talks about it, but nobody seems to do anything very effective about it.

Of course that's not really completely true. There are entities out there trying to change it -- and most of them want to replace it with something even worse -- a draconian mandate for search engines to act as "censorship agents on demand" for the entertainment behemoths.

Granted, the DMCA is a double-edged sword. In key respects, it's aspects of the DMCA that have permitted services like YouTube to exist in the first place, by creating a regime that is significantly enabled by a range of imperfect but ever evolving dispute resolution processes.

But overall the DMCA still remains massively skewed to favor the giant entertainment conglomerates, with its "guilty until proven innocent" model that is a recipe for enormous corporate abuse at the often literal expense of the little guys.

And we now have a new example of this corporate DMCA abuse that is so pure and clear in its stupidity as to once and for all demonstrate that the DMCA imbalance needs to be corrected -- right now.

Adam Sandler's new film "Pixels" has been a horrendous flop. But it may have done some good after all, by demonstrating the lack of regard for accuracy that has become the status quo in DMCA takedown orders -- which, we must remember, are required by the DMCA to be accepted as factual until proven otherwise.

As we learn from:


a massive takedown campaign attacked Vimeo demanding the removal of essentially every video that contained the word "pixels" in its title!

You can imagine the results.

All manner of videos were (as required by law) blocked by Vimeo on the basis of those takedown orders, including totally and utterly unrelated materials that had committed the "crime" of ever using the word "pixels" in their titles -- and (ironically) even the trailer for Sandler's movie itself.

Thank goodness the producers of the various films named "She" over the years didn't try this stunt. Or how about a movie titled "The" for real chuckles?

The impact of such takedown abuse is indeed the Internet equivalent of saturation bombing -- with no consideration given to the innocent parties who will be affected, and in the case of the DMCA, then have to find the time and money to fight back against this abuse -- simply to get their videos back on the Net.

Again, it's the fundamental imbalances of the DMCA that allow this, because there is essentially no cost involved in filing massively overbroad and sloppy DMCA orders. All the power is on the side of the traditional entertainment conglomerates, and they generally don't care how many ordinary folks get hurt in the process.

The righteousness of an appropriate "nuclear option" to provide some balance is obvious. And the basic structure of that "weapon" seems relatively straightforward to visualize.

We must make it expensive with a capital "E" to voluntarily file mass DMCA takedowns that are sloppy, haphazard, and likely to negatively impact significant numbers of innocent parties.

It has to cost. It has to cost big time.

Such abuse has to be made so expensive that even the entertainment industry moguls with the gold-plated toilet seats will start to feel the pain.

We can argue about the order of magnitude for these fines and how they should be assigned, but in the final analysis the totals must be so large that nobody in their right mind would willingly issue an indiscriminate large-scale DMCA takedown ever again.

I'd suggest that these determinations would best be made by some independent body, which would distinguish accidental error from purposeful culpability and assign the fines to be paid.

There are also other ways we could ultimately reach the same necessary DMCA balance, but inaction is no longer a viable alternative.

Legitimate rights holders should be able to appropriately protect those rights -- but not by muzzling -- and steamrolling over -- an array of innocent parties.

It's time to fight back. Enough is enough.


Posted by Lauren at 04:41 PM | Permalink
