November 24, 2015

The Three Letter Cure for Web Accessibility and Discrimination Problems

A few months ago, in "UI Fail: How Our User Interfaces Help to Ruin Lives," I discussed the many ways that modern Web and app interfaces can be frustrating, useless, and even painful for vast numbers of users who don't fit the "majority" category for which app and Web designers tend to build their user interfaces.

This doesn't just include rapidly growing population segments such as older users, but pretty much anyone without perfect vision and motor skills, anyone among the many varying levels of literacy, and anyone with slow or unreliable Internet connectivity. It's a long, long list.

The topic is in focus again with the introduction of Google's new version of Google+, which so far, frankly, is an accessibility disappointment, seemingly moving backwards in key respects.

However, the G+ redesign is not my focus for today, since it is currently opt-in (and can easily be opted out of later if desired) and is very much a work in progress as the G+ team receives user feedback. There have already been some associated improvements in that regard, and I hope to see more. So a detailed discussion of the new G+ UI can wait for now.

What can't wait are the overall problems of how user interfaces can easily create accessibility problems and resultant discrimination against vast numbers of users and potential users.

This is by no means a Google-only problem -- it cuts across the entire Internet industry.

And in most cases it's not a purposeful decision to discriminate -- it's the natural outgrowth of teams of mostly young developers (who else could keep up with coding loads fueled mainly by strong coffee!?) building interfaces that they believe will serve the "bulk" of their users well.

Of course, at the scale of these companies, those users outside that "bulk" can represent hundreds of millions of individuals who can be easily left behind, users who in many cases could most benefit from these technologies.

An obvious question one might ask is why these interfaces don't usually include options to serve different kinds of users. Why not an option for higher contrast text? Or for easier scanning by audio screen readers, or for larger targets for mouse clicking?

The short answer is that in many quarters of our industry, options per se are anathema, to be avoided whenever possible.

Interface designers tend to feel that with enough effort and study data, they can create a singular interface to serve everyone (or at least, everyone they feel really matters at the moment).

What's more, options do add complexity of their own to UIs -- potentially confusing to users -- and can make code maintenance more difficult and expensive.

Yet "universal" interfaces are increasingly showing their fundamental limitations.

Do we need to invent some sort of new technology to solve this problem?

No -- because the solution, while not a panacea in and of itself, already exists.

That solution is the A-P-I: Application Programming Interface.

Simply put, APIs are mechanisms through which programs and systems permit other programs and systems access to various of their internal functions.

APIs already keep much of the Net running, one way or another.

A visit to Google's API Explorer yields a long list of APIs involving ads, maps, email, files, security, content, and much more -- all of which exist to permit third parties to write software that can access key aspects of Google systems and then read/write/process/display settings and data in direct conjunction with those third-party systems.

By now you likely see where I'm leading.

If stable, supported user interface API access were available for services like Google+ -- and the many other firms' systems around the Net that currently put users at an accessibility disadvantage -- it would be possible for third parties (commercial, nonprofit, individuals, etc.) to write their own customized interfaces for these services to meet specific accessibility needs.

Visually enhanced high contrast interfaces? An interface much easier to use for someone with limited motor acuity? There is a vast range of possibilities for customized interfaces to help an enormous number of users, all of which could operate via the same essential kinds of API mechanisms.
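To make this concrete, here is a minimal sketch -- the Post type, its fields, and the markup are all invented for illustration, not any actual service's API -- of how the same structured data returned by a user interface API could drive entirely different presentations:

```java
import java.util.List;

// Hypothetical record of what a UI-level API might return for one item
// in a social stream; the field names here are invented for illustration.
class Post {
    final String author;
    final String text;
    Post(String author, String text) {
        this.author = author;
        this.text = text;
    }
}

public class AccessibleRenderers {
    // A conventional rendering, comparable to a service's default UI.
    static String renderDefault(List<Post> posts) {
        StringBuilder html = new StringBuilder("<div class='stream'>");
        for (Post p : posts) {
            html.append("<p><b>").append(p.author).append("</b> ")
                .append(p.text).append("</p>");
        }
        return html.append("</div>").toString();
    }

    // An alternative rendering from the exact same data: large type,
    // maximum contrast, and ARIA landmarks for audio screen readers.
    static String renderHighContrast(List<Post> posts) {
        StringBuilder html = new StringBuilder(
            "<main role='main' style='background:#000;color:#fff;"
            + "font-size:200%;line-height:1.6'>");
        for (Post p : posts) {
            html.append("<article aria-label='Post by ").append(p.author)
                .append("'><h2>").append(p.author).append("</h2><p>")
                .append(p.text).append("</p></article>");
        }
        return html.append("</main>").toString();
    }

    public static void main(String[] args) {
        List<Post> stream = List.of(new Post("Alice", "Hello, world."));
        System.out.println(renderDefault(stream));
        System.out.println(renderHighContrast(stream));
    }
}
```

The point of the sketch: once the data itself is available through a stable contract, any number of third-party renderers can serve users the default design leaves behind.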

Without APIs, such customized interfaces are usually impractical. Attempts at customization based on "screen scraping" or techniques like page display CSS modifications can break at any time, whenever the underlying format or structure of the displayed pages is altered.
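A toy demonstration of that fragility, with invented markup and class names: a scraper keyed to a presentation detail works until the first cosmetic redesign, then silently fails:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ScrapingFragility {
    // The scraper keys on a presentation detail: a CSS class name.
    static final Pattern POST_BODY =
        Pattern.compile("<div class=\"post-body\">(.*?)</div>");

    static String scrape(String html) {
        Matcher m = POST_BODY.matcher(html);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Works against today's markup...
        String v1 = "<div class=\"post-body\">Hello, world.</div>";
        System.out.println(scrape(v1));  // prints: Hello, world.

        // ...then the site renames a class during a redesign, and every
        // scraper-based accessibility tool breaks overnight.
        String v2 = "<div class=\"pb-v2\">Hello, world.</div>";
        System.out.println(scrape(v2));  // prints: null
    }
}
```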

You must have stable user interface APIs to make this work.

And you can also probably guess why APIs for user interfaces are often resisted by firms.

Such APIs have to be built and maintained, and kept compatible as the firms' mainstream interfaces and backend systems evolve and change.

There's also some loss of control. What's emphasized on a page. Where ads are placed (and how easily ads might be blocked). What's shown? What's hidden? Does any given API-based interface actually make things easier or more confusing for any given user? Can API-based interfaces be leveraged to falsify data or enable scams?

So the calculus around all this is decidedly nontrivial.

But these are all solvable issues, given the will and resources to do so. User interface API guidelines and usage standards can be promulgated. Methodologies for the inspection and certification of such API-based UIs are relatively straightforward to postulate.

One thing is certain. These accessibility issues must be solved. The status quo is increasingly untenable, leaves enormous numbers of persons in the dark, and potentially invites both litigation and heavy-handed regulatory actions.

Universal "one size fits all" user interfaces are no longer acceptable from the major Web players.

Properly designed and managed, APIs provide us with a practical and potentially highly fruitful way forward.

Let's get to work.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 10:18 AM | Permalink

November 23, 2015

Hobby Drone Task Force Snookered by FAA

The report of the FAA-mandated "Drone Task Force" is out -- and it appears that the good folks who offered to help the FAA in its rush to regulate hobby drones have been pretty thoroughly snookered. Not their fault, but that's the obvious result.

Considering that the federal government wants to register aircraft hobbyists when it doesn't register gun owners, you'd think any action would require careful deliberation. But the report indicates otherwise:

Everybody admits this is a terribly rushed job, with key aspects that deserved careful consideration steamrolled over and ignored -- as required by the FAA's inane and nonsensical timeline.

The task force hopes that this doesn't turn into an identity and privacy nightmare. There's no way to validate the identity of registrants except perhaps (in some cases) at point-of-sale for fully-assembled units purchased commercially. For the many other ways these devices are assembled, registered names and addresses could easily be fabricated out of whole cloth -- or perhaps simply registered using the name and address of that neighbor down the street whom you despise!

The task force hopes that the FAA can protect the information in the databases -- names, addresses, and often more -- from abuse, misuse, broad "freedom of information" requests, leaks, etc. -- but there's no guarantee that the FAA is willing to do this or could legally accomplish it.

And that doesn't even cover black hat hackers attacking a government that has shown itself -- repeatedly -- to be utterly incompetent at protecting the personal data in its databases. How long before the entire hobby drone database, with all that personal information, is floating around the Net to be abused?

Ignorance of -- and outright disregard for -- the registration requirement will be vast. The task force hopes that the FAA can do something to lower the statutory FAA fine structure (currently often exceeding $25K -- aimed at penalizing drug traffickers and the like) so that ordinary hobbyists aren't wiped out by obviously inappropriate fines. But again, the task force admits that they don't know if the FAA would want to make such changes, or if it legally could do so.

And of course, the folks all this has ostensibly been aimed at catching -- irresponsible flyers, the theoretical cadre of "drone terrorists," and assorted other bad guys -- will, as noted above, either register falsely to evade identification (and/or to shift blame onto innocent parties), or simply won't register at all.

This is shaping up to be a quintessential example of USA regulatory processes at their very worst.

You can read the full report here:

Happy flying.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 03:13 PM | Permalink

October 22, 2015

YouTube Red to Creators: Join Us or Else?

The theme of queries in my inbox over the last 24 hours or so has definitely been related to Google's new "YouTube Red" offering.

In case you've been living in a cave without Internet service, Red is a new YouTube subscription tier aimed at providing ad-free videos and music. It's obviously a very important project to Google. There have been rumors about it for ages and it's been a long time coming.

There are a lot of questions and comments coming in: people questioning the idea of paying for what used to be free, asking about the reportedly "exclusive content" aspect of Red, wondering whether ad blockers render the entire concept of Red largely moot, and raising lots of other issues.

I'm not particularly concerned about the pricing right now, and as you probably already know I view ad blockers as essentially unethical (though I do agree that some Internet ad models have become incredibly obnoxious and intrusive -- a problem I prefer to see addressed from the ad creation and distribution side).

But the aspect of the YouTube Red queries being sent to me that most quickly caught my attention relates to existing monetizing YouTube creators -- YouTube Partners -- who feel that they were not adequately notified of this project and that they are being coerced into participating in Red.

I don't have all the facts yet and I'm trying to better understand the details.

The implication for now though seems to be that these loyal YouTube creators are being told by Google that if they are uninterested or unwilling to participate in the new Red program with its new terms, their existing YouTube videos will be changed to private status (and perhaps their entire YouTube channels as well) -- cutting them off from public viewing or participation.

A further implication appears to be that to proceed without participating in YouTube Red, for whatever reasons, these creators would have to start from scratch. In other words, apparently -- and I'm trying to confirm the accuracy of the claims I've received -- they cannot choose to take their channels public on a non-monetized or ordinary non-Partner monetized basis, and would have to start entirely new channels without any of their existing subscribers.

Loss of subscribers would be a very, very big deal for some of these creators who have spent years building up a following.

If this state of affairs is true, I do indeed find this aspect in particular to be quite disturbing.

Looking at it from what I presume is Google's point of view, YouTube wants to help ensure a reasonably uniform user experience, without confusion over why particular material would or would not appear with ads. I fully understand this.

On the other hand, if the situation actually does boil down to "agree to Red terms or you lose most of the work you've put into your YouTube channels up to now" -- well, that strikes me as fairly problematic both in an ethical sense and perhaps in a business sense as well, given the competition to YouTube (especially from, but not limited to, Facebook) that appears to be rapidly developing.

So overall, this is my sense of the situation at the moment, based on what I know right now. As I noted above, I'm trying to get more details and find out how much of what I'm hearing about this is accurate, and of course I'll pass along what I find out.

As I've said many times, I'm a tremendous fan of YouTube. I consider it to be arguably the most important entertainment and educational video resource on the planet. I want it to continue succeeding.

But I really do hope that this can be done in a manner that is ethically fair to everyone concerned.

Be seeing you.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 09:31 AM | Permalink

October 20, 2015

When Facebook's "Real Name" Policies Can Kill

A week ago in:

Social Media Abuse Stories to Shrivel Your Soul

I discussed the wide range of serious problems resulting from social media "real names" identity requirements, with Facebook being the most prominent and important perpetrator of the resulting damage to users.

As I continue to receive associated stories in response to:

Research Request: Seeking Facebook or Other "Real Name" Identity Policy Abuse Stories

there's one particular category related to this area that is clearly among the most horrific. The same sorts of terrifying details are being related to me again and again by different persons, and it doesn't take a genius to see the patterns in play.

We begin with a truism: There are vast numbers of frequent Facebook users who actually hate using it, or at the very least are ambivalent. They use FB for one reason and one reason only -- they've found it to be the only practical way to stay in touch with their peer groups or families who have become dependent on the FB model, and they face being effectively cut off from communications with them (or at least being accused of not caring about them) if they don't follow through with the FB grind almost every day. They feel stuck and trapped. Essentially, they are FB users under duress.

Now we combine this sad fact with another fact. Facebook's "real name" policy isn't actually a real name policy at all. It's a "names Facebook feels look pretty much real according to Facebook standards" policy.

Obviously most persons would be unwilling to hand over their driver's license, Social Security Number, and/or credit card information to establish a FB account. And absent that kind of verification, FB usually has no clue whether the "real looking name" you signed up with is actually your name or not.

A farce? Yep, we could definitely call it that.

And it's a very dangerous farce, indeed.

In fact, essentially the only time that Facebook demands actual proof of identity documents is when the name you've chosen to use on your account either doesn't look like what Facebook considers to be a real name -- or when the name you chose (that typically does appear real) is reported by some other user as potentially a pseudonym in violation of FB rules.

It's this latter case that terrifies many innocent users, that has them living in fear of exposure every day, that gives their adversaries tremendous power over them, and that could actually result in people being injured or killed.

Because one of the most frequent reasons for choosing a pseudonym on Facebook is the completely valid concern of already vulnerable and victimized persons who feel that they must continue to use FB to stay in contact with friends, families, or others, but for whom exposure of their real names could have devastating real-world consequences.

Estranged spouses, LGBT discrimination and other harassment victims, targets of sexual attacks, the prey of bullies -- the list goes on and on.

I've received reports of such vulnerable individuals being extorted by others, who have threatened to report their accounts to Facebook unless demands were met.

But irrespective of how or why such a person's profile is reported to Facebook's identity squad, the results are virtually always awful.

The targeted individuals are faced with an ultimate sort of "Hobson's choice" -- either be exposed on Facebook using their actual names, subjecting them to further online and in many cases offline attacks -- or stop using Facebook entirely, cutting themselves off from their support structures and other people they care about. In theory they could sometimes try to create a new pseudonym -- with all the hassles involved in reestablishing contacts and online relationships -- but they'd face the likely prospect of being right back in the same quagmire again in short order.

In practice, this is no choice at all for most persons in this position. They've been terrorized, and Facebook's policies not only set the stage for this abuse, but actively make it worse. Far worse.

As I've noted previously, law enforcement's usual response to these victims of intertwined online/offline violence is the epitome of callousness, generally recommending that victims simply stop using the Internet. A most ignorant and dangerous response.

In an ethical sense at least, it doesn't matter one damned iota what high percentage of Facebook and other social media users don't suffer from these sorts of abuses.

It's our job as the designers and maintainers of these systems to ensure, to the maximal extent possible, that they not become tools for the oppression and destruction of innocent, vulnerable persons.

We can either do this proactively and voluntarily, or wait for pandering politicians to make matters even worse by using these situations as another excuse to push their own damaging censorship regimes.

If we can't get this right, we will have no valid defenses at all to charges of callousness, hypocrisy, and worse.

And we'll have nobody to blame but ourselves.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 09:45 AM | Permalink

October 13, 2015

Social Media Abuse Stories to Shrivel Your Soul

Recently, in "Research Request: Seeking Facebook or Other 'Real Name' Identity Policy Abuse Stories," I requested that readers send me examples of social media abuses that have targeted them or persons they know, with an emphasis on "identity" issues such as those triggered by Facebook's "real name" policies.

These are continuing to pour in -- and please keep sending them -- but I wanted to provide a quick interim report.

Executive summary: Awful. Sickening. I knew some of these would be bad, but many are far worse than I had anticipated anyone being willing to send me. It seems very likely -- though obviously I couldn't swear to this under oath -- that these abuses have resulted in both suicides and homicides.

And if we as an industry don't get a handle on these issues, we ultimately risk draconian government crackdowns that will simply enable more government censorship and create even more problems.

Here are some of the more obvious observations I can derive from the messages I'm being sent (not in any particular order for now):

There is no longer any realistic dividing line between the online and offline worlds. Abuse taking place online can quickly spill offline, affecting targeted persons' physical lives directly and devastatingly.

Most forms of social media abuse are interconnected. That is, we cannot realistically demarcate between "identity policy" abuses (e.g., Facebook's "real name" requirements), and other forms of social media abuse (such as comment trolling, Gamergate, and far more).

Women are disproportionately targeted by social media abuse (as a male I find this fact to be personally offensive), but yes, many men are attacked as well.

A lack of genuinely useful and advanced moderation and abuse reporting/flagging tools -- and/or insufficient surfacing of these tools to users -- combined with "lackadaisical" (that's the most polite term I can use) attention to these reports in many cases, exacerbates existing problems.

Social media systems with strict "real name" requirements are especially problematic and can be extremely dangerous. This particularly relates to the 800-pound gorilla of Facebook in this context (Google+ wisely dropped its real name requirements quite a ways back).

Facebook's identity "real name" policies have been effectively "weaponized" by abusers. Many FB users who are already targeted and marginalized in their offline lives (domestic violence victims, LGBT persons, racial and religious minorities, and so many more) still need to use FB to stay in contact, but (in an attempt to protect themselves) are using "real appearing" pseudonyms instead of their real names. If one of their antagonists discovers their FB identity, it is not uncommon for the abuser to report the victim to FB (for example, as a twisted form of "revenge") in an attempt to expose them online and offline, and to destroy their ability to be safely online.

Social media firms' reactions to flagging and abuse complaints -- particularly in the case of Facebook -- can be erratic and seemingly arbitrary. A complaint targeting an innocent person might cause an account suspension, while one targeting a guilty party may be ignored. Innocent parties may be required by FB to jump through a series of humiliating and embarrassing hoops to try to regain access, including persons whose protective pseudonyms have been exposed and persons whose actual, real names have been falsely flagged as fakes. In some cases, Facebook actually suggests to affected users that they go to court and change their names legally to match FB's rules!

Governments in general (which tend to see censorship as a solution rather than the problem it actually is) and law enforcement in particular, usually make these matters worse, not better. The police tend to be clueless at best, and often explicitly "stop wasting our time" antagonistic. Victims of bullying and online threats to their offline lives who go to the police are usually informed that there's nothing to be done to help them, or victims are told to just "stop using the Internet" as a proposed (inane) solution.

We could go on with this list, but I'm sure you get the idea.

I'm forced to add that not all of the reaction to my research request on these topics has been positive. I've received some responses that attempt to minimize the entire controversy. They've told me I'm wasting my time. They've suggested that in a relative sense "so few" people are actually victimized by these problems (compared with the billions using these systems) that it would be ridiculous for the companies involved to make significant changes just to cater to a small group of actual victims and a much larger group of supposed malcontents.

I can't emphasize forcefully enough how categorically I reject that entire line of reasoning.

The inherent suggestion -- that because "relatively" few persons might be affected (and that still means vast numbers of warm bodies at these scales), the abysmal status quo could somehow be excused -- is entirely and completely unacceptable, untenable, and unethical.

It's true that we can't put precise numbers on the victims. After all, most of these vulnerable persons are already trying to protect themselves from exposure, being forced into essentially a "shadow" universe of social media identities. And we'd expect that most would also be understandably unwilling to discuss their situations with a stranger such as myself.

But many have been so willing, and I thank them for their trust. And I believe we can safely extrapolate to the reality that there are one hell of a lot of people being victimized by these issues.

And in fact, the numbers shouldn't really matter at all. How many deaths or lives otherwise ruined attributable at least significantly to social media abuses are tolerable? I would assert that the answer in an ethical sense at least is zero.

Does this mean we can quickly solve all these problems? Is there a magic wand?

Of course not. But that doesn't mean we shouldn't try. And remember, once politicians get their claws into these controversies, you can bet that the kinds of "solutions" they push will aim to further their agendas more than anything else.

These are problems we must ourselves work toward eliminating.

Obviously, education outreach must be a major part of this effort, especially to law enforcement and other government agencies.

But we also need to have a much better handle on these situations as an industry, because the problems are ultimately not isolated to single firms.

There need to be individuals and teams within the involved firms who not only are working internally on these issues, but who also participate broadly in related public communications efforts. These companies need to work together toward understanding the impacts of their ecosystems in these contexts -- a formal or informal industry consortium to specifically further such interactions would seem a useful concept for consideration.

Most of all, it's crucial that we as individuals -- not just those of us who have built and used the Internet for many years, but also users who have so far only barely gotten their feet wet on the Web -- recognize that it is intolerable for the Net to be turned into a tool for the destruction of lives, and that it's up to us to pave the path toward changes that will truly help the Net to flourish for the good of our societies, rather than allowing the Net (and ourselves) to be shackled by politically shortsighted restrictions.

Take care, all.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 10:08 AM | Permalink

October 08, 2015

Research Request: Seeking Facebook or Other "Real Name" Identity Policy Abuse Stories

Facebook's and potentially other social media firms' "real name" identity policies, such as those discussed in my earlier postings, continue to raise the specter of individuals being targeted and harmed as a result of these policies -- not just online but physically as well -- especially persons who are already vulnerable in various ways.

While we know of some specific examples where those so affected have been in a position to speak out about these abuses, it seems certain that vastly more individuals have been similarly negatively impacted, but have been understandably unable or unwilling to discuss their situations publicly.

I am attempting to gather as much information as possible about the scope of these issues as they have affected these individuals.

If you feel that you have personally been abused by Facebook or other Internet systems with "real name" identity requirements, I'd greatly appreciate your telling me as much about your situation as you feel comfortable doing. If you know of other persons so affected, please pass this request on to them if you feel that doing so would be appropriate.

Regardless of whether you identify yourself to me or not, the details of what you tell me will remain completely confidential unless you specifically indicate differently, and I will otherwise only use this data to develop aggregate statistics for summary public analysis and reports.

I would appreciate anything relevant to these issues that you can share with me via email at:

Thank you very much. Take care, all.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 04:10 PM | Permalink

Why Facebook's Dangerous "Real Names" Policy Is Like the NRA and Guns

This posting isn't about the monsters of the NRA, their minions, and their blood-soaked hands. But it is about facing reality, not ignoring data, and about harm caused to real people, especially those who are already marginalized in our societies. As we'll see, in these respects there is a disquieting parallel between Facebook and the National Rifle Association, which once glimpsed can be very difficult to put out of one's mind.

I've talked many times before about the dilemmas associated with social media "real name" identity regimes, which attempt to require that users be identified via their actual "real world" names rather than by nicknames, pseudonyms, or in various anonymous or pseudo-anonymous forms.

At present, Facebook is the globe's primary enforcer of a social media "real names" ecosystem. And despite a mountain of evidence that this policy does immense harm to individuals, they have held steadfastly to this model. Google+ initially launched with a real names policy as well, but one of Google's strengths is realizing when something isn't working and then adjusting course as indicated -- and Google+ no longer requires real names.

Facebook's intransigence though is reminiscent of -- oh, for example -- being faced with overwhelming evidence that as gun availability increases, gun violence increases, and then proposing even more guns as a solution to gun violence.

Facebook can claim that real names don't hurt people, and the NRA can claim that more guns are safer than fewer guns, but only sycophants will buy such bull in either case.

The original ostensible justifications for real names requirements have been pretty well shredded into tatters over the last few years.

It seems pretty clear that Facebook has hoped all along to leverage a "one name, real identity" model into becoming a kind of universal identity hub that users would broadly employ online, and in many cases offline as well. Facebook's founder and CEO Mark Zuckerberg famously said, "Having two identities for yourself is an example of a lack of integrity." This view is a necessary component of Facebook's ongoing hopes for real name monetization across the board.

Facebook's "universal identity" model thankfully hasn't really panned out for them so far, but they certainly moved to try to push their real names methodology into other spheres nonetheless.

One obvious example is the Facebook commenting system, widely used on third-party sites and requiring users to log in with their Facebook (real name) identities to post comments. A supposed rationale for this requirement was to reduce comment trolling and other comment abuse.

However, it quickly became clear that Facebook "real name" comments are a lose-lose proposition for everyone but Facebook.

There's no evidence that forcing people to post comments using their real identities reduces comment abuse at all. In fact, many trolls revel in the "honor" of their abusive trash being so identified.

Meanwhile, thoughtful users in sensitive situations have been unable to post what could have been useful and informative comments since Facebook's system insists on linking their work and personal postings to the same publicly viewable identity, making it problematic to comment negatively about an employer, or to admit that your child has HIV -- or that you live a frequently stigmatized lifestyle, for example. In some cases potentially life-threatening repercussions abound.

On top of all that, failures of these real name commenting systems give major third-party firms a convenient excuse to terminate existing comments completely across their sites, rather than making the effort to moderate comments effectively.

And much like the NRA's data-ignoring propaganda, the deeper you go with Facebook the more ludicrous everything gets.

Facebook's system for users to report other users for suspected "identity violations" would seem not particularly out of place in old East Germany under the Stasi -- "Show us your papers!"

Users target other users with falsified account identity violation claims, causing accounts to be closed until the targeted, innocent users can jump through hoops to prove themselves "pure" again to Facebook's identity gods. Many such impacted users are emotionally wrecked by this kind of completely unnecessary and unjustifiable abuse.

There are other related issues as well. In a new public letter, a large consortium of public interest groups is asking Facebook to change or ideally end its real names policies, and has suggested that in some parts of the world such policies may actually be illegal.

Yet this really isn't all about Facebook, even though Facebook is unarguably the "800-pound gorilla" in the online identity room.

In a world where (for better or worse) our Internet access and content increasingly funnels through a relative handful of large firms, and governments around the world are rapidly embracing censorship, it's more important than ever that individuals not be stuffed into "one size fits all" identity regimes.

We must not permit online anonymity and pseudo-anonymity -- both crucial aspects of legitimate free speech -- to become effectively banned or criminalized.

Mistakes made in these policy realms today could significantly and perhaps permanently ruin key aspects of the Internet going forward, and these are matters that must be dealt with logically and based on data, not emotions.

To do otherwise is basically like playing Russian roulette with the potentially unlimited wonders of the Net itself. And while that might enrich the gun merchants who don't care whose brains the bullets splatter, for the rest of us it would be a very sad outcome indeed.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 10:17 AM | Permalink

October 07, 2015

Europe's Big, Big, Big Lie About Data Privacy

By now you may have heard about a European court's new decision against the so-called data "Safe Harbour" (over here we'd spell it "Safe Harbor") framework, involving where Internet data for various users is physically stored.

You can easily search for the details that explain what this could affect, what it potentially means technically and legally, and generally why this dangerous decision is a matter of so much concern in so many quarters.

But here today I'm going to concentrate on what most of those articles you'll find won't be talking about -- what's actually, really, pushing the EU and various other countries around the world to demand that user data be kept in their own countries.

And you can bet your bottom dollar, euro, or ruble, it's not for the reasons they claim.

We have to begin with an unpleasant truth.

All countries spy. All of them. Every single one. No exceptions. They always have spied, they always will spy. Humans have been spying on each other since the caves.

And demands for "data localization" in reality have virtually nothing to do with privacy, and virtually everything to do with countries wanting to be sure that they can always spy on their own citizens and other residents.

Generally (but not always) intelligence and law enforcement services around the world draw some sort of (often muddy) line between domestic spying and spying on the activities of other countries. The rules and laws any given nation uses in-country can be different from their "beyond their borders" spying laws. In some countries, domestic spying is simply considered a normal police function, and in some nations the dividing line between law enforcement and intelligence agencies is nearly or completely nonexistent.

Even when regulations related to surveillance exist in an individual country, they are often officially ignored in many contexts, with nebulous "national security" concerns taking precedence.

Again it's important to emphasize: All countries spy. They spy to the maximal extent of their technical and financial abilities.

It has not been uncommon for nations to consider spying outside their borders to be a completely open game, not subject to any effective rules or limits. After all, those you're spying on out there aren't even your citizens!

But this is not to say that domestic spying isn't a major component of many countries' intelligence apparatus, and we're talking about entrenched domestic surveillance regimes in some countries outside the U.S. that make Edward Snowden's "revelations" about NSA look like a drop in the bucket.

Ironically, Snowden's new adopted home under the kindly influence of Czar Putin is one of the world's worst offenders in terms of domestic surveillance. China is another.

And coming up close behind is Europe.

The clues as to why Europe is now in this pitiful pantheon can be discerned clearly if you pay attention to what EU politicians and other EU officials have been saying publicly, even if we ignore the known revelations about their own spying activities.

Terrorism. It's on almost all their lips, almost all the time.

And this drives not only horrendous concepts like the EU (and now other countries) attempting to impose global censorship via "Right To Be Forgotten" (RTBF) regimes, but their demands for ever greater surveillance capabilities. Their rising tide of ostensible panic over strong encryption systems also plays into this same "rule by fear" mindset.

Which brings us back around to "safe harbour" and data localization.

The real reason you have countries demanding that the data of their citizens and other residents be stored in their own countries is to simplify access to that data by authorities in those countries, that is, for spying on their own people.

Notably, while U.S. authorities are indeed making a lot of noise trying to condemn strong encryption systems, you don't see serious calls for U.S. residents' data to be stored only on U.S. servers.

So what's the deal with the EU, and Russia, and various other countries about data localization? Clearly, having the servers in-country doesn't increase privacy -- it merely provides easier physical access to those servers and their associated networking infrastructures for law enforcement and intelligence operations.

True privacy protection isn't based on where data is located, but on the privacy policies and technologies of the firms maintaining that data, no matter where it physically resides.

So in many ways it's the EU/Russian politicos' worst data nightmare to have user data stored by companies like Google who won't just hand it over on any weak pretext, who are implementing ever stronger encryption systems, and who have incredibly strict rules and protections regarding access to user data -- and in particular regarding the legal processes required for access to that data by governments or other outside parties.

I'll note here once again that NSA or other U.S. intelligence agencies never had the ability to go rummaging around in Google servers as some of the early out-of-context clickbait claims of Snowden were inaccurately touted to imply. I've seen how Google handles these areas internally. I know many of the Googlers responsible for these systems and processes. If you don't want to believe Google or myself on this, that's your prerogative -- but you'd be wrong.

But in many other countries, law enforcement or intelligence services can get physical access to servers in those nations without any significant legal process at all -- just a nod and a wink, if that much.

That, dear friends, is what's actually going on. That's what exposes the big, big, big lie of data localization demands.

It's not about privacy. It's exactly the opposite. It's all about spying on your own people. It's about censorship. It's about control.

And like it or not, that is the sad and sordid truth.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 11:12 AM | Permalink

October 06, 2015

Google's New "Now on Tap" Brings Powerful Features -- and Interesting Privacy Issues

Google's new Android M ("Marshmallow") release includes a much anticipated capability called "Now on Tap" (NoT). I'm suddenly receiving a lot of privacy-related queries about it. Let's see if we can clear some of this up a bit.

Essentially, "NoT" permits you to ask Google to provide more information related to the current screen you're looking at in an Android app (even a non-Google one). It's similar in some respects to Google's "Now" cards, which can provide more information based on your web browsing history, searches, and data explicitly shared with Google by apps -- if those functions are enabled, of course.

NoT however takes another big step -- it actually can "read" what's on your screen being displayed by an Android app, and provide you with additional information based on that data.

Obviously to do this, data about what you're looking at needs to be sent to Google for analysis.

The way this is being done -- and this is very early discussion here based on the information I have at hand -- seems to be quite well thought out.

For example, you have to opt-in to NoT in the first place. Once you're in, data from your current screen is only sent to Google for NoT processing when you long-press the Home button on your device.

My current understanding is that both text and screenshots are sent for analysis by default if you have NoT enabled -- logical given today's display methodologies. In fact, buried in the settings you apparently can choose alternate providers of such services, and whether you want to send text only or text plus screenshots.

So clearly a lot of deep thinking went into this. And make no mistake about it, NoT is very important to Google, since it provides them with a way to participate in the otherwise largely "walled garden" ecosystem of non-Google apps.

Still, there are some issues here that will be of especial importance to anyone who works with sensitive data, particularly those constrained by specific legal requirements (e.g., attorney-client privilege, HIPAA, etc.).

Some of this is similar to considerations when using Google's optional "data saver" functions for Android and Chrome, which route most of your non-SSL data through Google servers to provide data compression functionalities.

Fundamentally, there are some users in some professions who simply cannot risk -- from a legal standpoint if nothing else -- sending data to third parties accidentally or otherwise unexpectedly.

In the case of NoT, for example, my understanding is that when a user long-presses Home to send, the entire current view is sent to Google, which can include data that is not currently visible to the user (e.g., materials that you'd need to scroll down to view). This could, for example, include financial information, personal and business emails, and so on. And while NoT reportedly includes a function to allow app developers to prevent its use (in financial apps, let's say), this requires that app developers actually compile this function into their apps.
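For a concrete sense of what such an opt-out can look like, here is a minimal sketch using the real Android M (API 23) assist hooks -- though whether these are precisely the mechanism the reports describe is my assumption, and the "banking app" framing is hypothetical. FLAG_SECURE keeps a window's content out of screenshots (including, as I understand it, those captured for the assist layer), while overriding onProvideAssistContent controls what structured data is offered:

```java
import android.app.Activity;
import android.app.assist.AssistContent;
import android.os.Bundle;
import android.view.WindowManager;

// Sketch of a hypothetical banking app's activity opting out of
// assist-layer capture on Android M (API 23).
public class SecureBankingActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // FLAG_SECURE marks the window's content as secure, excluding it
        // from screenshots -- which, per my understanding, also covers
        // the screenshot taken for the assist/Now on Tap layer.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                             WindowManager.LayoutParams.FLAG_SECURE);
    }

    @Override
    public void onProvideAssistContent(AssistContent outContent) {
        super.onProvideAssistContent(outContent);
        // Deliberately strip the structured data that would otherwise
        // be handed to the assist layer.
        outContent.setWebUri(null);
        outContent.setClipData(null);
    }
}
```

The catch, as noted above, is that this protection only exists where each app's developers chose to build it in.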

Another issue is that -- at the moment -- I am unclear as to all of the privacy policy aspects that apply to NoT: how long related data is retained, all the ways it can be used now or perhaps later, and so on. I don't expect problems in this area, but I am trying to get much more detailed information than I have currently seen in this context.

So overall, my quickie executive summary at this point suggests that Now on Tap will be a very useful feature for vast numbers of users -- but it won't be appropriate for everyone. Especially in corporate environments or other situations where sending data to third parties would be considered inappropriate irrespective of those third parties' privacy policies, due consideration of these various capabilities and issues is strongly recommended.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 04:36 PM | Permalink

September 30, 2015

How ISPs Will Royally Sucker the Internet, Thanks to Ad Blocking

Largely lost in the current controversies about users blocking ads from websites is a dirty little secret -- users are about to be played for suckers by the dominant ISPs around the world, and ad blocking will be the "camel's nose under the tent" that makes these ISPs' ultimate wet dreams of total control over Internet content come true at last.

There have been a number of clues already, with one particularly notable new one today.

The big red flashing warning light is the fact that in some cases it's possible for firms to buy their way past ad blockers -- demonstrating that what's really going on is that these ad blocking firms want a piece of the advertising pie -- even as they wax poetic about how much they hate -- simply hate! -- all those ads.

But these guys are just clowns compared to the big boys -- the dominant ISPs around the world.

And those ISPs have for so very long wanted their slices of that same pie. They want the money coming, going, in and out -- as SBC's CEO Edward Whitacre noted back in 2005 during their takeover of AT&T, when he famously asked "Why should [Internet sites] be allowed to use [my] pipes for free?" -- conveniently ignoring the fact that his subscribers were already paying him for Internet access to websites.

Now -- today -- ISPs sense that it's finally time to plunge their fangs into the Net's jugular, to really get the blood gushing out into deep scarlet pools of money.

Mobile operator Digicel announced today that they intend to block advertising (except for some local advertisers) on their networks across the South Pacific and Caribbean, unless -- you guessed it -- websites pay them to let their ads through.

And while their claimed targets are Google, Facebook, Yahoo, and the other major players, you know that it will never stop there, and ultimately millions of small businesses and other small websites -- many of them one person operations, often not even commercial -- who depend on those ads will be decimated.

Germany's Deutsche Telekom is known to have been "toying" with the same concept, and you can be sure that many other ISPs are as well. They're not interested in "protecting" users from ads -- they're all about control and extorting money from both sides of the game -- their subscribers and the sites those subscribers need to access.

Where this all likely leads is unfortunately very clear. No crystal ball required.

Some sites will block ISPs who try this game. Broad use of SSL will limit some of these ISPs' more rudimentary efforts to manipulate the data flows between sites and subscribers. Technology will advance quickly to move ads "inline" to content servers, making them much more difficult to effectively block.

But right now, firms such as Israeli startup Shine Technologies are moving aggressively to promote carrier level blocking systems to feed ISP greed.

Yet this isn't the worst of it. Because once ISPs have a taste of the control, power, and money - money - money that comes with micromanagement of their subscribers' Internet access and usage, the next step is obvious, especially in countries where strong net neutrality protections are not in place or are at risk of being repealed with the next administration.

Perhaps you remember a joke ad that was floating around some years ago, showing a purported price list for a future ISP -- with different prices depending on which Internet sites you wanted to access. Pay X dollars more a month to your ISP if you want to be permitted to reach Google. Pay Y dollars more a month for Facebook access. Another Z dollars a month for permission from your ISP to connect to Netflix. And so on.

It seemed pretty funny at the time.

It's not so funny now -- because it's the next logical step after ISP attempts at ad blocking. And in fact, blocking entire sites is usually technically far easier than trying to block only the ads related to particular sites -- most users won't know about workarounds like proxies and VPNs, and the ISPs can try to block those as well.

These are the kinds of nightmarish outcomes we can look forward to as a consequence of tampering with the Internet's original end-to-end model, especially at the ISP level.

It's a road to even more riches for the dominant ISPs, ever higher prices for their subscribers, and the ruin of vast numbers of websites, especially smaller ones with limited income sources.

It's the path to an Internet that closely resembles the vast wasteland that is cable TV today. And it's no coincidence that the dominant ISPs, frantic over fears of their control being subverted by so-called cable TV "cord cutters" moving to the Internet alone, now hope to remake the Internet itself in the image of cable TV's most hideous, anti-consumer attributes.

Nope, you don't need a Tarot deck or a Ouija board to see the future of the Internet these days, if the current patterns remain on their present course.

Whether or not our Internet actually remains on this grievous path is of course ultimately in our hands.

But are we up to the challenge? Or are we suckers, after all?

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 03:47 PM | Permalink

September 28, 2015

Law Enforcement's Love/Hate Relationship with Cloud Auto Backup

There's a story going around today regarding an individual who was arrested and charged with assaulting a police officer when authorities arrived over a noise complaint. But cellphone video recorded by the arrestee convinced a judge that police had assaulted him, not the other way around. What's particularly unusual in this case is that the arrestee's cellphone had "mysteriously" vanished at the police station before any video was discovered.

So how was the exonerating video ultimately resurrected? It turns out it had been saved to Google servers via the phone's enabled auto backup system. The phone's physical vanishing thus did not prevent the video from reappearing to help avert a serious miscarriage of justice.

Lawyers and law enforcement personnel around the world are probably considering this story carefully tonight, and they're likely to realize that such automatic backup capabilities may be double-edged swords.

On one hand, abusive cops can't depend on destroying evidence by making cellphones disappear or be "accidentally" crushed under a boot. Evidence favorable to the defendant might still be up on cloud servers, ready to reappear at any time.

But this also means that we can likely expect to see increasing numbers of subpoenas triggered by law enforcement, lawyers, government agencies, and other interested parties wanting to go on fishing expeditions through suspects' cloud accounts, in the hopes of finding incriminating photographic or video evidence that might have been auto-backed up without the knowledge or realization of the suspects.

While few would argue that guilty suspects should go free, there is more at stake here.

The mere possibility of such fishing expeditions may dissuade many persons from enabling photo/video auto backup systems in the first place -- not because they plan to commit crimes, but based on relatively vague privacy concerns. Even if the vast majority of honest persons have no realistic chance of being targeted by the government for such a cloud search, this emotional factor is likely to be real for many innocent persons nonetheless.

And of course, if you've turned off auto backup due to such concerns, video or other data that might otherwise have been available to save the day at some point in the future, instead may not be available at all.

Adding to the complexities of this calculus is the fact that most uploaded videos or photos on these advanced systems are not subject to the kind of strong end-to-end encryption that has been the focus of ongoing controversies regarding proposed "back door" access to encrypted user data by authorities.

Obviously, for photos or videos to be processed in the typical manner by service providers, they will be stored in the clear -- not encrypted -- at various stages of the service ecosystem, at least temporarily.

What this all amounts to is that we're on the cusp of a brave new world when it comes to photos and videos automatically being protected in the cloud, and sometimes being unexpectedly available as a result.

The issues involved will be complicated both technically and legally, and we have only really begun to consider their ramifications, especially in relationship to escalating demands by authorities for access to user data of all kinds in many contexts.

Fasten your seatbelts.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 08:05 PM | Permalink
