October 22, 2015

YouTube Red to Creators: Join Us or Else?

The theme of queries in my inbox over the last 24 hours or so has definitely been related to Google's new "YouTube Red" offering.

In case you've been living in a cave without Internet service, Red is a new YouTube subscription tier aimed at providing ad-free videos and music. It's obviously a very important project to Google. There have been rumors about it for ages and it's been a long time coming.

There are a lot of questions and comments coming in: people questioning the idea of paying for what used to be free, asking about the reported "exclusive content" aspect of Red, wondering whether ad blockers render the entire concept of Red largely moot, and raising lots of other issues.

I'm not particularly concerned about the pricing right now, and as you probably already know I view ad blockers as essentially unethical (though I do agree that some Internet ad models have become incredibly obnoxious and intrusive -- a problem I prefer to see addressed from the ad creation and distribution side).

But the aspect of YouTube Red queries being sent to me that quickly caught my attention relates to existing monetizing YouTube creators -- YouTube Partners -- who feel that they were not adequately notified of this project and that they are being coerced into participating in Red.

I don't have all the facts yet and I'm trying to better understand the details.

The implication for now though seems to be that these loyal YouTube creators are being told by Google that if they are uninterested or unwilling to participate in the new Red program with its new terms, their existing YouTube videos will be changed to private status (and perhaps their entire YouTube channels as well) -- cutting them off from public viewing or participation.

A further implication appears to be that to proceed without participating in YouTube Red for whatever reasons, these creators would have to start from scratch. In other words, apparently -- and I'm trying to confirm the accuracy of the claims I've received -- they cannot choose to take their channels public on a non-monetized basis or under ordinary non-Partner monetization, and would instead have to start entirely new channels without any of their existing subscribers.

Loss of subscribers would be a very, very big deal for some of these creators who have spent years building up a following.

If this state of affairs is true, I do indeed find this aspect in particular to be quite disturbing.

Looking at it from what I presume is Google's point of view, YouTube wants to help ensure a reasonably uniform user experience, without confusion over why particular material would or would not appear with ads. I fully understand this.

On the other hand, if the situation actually does boil down to "agree to Red terms or you lose most of the work you've put into your YouTube channels up to now" -- well, that strikes me as fairly problematic both in an ethical sense and perhaps in a business sense as well, given the competition to YouTube (especially from, but not limited to, Facebook) that appears to be developing rapidly.

So overall, this is my sense of the situation at the moment, based on what I know right now. As I noted above, I'm trying to get more details and find out how much of what I'm hearing about this is accurate, and of course I'll pass along what I find out.

As I've said many times, I'm a tremendous fan of YouTube. I consider it to be arguably the most important entertainment and educational video resource on the planet. I want it to continue succeeding.

But I really do hope that this can be done in a manner that is ethically fair to everyone concerned.

Be seeing you.

I have consulted to Google, but I am not currently doing so.
All opinions expressed here are mine alone.

Posted by Lauren at 09:31 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein

October 20, 2015

When Facebook's "Real Name" Policies Can Kill

A week ago in:

Social Media Abuse Stories to Shrivel Your Soul

I discussed the wide range of serious problems resulting from social media "real names" identity requirements, with Facebook being the most prominent and important perpetrator of the resulting damage to users.

As I continue to receive associated stories in response to:

Research Request: Seeking Facebook or Other "Real Name" Identity Policy Abuse Stories

there's one particular category related to this area that is clearly among the most horrific. The same sorts of terrifying details are being related to me again and again by different persons, and it doesn't take a genius to see the patterns in play.

We begin with a truism: There are vast numbers of frequent Facebook users who actually hate using it, or at the very least are ambivalent. They use FB for one reason and one reason only -- they've found it to be the only practical way to stay in touch with their peer groups or families who have become dependent on the FB model, and they face being effectively cut off from communications with them (or at least being accused of not caring about them) if they don't follow through with the FB grind almost every day. They feel stuck and trapped. Essentially, they are FB users under duress.

Now we combine this sad fact with another fact. Facebook's "real name" policy isn't actually a real name policy at all. It's a "names Facebook feels look pretty much real according to Facebook standards" policy.

Obviously most persons would be unwilling to hand over their driver's license, Social Security Number, and/or credit card information to establish a FB account. And absent that kind of verification, FB usually has no clue whether the "real looking name" you signed up with is actually your name or not.

A farce? Yep, we could definitely call it that.

And it's a very dangerous farce, indeed.

In fact, essentially the only time that Facebook demands actual proof of identity documents is when the name you've chosen to use on your account either doesn't look like what Facebook considers to be a real name -- or when the name you chose (that typically does appear real) is reported by some other user as potentially a pseudonym in violation of FB rules.

It's this latter case that terrifies many innocent users, that has them living in fear of exposure every day, that gives their adversaries tremendous power over them, and that could actually result in people being injured or killed.

Because one of the most frequent reasons for choosing pseudonyms on Facebook is the completely valid concerns of already vulnerable and victimized persons who feel that they must continue to use FB to stay in contact with friends, families, or others, but for whom exposure of their real names could have devastating real-world consequences.

Estranged spouses, LGBT discrimination and other harassment victims, targets of sexual attacks, the prey of bullies -- the list goes on and on.

I've received reports of such vulnerable individuals being extorted by others, who have threatened to report their accounts to Facebook unless demands were met.

But irrespective of how or why such a person's profile is reported to Facebook's identity squad, the results are virtually always awful.

The targeted individuals are faced with an ultimate sort of "Hobson's choice" -- either use their actual names on Facebook, exposing themselves to further online and in many cases offline attacks -- or stop using Facebook entirely, cutting themselves off from their support structures and other people they care about. In theory they could sometimes try to create a new pseudonym -- with all the hassles involved in reestablishing contacts and online relationships -- but they would face the likely prospect of being right back in the same quagmire in short order.

In practice, this is no choice at all for most persons in this position. They've been terrorized, and Facebook's policies not only set the stage for this abuse, but actively make it worse. Far worse.

As I've noted previously, law enforcement's usual response to these victims of intertwined online/offline violence is the epitome of callousness, generally recommending that victims simply stop using the Internet. A most ignorant and dangerous response.

In an ethical sense at least, it doesn't matter one damned iota what high percentage of Facebook and other social media users don't suffer from these sorts of abuses.

It's our job as the designers and maintainers of these systems to ensure, to the maximal extent possible, that they not become tools for the oppression and destruction of innocent, vulnerable persons.

We can either do this proactively and voluntarily, or wait for pandering politicians to make matters even worse by using these situations as another excuse to push their own damaging censorship regimes.

If we can't get this right, we will have no valid defenses at all to charges of callousness, hypocrisy, and worse.

And we'll have nobody to blame but ourselves.


Posted by Lauren at 09:45 AM | Permalink

October 13, 2015

Social Media Abuse Stories to Shrivel Your Soul

Recently in "Research Request: Seeking Facebook or Other 'Real Name' Identity Policy Abuse Stories" -- https://lauren.vortex.com/archive/001131.html -- I requested that readers send me examples of social media abuses that have targeted themselves or persons they know, with an emphasis on "identity" issues such as those triggered by Facebook's "real name" policies.

These are continuing to pour in -- and please keep sending them -- but I wanted to provide a quick interim report.

Executive summary: Awful. Sickening. I knew some of these would be bad, but many are far worse than I had anticipated anyone being willing to send me. It seems very likely -- though obviously I couldn't swear to this under oath -- that these abuses have resulted in both suicides and homicides.

And if we as an industry don't get a handle on these issues, we ultimately risk draconian government crackdowns that will simply enable more government censorship and create even more problems.

Here are some of the more obvious observations I can derive from the messages I'm being sent (not in any particular order for now):

There is no longer any realistic dividing line between the online and offline worlds. Abuse taking place online can quickly spill offline, affecting targeted persons' physical lives directly and devastatingly.

Most forms of social media abuse are interconnected. That is, we cannot realistically demarcate between "identity policy" abuses (e.g., Facebook's "real name" requirements), and other forms of social media abuse (such as comment trolling, Gamergate, and far more).

Women are disproportionately targeted by social media abuse (as a male I find this fact to be personally offensive), but yes, many men are attacked as well.

A lack of realistically useful and advanced moderation and abuse report/flagging tools, and/or insufficient surfacing of these tools to users, combined with "lackadaisical" (that's the most polite term I can use) attention to these reports in many cases, exacerbates existing problems.

Social media systems with strict "real name" requirements are especially problematic and can be extremely dangerous. This particularly relates to the 800-pound gorilla of Facebook in this context (Google+ wisely dropped its real name requirements quite a ways back).

Facebook's identity "real name" policies have been effectively "weaponized" by abusers. Many FB users who are already targeted and marginalized in their offline lives (domestic violence victims, LGBT persons, racial and religious minorities, and so many more) still need to use FB to stay in contact, but (in an attempt to protect themselves) are using "real appearing" pseudonyms instead of their real names. If one of their antagonists discovers their FB identity, it is not uncommon for the abuser to report the victim to FB (for example, as a twisted form of "revenge") in an attempt to expose them online and offline, and to destroy their ability to be safely online.

Social media firm reactions to flagging and abuse complaints -- particularly in the case of Facebook -- can be erratic and seemingly arbitrary. A complaint targeting an innocent person may result in an account suspension, while one targeting a guilty party may be ignored. Innocent parties may be required by FB to jump through a series of humiliating and embarrassing hoops to try to regain access, including persons whose protective pseudonyms have been exposed and persons whose actual, real names have been falsely flagged as fakes. In some cases, Facebook actually suggests to affected users that they go to court and change their names legally to match FB's rules!

Governments in general (which tend to see censorship as a solution rather than the problem it actually is) and law enforcement in particular, usually make these matters worse, not better. The police tend to be clueless at best, and often explicitly "stop wasting our time" antagonistic. Victims of bullying and online threats to their offline lives who go to the police are usually informed that there's nothing to be done to help them, or victims are told to just "stop using the Internet" as a proposed (inane) solution.

We could go on with this list, but I'm sure you get the idea.

I'm forced to add that not all of the reaction to my research request on these topics has been positive. I've received some responses that attempt to minimize the entire controversy. They've told me I'm wasting my time. They've suggested that in a relative sense "so few" people are actually victimized by these problems (compared with the billions using these systems) that it would be ridiculous for the companies involved to make significant changes just to cater to a small group of actual victims and a much larger group of supposed malcontents.

I cannot emphasize this enough: I categorically reject that entire line of reasoning.

The inherent suggestion -- that because "relatively" few persons might be affected (and that still means vast numbers of warm bodies at these scales) the abysmal status quo could somehow be excused -- is entirely and completely unacceptable, untenable, and unethical.

It's true that we can't put precise numbers on the victims. After all, most of these vulnerable persons are already trying to protect themselves from exposure, being forced into essentially a "shadow" universe of social media identities. And we'd expect that most would also be understandably unwilling to discuss their situations with a stranger such as myself.

But many have been so willing, and I thank them for their trust. And I believe we can safely extrapolate to the reality that there are one hell of a lot of people being victimized by these issues.

And in fact, the numbers shouldn't really matter at all. How many deaths -- or lives otherwise ruined -- attributable at least in significant part to social media abuse are tolerable? I would assert that the answer, in an ethical sense at least, is zero.

Does this mean we can quickly solve all these problems? Is there a magic wand?

Of course not. But that doesn't mean we shouldn't try. And remember, once politicians get their claws into these controversies, you can bet that the kinds of "solutions" they push will aim to further their agendas more than anything else.

These are problems we must ourselves work toward eliminating.

Obviously, education outreach must be a major part of this effort, especially to law enforcement and other government agencies.

But we also need to have a much better handle on these situations as an industry, because the problems are ultimately not isolated to single firms.

There need to be individuals and teams within the involved firms who not only are working internally on these issues, but who also participate broadly in related public communications efforts. These companies need to work together toward understanding the impacts of their ecosystems in these contexts -- a formal or informal industry consortium to specifically further such interactions would seem a useful concept for consideration.

Most of all, it's crucial that we as individuals -- not just those of us who have built and used the Internet for many years, but also users who have so far only barely gotten their feet wet on the Web -- recognize that it is intolerable for the Net to be turned into a tool for the destruction of lives, and that it's up to us to pave the path toward changes that will truly help the Net to flourish for the good of our societies, rather than allowing the Net (and ourselves) to be shackled by politically shortsighted restrictions.

Take care, all.


Posted by Lauren at 10:08 AM | Permalink

October 08, 2015

Research Request: Seeking Facebook or Other "Real Name" Identity Policy Abuse Stories

Facebook's and potentially other social media "real name" identity policies, such as those discussed in:


continue to raise the specter of individuals being targeted and harmed as a result of these policies -- not just online but physically as well -- especially persons who are already vulnerable in various ways.

While we know of some specific examples where those so affected have been in a position to speak out about these abuses, it seems certain that vastly more individuals have been similarly negatively impacted, but have been understandably unable or unwilling to discuss their situations publicly.

I am attempting to gather as much information as possible about the scope of these issues as they have affected these individuals.

If you feel that you have personally been abused by Facebook or other Internet systems with "real name" identity requirements, I'd greatly appreciate your telling me as much about your situation as you feel comfortable doing. If you know of other persons so affected, please pass this request on to them if you feel that doing so would be appropriate.

Regardless of whether you identify yourself to me or not, the details of what you tell me will remain completely confidential unless you specifically indicate differently, and I will otherwise only use this data to develop aggregate statistics for summary public analysis and reports.

I would appreciate anything relevant to these issues that you can share with me via email at:


Thank you very much. Take care, all.


Posted by Lauren at 04:10 PM | Permalink

Why Facebook's Dangerous "Real Names" Policy Is Like the NRA and Guns

This posting isn't about the monsters of the NRA, their minions, and their blood-soaked hands. But it is about facing reality, not ignoring data, and about harm caused to real people, especially those who are already marginalized in our societies. As we'll see, in these respects there is a disquieting parallel between Facebook and the National Rifle Association, which once glimpsed can be very difficult to put out of one's mind.

I've talked many times before about the dilemmas associated with social media "real name" identity regimes, which attempt to require that users be identified via their actual "real world" names rather than by nicknames, pseudonyms, or in various anonymous or pseudo-anonymous forms.

At present, Facebook is the globe's primary enforcer of a social media "real names" ecosystem. And despite a mountain of evidence that this policy does immense harm to individuals, they have held steadfastly to this model. Google+ initially launched with a real names policy as well, but one of Google's strengths is realizing when something isn't working and then adjusting course as indicated -- and Google+ no longer requires real names.

Facebook's intransigence though is reminiscent of -- oh, for example -- being faced with overwhelming evidence that as gun availability increases gun violence increases, and then proposing even more guns as a solution to gun violence.

Facebook can claim that real names don't hurt people, and the NRA can claim that more guns are safer than fewer guns, but only sycophants will buy such bull in either case.

The original ostensible justifications for real names requirements have been pretty thoroughly shredded over the last few years.

It has seemed pretty clear all along that Facebook hoped to leverage a "one name, real identity" model into becoming a kind of universal identity hub that users would broadly employ both online and, in many cases, offline as well. Facebook's founder and CEO Mark Zuckerberg famously said, "Having two identities for yourself is an example of a lack of integrity." This view is a necessary component of Facebook's ongoing hopes for real name monetization across the board.

Facebook's "universal identity" model thankfully hasn't really panned out for them so far, but they have certainly moved to try to push their real names methodology into other spheres nonetheless.

One obvious example is the Facebook commenting system, widely used on third-party sites and requiring users to login with their Facebook (real name) identities to post comments. A supposed rationale for this requirement was to reduce comment trolling and other comment abuse.

However, it quickly became clear that Facebook "real name" comments are a lose-lose proposition for everyone but Facebook.

There's no evidence that forcing people to post comments using their real identities reduces comment abuse at all. In fact, many trolls revel in the "honor" of their abusive trash being so identified.

Meanwhile, thoughtful users in sensitive situations have been unable to post what could have been useful and informative comments since Facebook's system insists on linking their work and personal postings to the same publicly viewable identity, making it problematic to comment negatively about an employer, or to admit that your child has HIV -- or that you live a frequently stigmatized lifestyle, for example. In some cases potentially life-threatening repercussions abound.

On top of all that, failures of these real name commenting systems give major third-party firms a convenient excuse to shut down comments entirely across their sites, rather than making the effort to moderate comments effectively.

And much like the NRA's data-ignoring propaganda, the deeper you go with Facebook the more ludicrous everything gets.

Facebook's system for users to report other users for suspected "identity violations" would seem not particularly out of place in old East Germany under the Stasi -- "Show us your papers!"

Users target other users with falsified account identity violation claims, causing accounts to be closed until the targeted, innocent users can jump through hoops to prove themselves "pure" again to Facebook's identity gods. Many such impacted users are emotionally wrecked by this kind of completely unnecessary and unjustifiable abuse.

There are other related issues as well. In a new public letter, a large consortium of public interest groups is asking Facebook to change or ideally end its real names policies, and has suggested that in some parts of the world such policies may actually be illegal.

Yet this really isn't all about Facebook, even though Facebook is unarguably the "800-pound gorilla" in the online identity room.

In a world where (for better or worse) our Internet access and content increasingly funnels through a relative handful of large firms, and governments around the world are rapidly embracing censorship, it's more important than ever that individuals not be stuffed into "one size fits all" identity regimes.

We must not permit online anonymity and pseudo-anonymity -- both crucial aspects of legitimate free speech -- to become effectively banned or criminalized.

Mistakes made in these policy realms today could significantly and perhaps permanently ruin key aspects of the Internet going forward, and these are matters that must be dealt with logically and based on data, not emotions.

To do otherwise is basically like playing Russian roulette with the potentially unlimited wonders of the Net itself. And while that might enrich the gun merchants who don't care whose brains the bullets splatter, for the rest of us it would be a very sad outcome indeed.


Posted by Lauren at 10:17 AM | Permalink

October 07, 2015

Europe's Big, Big, Big Lie About Data Privacy

By now you may have heard about a European court's new decision against the so-called data "Safe Harbour" (over here we'd spell it "Safe Harbor") framework, involving where various Internet data for various users is physically stored.

You can easily search for the details that explain what this could affect, what it potentially means technically and legally, and generally why this dangerous decision is a matter of so much concern in so many quarters.

But here today I'm going to concentrate on what most of those articles you'll find won't be talking about -- what's actually, really, pushing the EU and various other countries around the world to demand that user data be kept in their own countries.

And you can bet your bottom dollar, euro, or ruble, it's not for the reasons they claim.

We have to begin with an unpleasant truth.

All countries spy. All of them. Every single one. No exceptions. They always have spied, they always will spy. Humans have been spying on each other since the caves.

And demands for "data localization" in reality have virtually nothing to do with privacy, and virtually everything to do with countries wanting to be sure that they can always spy on their own citizens and other residents.

Generally (but not always) intelligence and law enforcement services around the world draw some sort of (often muddy) line between domestic spying and spying on the activities of other countries. The rules and laws any given nation uses in-country can be different from their "beyond their borders" spying laws. In some countries, domestic spying is simply considered a normal police function, and in some nations the dividing line between law enforcement and intelligence agencies is nearly or completely nonexistent.

Even when regulations related to surveillance exist in an individual country, they are often officially ignored in many contexts, with nebulous "national security" concerns taking precedence.

Again it's important to emphasize: All countries spy. They spy to the maximal extent of their technical and financial abilities.

It has not been uncommon for nations to consider spying outside their borders to be a completely open game, not subject to any effective rules or limits. After all, those you're spying on out there aren't even your citizens!

But this is not to say that domestic spying isn't a major component of many countries' intelligence apparatus, and we're talking about entrenched domestic surveillance regimes in some countries outside the U.S. that make Edward Snowden's "revelations" about NSA look like a drop in the bucket.

Ironically, Snowden's new adopted home under the kindly influence of Czar Putin is one of the world's worst offenders in terms of domestic surveillance. China is another.

And coming up close behind is Europe.

The clues as to why Europe is now in this pitiful pantheon can be discerned clearly if you pay attention to what EU politicians and other EU officials have been saying publicly, even if we ignore the known revelations about their own spying activities.

Terrorism. It's on almost all their lips, almost all the time.

And this drives not only horrendous concepts like the EU (and now other countries) attempting to impose global censorship via "Right To Be Forgotten" (RTBF) regimes, but their demands for ever greater surveillance capabilities. Their rising tide of ostensible panic over strong encryption systems also plays into this same "rule by fear" mindset.

Which brings us back around to "safe harbour" and data localization.

The real reason you have countries demanding that the data of their citizens and other residents be stored in their own countries is to simplify access to that data by authorities in those countries, that is, for spying on their own people.

Notably, while U.S. authorities are indeed making a lot of noise trying to condemn strong encryption systems, you don't see serious calls for U.S. residents' data to be stored only on U.S. servers.

So what's the deal with the EU, and Russia, and various other countries about data localization? Clearly, having the servers in-country doesn't increase privacy -- it merely provides easier physical access to those servers and their associated networking infrastructures for law enforcement and intelligence operations.

True privacy protection isn't based on where data is located, but on the privacy policies and technologies of the firms maintaining that data, no matter where it physically resides.

So in many ways it's the EU/Russian politicos' worst data nightmare to have user data stored by companies like Google who won't just hand it over on any weak pretext, who are implementing ever stronger encryption systems, and who have incredibly strict rules and protections regarding access to user data -- and in particular regarding the legal processes required for access to that data by governments or other outside parties.

I'll note here once again that NSA and other U.S. intelligence agencies never had the ability to go rummaging around in Google servers, as some of the early out-of-context clickbait claims from the Snowden documents inaccurately implied. I've seen how Google handles these areas internally. I know many of the Googlers responsible for these systems and processes. If you don't want to believe Google or myself on this, that's your prerogative -- but you'd be wrong.

But in many other countries, law enforcement or intelligence services can get physical access to servers in those nations without any significant legal process at all -- just a nod and a wink, if that much.

That, dear friends, is what's actually going on. That's what exposes the big, big, big lie of data localization demands.

It's not about privacy. It's exactly the opposite. It's all about spying on your own people. It's about censorship. It's about control.

And like it or not, that is the sad and sordid truth.


Posted by Lauren at 11:12 AM | Permalink

October 06, 2015

Google's New "Now on Tap" Brings Powerful Features -- and Interesting Privacy Issues

Android M ("Marshmallow") includes a much-anticipated new Google capability called "Now on Tap" (NoT), and I'm suddenly receiving a lot of privacy-related queries about it. Let's see if we can clear some of this up a bit.

Essentially, "NoT" permits the user to ask Google to provide more information related to the current screen you're looking at in an Android (even non-Google) app. It's similar in some respects to Google's "Now" cards that can (if enabled) provide more information based on your web browsing history, searches, and data explicitly shared with Google by apps -- if these functions are also enabled, of course.

NoT however takes another big step -- it actually can "read" what's on your screen being displayed by an Android app, and provide you with additional information based on that data.

Obviously to do this, data about what you're looking at needs to be sent to Google for analysis.

The way this is being done -- and this is very early discussion here based on the information I have at hand -- seems to be quite well thought out.

For example, you have to opt-in to NoT in the first place. Once you're in, data from your current screen is only sent to Google for NoT processing when you long-press the Home button on your device.

My current understanding is that if you have NoT enabled, both text and screenshots are sent for analysis by default -- logical given today's display methodologies. In fact, buried in the settings you can apparently choose alternate providers of such services, and whether to send text only or text plus screenshots.

So clearly a lot of deep thinking went into this. And make no mistake about it, NoT is very important to Google, since it provides them with a way to participate in the otherwise largely "walled garden" ecosystem of non-Google apps.

Still, there are some issues here that will be of special importance to anyone who works with sensitive data, particularly if you're constrained by specific legal requirements (e.g., lawyers, HIPAA, etc.).

Some of this is similar to considerations when using Google's optional "data saver" functions for Android and Chrome, which route most of your non-SSL data through Google servers to provide data compression functionalities.

Fundamentally, there are some users in some professions who simply cannot risk -- from a legal standpoint if nothing else -- sending data to third parties accidentally or otherwise unexpectedly.

In the case of NoT, for example, my understanding is that when a user long-presses Home to send, the entire current view is sent to Google, which can include data that is not currently visible to the user (e.g., material you'd need to scroll down to view). This could, for example, include financial information, personal and business emails, and so on. And while NoT reportedly includes a function allowing app developers to prevent its use (in financial apps, let's say), this requires that app developers actually compile in this function.

Another issue is that -- at the moment -- I am unclear as to all of the privacy policy aspects that apply to NoT -- how long is related data retained, what are all the ways it can be used now or perhaps later, etc. I don't expect problems in this area, but I am trying to get much more detailed information than I currently have seen in this context.

So overall, my quickie executive summary at this point suggests that Now on Tap will be a very useful feature for vast numbers of users -- but it won't be appropriate for everyone. Especially in corporate environments or other situations where sending data to third parties would be considered inappropriate irrespective of those third-parties' privacy policies, due consideration of these various capabilities and issues would be strongly recommended.


Posted by Lauren at 04:36 PM | Permalink