February 11, 2016

Does Google Hate Old People?

No. Google doesn't hate old people. I know Google well enough to be pretty damned sure about that.

Is Google "indifferent" to old people? Does Google simply not appreciate, or somehow devalue, the needs of older users?

Those are much tougher calls.

I've written a lot in the past about accessibility and user interfaces. And today I'm feeling pretty frustrated about these topics. So if some sort of noxious green fluid starts to bubble out from your screen, I apologize in advance.

What is old, anyway? Or we can use the currently more popular term "elderly" if you prefer -- six of one, half a dozen of the other, really.

There are a bunch of references to "not wanting to get old" in the lyrics of famous rock stars who are now themselves of rather advanced ages. And we hear all the time that "50 is the new 30" or "70 is the new 50" or ... whatever.

The bottom line is that we either age or die.

And the popular view of "elderly" people sitting around staring at the walls -- and so rather easily ignored -- is increasingly a false one. More and more we find active users of computers and Internet services well into their 80s and 90s. In email and social media, many of them are clearly far more intelligent and coherent than large swaths of users a third their age.

That's not to say these older users don't face issues that younger persons don't. Vision and motor skill problems are common. So is the specter of memory loss (which actually begins by the time we reach age 20, then increases from that point onward for most of us).

Yet an irony is that computers and Internet services can serve as aids in all these areas. I've written in the past of mobile phones being saviors as we age, for example by providing an instantly available form of extended memory.

But we also are forced to acknowledge that most Internet services still serve older persons' needs only begrudgingly, failing to fully comprehend how changing demographics are pushing an ever larger proportion of their total users into that category -- both here in the U.S. and in many other countries.

So it's painful to see Google dropping the ball in some of these areas (and to be clear, while I have the most experience with the Google aspects of these problems, these are actually industry-wide issues, by no means restricted to Google).

This is difficult to put succinctly. Over time these concerns have intertwined and combined in ways increasingly cumbersome to tease apart with precision. But if you've ever tried to provide computer/Internet technical support to an older friend or relative, you'll probably recognize this picture pretty quickly.

I'm no spring chicken myself. But I remotely provide tech support to a number of persons significantly older -- some in their 80s, and more than one well into their 90s.

And while I bitch about poor font contrast and wasted screen real estate, the technical problems of those older users are typically of a far more complex nature.

They have even more trouble with those fonts. They have motor skill issues making the use of common user interfaces difficult or in some cases impossible. Desktop interfaces that seem to be an afterthought of popular "mobile first" interface designs can be especially cumbersome for them. They can forget their passwords and be unable to follow recovery procedures successfully, often creating enormous frustration and even more complications when they try to solve the problems by themselves. The level of technical lingo thrown at them in many such instances -- that services seem to assume everyone just knows -- only frustrates them more. And so on.

But access to the Net is absolutely crucial for so many of these older users. It's not just accessing financial and utility sites that pretty much everyone now depends upon, it's staying active and in touch with friends and relatives and others, especially if they're not physically nearby and their own mobility is limited.

Keeping that connectivity going for these users can involve a number of compromises that we can all agree are not in keeping with ideal or "pure" security practices, but are realistic necessities in some cases nonetheless.

So it's often a fact of life that elderly users will use their "trusted support" person as the custodian of their recovery and two-factor addresses, and of their primary login credentials as well.

And to those readers who scream, "No! You must never, ever share your login credentials with anyone!" -- I wish you luck supporting a 93-year-old user across the country without those credentials. Perhaps you're a god with such skills. I'm not.

Because I've written about this kind of stuff so frequently, you may by now be suspecting that a particular incident has fired me off today.

You'd be correct. I've been arguing publicly with a Google program manager and some others on a Chrome bug thread, regarding the lack of persistent connection capability for Chromebooks and Chromeboxes in the otherwise excellent Chrome Remote Desktop system -- a feature that the Windows version of CRD has long possessed.

Painfully, from my perspective the conversation has rapidly degenerated into my arguing against the notion that "it's better to flush some users down the toilet than violate principles of security purity."

I prefer to assume that the arrogance suggested by the "security purity" view is one based on ignorance and lack of experience with users in need, rather than any inherent hatred of the elderly.

In fact, getting back to the title of this posting, I'm sure hatred isn't in play.

But of course whether it's hatred or ignorance -- or something else entirely -- doesn't help these users.

The Chrome OS situation is particularly ironic for me, since these are older users whom I specifically urged to move to Chrome OS when their Windows systems were failing, while assuring them that Chrome OS would be a more convenient and stable experience for them.

Unfortunately, these apparently intentional limitations in the Chrome version of CRD -- vis-a-vis the Windows version -- have been a source of unending frustration for these users, as they often struggle to find, enable, and execute the Chrome version manually every time they need help from me, and then are understandably upset that they have to sit there and refresh the connection manually every 10 minutes to keep it going. They keep asking me why I told them to leave Windows and why I can't fix these access problems that are so confusing to them. It's personally embarrassing to me.

Here's arguably the saddest part of all. If I were the average user who didn't have a clue of how Google's internal culture works and of what great people Googlers are, it would be easy to just mumble something like, "What do you expect? All those big companies are the same, they just don't care."

But that isn't the Google I know, and so it's even more frustrating to me to see these unnecessary problems persist and fester in the Google ecosystem, when I know for a certainty that Google has the capability and resources to do so much better in these areas.

And that's the truth.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 12:18 PM | Permalink

February 10, 2016

Call for Participation: Internet Political Trolls Collection Project 2016

It's no secret that vile political trolls remain massively at large in the social media landscape during this USA 2016 presidential election season.

But who are they? Who are their targets? Who do they support? What are the specific aspects of their attacks in social media comments and their other postings?

I've begun a survey to collect some detailed data on these and related questions from anyone who has themselves observed politically-oriented trolls on social media. It should take only a few minutes to complete, and you can return as often as desired to report additional trolls.

Your participation (and that of anyone else with whom you feel comfortable sharing this survey for their participation) will be greatly appreciated.

I consider individual survey responses to be private and will only publicly report aggregate and summary data from this effort.

The survey -- "Internet Political Trolls Collection Project (USA 2016)" -- is at:


Thanks very much!

Be seeing you.

And please don't feed the trolls!


Posted by Lauren at 01:06 PM | Permalink

January 29, 2016

Why Does Twitter Refuse to Shut Down Donald Trump?

A reporter asked me a provocative question some days ago: "Why do you think Twitter hasn't enforced their own Terms of Service rules when it comes to Donald Trump?"

I didn't have an immediate answer. I told him I'd look into this, think about it, and get back to him.

So I've been researching this in considerable depth.

I found that any reasonable analysis of the situation suggests that Trump should have been closed down on Twitter long ago.

To be sure, I don't regularly follow Trump on Twitter, just as I don't frequent websites devoted to close-up photos of diarrhea.

Certainly I do hear about some of his tweets from time to time, when they leak onto other social media or when the press tries to get more clicks from displaying them on cable news and such (ah, perhaps our first clue to the mystery!).

Exploring an archive of even his relatively recent Twitter activity -- which instantly reminded me somehow of a vile bully named Sheldon I knew back in elementary school -- it was startling what a hateful, deceitful spew of apparent lies and direct attacks Trump has been leveraging Twitter to deliver -- with his enormous following on Twitter, presumably to Twitter's financial benefit as well (another clue!).

It's quite a Twitter stream Trump has going there -- if you're into gawking at gruesome highway wrecks, that is. Onslaughts against individuals. Similar attacks against organizations, even against entire races. White supremacist propaganda. On and on and on. Try retrospectively reading Donald's tweets without feeling the need to vomit -- virtually impossible if you're a socialized human being and not someone raised by hyenas.

Yet as long as a tweet isn't actually illegal (irrespective of Trump's creepy, sexualized comments about his own daughter), Twitter is not obligated to take any action against anyone.

But Twitter is certainly obligated to apply the rules that they do have in an evenhanded manner. And looking back over the collection I have of complaints from Twitter users who feel Twitter terminated their accounts inappropriately -- even for a single comment that was interpreted to be disrespectful in some way -- it would appear that Twitter is coddling Trump in a unique manner indeed.

A reading of the Twitter content Terms of Service suggests at least three categories relating to hate speech and harassment that should apply to Trump but apparently haven't been applied -- even as they seem to have been rigorously enforced against other, ordinary users on a hair-trigger basis.

Are there special exceptions in the Twitter ToS for obnoxious billionaires running for the presidency? Or for tweets where the individuals, organizations, or others targeted by those tweets did not formally complain to Twitter?

No matter how deeply you study those Terms of Service, you won't find such exceptions.

But wait! Perhaps there's an exception if you're only retweeting other users' material? After all, Trump's most popular excuse for his most offensive tweets seems to be that he was "only retweeting someone else."

Nope, I can't find an exception for that, either. You retweet someone else's tweet, you own that content just as if it was your tweet originally.

The conclusion appears inescapable. Twitter apparently has voluntarily chosen to "look the other way" while Donald Trump spews forth a trolling stream of hate and other abuses that would cause any average Twitter user to be terminated in a heartbeat.

There's always room to argue the propriety or desirability of any given social media content terms of service -- or the policy precepts through which they are applied.

It is also utterly clear that if such rules are not applied to everyone with the same vigor -- particularly when there's an appearance of profiting by making exceptions for particular individuals -- the moral authority on which those rules are presumably based is decimated, rendered pointless, and reduced to a mere fiction.

In other words, we thought that Twitter was far more ethical than Donald Trump.

Apparently, that assumption is in error.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 09:54 AM | Permalink

January 24, 2016

Why I'm a Defender of YouTube

There's stuff about today's Internet that I love, and there's stuff about today's Internet that I hate. This seems entirely fair and just and proper, given that I have much the same feelings about the world at large and humanity in particular.

One of the aspects of the Internet that I love is Google's YouTube.

It is, in many ways, at the center of the Internet universe and of our Internet lives for many of us, despite advancing efforts of its various competitors to (in some cases literally and illicitly) steal its thunder.

Personally, I don't much care about the latest dance video to hit a billion YT views. And I've never monetized any of my own videos on YouTube, so my videos up there have never made me a dime, not even one that recently passed a million views itself.

That's all fine with me.

It's also fine with me that other folks do monetize, and do care about those billion dance video views -- because all of that is what helps pay the bills that keep those Google data centers humming along and spewing forth those godzillabytes of YouTube video streams.

I mostly watch YouTube for current "issues of importance" items, occasional current entertainment, educational and often money-saving "how-to" videos of all sorts, and gobs of archival searches.

In that last category tend to be not only the nostalgic clips from yesteryear -- often ones I haven't seen in decades -- but frequently incredible serendipity from YouTube's astronomical corpus of uploaders and their videos. In fact, this posting today was partly inspired by my stumbling upon wonderful 1974 videos this morning that I never knew existed, showing the comedian Marty Feldman performing classic Tom Lehrer songs. Praise be to YouTube!

There is also a seeming dark side to YouTube -- but in fact it is not actually of YouTube at all, but rather a reflection of the world's own Yin and Yang, for the astronomical quantities of video being uploaded to YouTube 24/7/365 represent but a reflection of humanity in all its wonderful glory and hideous evil. YouTube itself is no more responsible for the existence of planet Earth's problems than a mirror hanging on the wall is responsible for the images reflecting from it.

It is also my belief that most attempts to force Google to censor YouTube tend to be misguided in the extreme. For example, YouTube already prohibits explicitly violent videos that could reasonably be interpreted to incite attacks on people, animals, or property -- all addressed via YouTube's existing Terms of Service. Efforts to bury and hide all evil from public view will inevitably result in blowback that can ultimately be even more damaging to society.

There are indeed practical and reasonable limits on speech in certain extreme and narrowly defined situations, but using fears of terrorism as an excuse to try to impose broad restrictions on free speech is neither an effective nor an appropriate way to fight terrorism -- it is in fact ceding power to the very terrorist philosophies that we wish to eradicate.

Google is between a rock and a hard place when it comes to YouTube in particular. Governments around the world want to control YT for their own ends, often to the political or personal financial advantage of their own nations' leaders and other politicians.

And YouTube is lodged between the devil and the deep blue sea in terms of other dilemmas as well.

A large percentage of the Google-related queries I receive virtually every day involve YouTube. Often these are concerns from users who are complaining about problems that their uploaded videos are having with YouTube's Content ID system, or with copyright strikes, or with other related YT Terms of Use matters where they're unhappy with associated responses they've received from Google regarding their concerns.

I try to help to the limited extent that I'm able. At the least I can often offer free advice. And I often feel for them, too. I've had my own YT videos that suffered errant Content ID hits. I once saw my entire main YouTube account closed (and later restored in full, after some considerable effort on my part) due to mistaken copyright strikes.

Frequently in these queries is expressed the belief that Google is attacking them -- these individual YouTube users -- out of evil, or spite, or just because they can. There is not infrequently an implicit (or explicit) assumption that YouTube favors the "big guys" over the "little guys" when it comes to rights disputes.

But in reality there is no spite or evil there, and the perceived imbalance between users is the result of the way domestic and international laws and agreements -- for example the DMCA -- are written. These are complex issues and legal edicts by which Google must abide.

This is not to suggest that improvements in YouTube's usually automated and rather officious DMCA claim/counterclaim system wouldn't be welcome, certainly. But Google is significantly legally constrained in flexibility in these regards, and an unbiased, longitudinal examination will show that major improvements have been deployed in these YouTube processes, especially for individual uploaders, particularly over the last few years.

Yet beyond all this is the foundational truth that without Content ID and copyright strikes, without the YouTube Terms of Service and claim forms and all the rest -- including all the great people at YouTube/Google who work their asses off (tech-wise and policy-wise) to keep it all going -- YouTube as we know it today likely could not exist at all, and much of what we find so wondrous there would blink out of existence like a shooting star crossing the horizon.

There's no major moral to this post today, well, except perhaps this ...

In a time of fascist politicians spouting simplistic slogans about race, religion, terrorism, and censorship, along with whatever other pandering platitudes they believe will win them votes, prestige, power, and control -- it's worth remembering how much good the Internet brings us, and how much poorer we'd all be in so many ways for the shackling of Internet services like YouTube, in the name of such self-serving proclamations and damaging false solutions.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 12:52 PM | Permalink

January 21, 2016

Action Item: Protecting Ourselves from Encryption Backdoors

Yesterday, in "The Politicians' Encryption Backdoor Fantasies Continue -- and Legislating Pi" ( http://lauren.vortex.com/archive/001147.html ), I discussed moves in the U.S. Senate to convene a commission to proceed toward their fantasy goal of finding a way to backdoor strong encryption algorithms "while still protecting the privacy of honest users."

As I noted then, this is an impossible task, since the very act of building backdoors into these algorithms (ostensibly for law enforcement and intelligence needs) would make these encryption systems exceedingly vulnerable both to "official" abuse and vast third-party black-hat hacking attacks -- including by terrorist groups and other criminals -- who of course for themselves will continue using easily available strong crypto systems without backdoors.

I viewed the call for an encryption commission to be essentially a smokescreen for moving toward the government's ultimate goal -- being able to read all encrypted communications upon demand.

Within hours of my posting yesterday came word that there's already a bipartisan move in the Senate to not bother with any commission, but to move directly to legislation mandating law enforcement access to encrypted communications. Period.

I rest my case -- smokescreen proven. Q.E.D.

Whether or not such legislation passes immediately is not really the point, because ultimately the odds are very high that sooner or later something like it will become law here in the U.S. -- and likely in many other countries as well. Not just the obvious suspects like Russia and China, but in the EU also, which constantly speaks out of both sides of its mouth when it comes to privacy and surveillance issues.

So sometime soon -- be it one year, or two, perhaps a bit more if such laws become entangled in court cases (as seems likely) -- we will be facing the reality of strong, end-to-end encryption essentially being outlawed, at least in the context of the major Internet services that most of us depend upon.

These are the firms that government is currently most concerned about -- Google, Apple, Microsoft, and more -- who have been moving rapidly and correctly to provide their users with strong crypto (e.g. on smartphones) that even the firms themselves can't crack. Such moves have been triggered in large part by the continuing parade of government overreach when it comes to accessing the data in these devices.

Also, these same services have been moving toward providing stronger crypto for their centralized "cloud" services as well, including "only the user holds the keys" encrypted file/data storage systems.

All of these services and more will likely be targeted by government encryption backdoor legislation in coming months and years.

The question is, what are we going to do about it?

Or first, a different question.

Do we care?

The pro-backdoor argument runs something like this ...

Bad guys use encryption (to some extent not clearly known, but expanding). Government can't monitor their communications to prevent or solve terrorist attacks or other crimes (child pornography is frequently mentioned in the latter category) without access to that data. The risks and potential loss of privacy that honest users face from backdoors in these systems for law enforcement and intelligence use is the price we have to pay for living in a 21st century society.

If you're in the category just described, you likely need not read any further in this post.

The counter-argument is that serious bad guys will quickly move (if they haven't already) to crypto systems that don't have backdoors, leaving mainly honest users on the compromised systems.

Encryption experts and computer scientists are in virtually unanimous agreement that any attempts to backdoor these systems weakens them in fundamental ways, making them massively vulnerable not only to government abuse and demonstrated ineptitude (such as permitting the personal info of millions of persons to be obtained by crooks from government computers), but also hacking attacks of all sorts, including by criminal gangs and worse. With so much of our financial and personal information now online -- whether we all like that or not -- purposely weakening encryption systems for honest users is intolerable.

If you're in this second camp -- as am I -- we're back to the "What do we do about it?" question.

And actually, the answer is quite clear. Data that is already encrypted when it is stored or shared -- using strong encryption systems validated to contain no backdoors (a much tougher validation task than laymen might assume) -- is not subject to the sorts of backdoor snooping and backdoor hacking exploitation that would afflict data encrypted on systems mandated to contain backdoors.

Perhaps even more to the point, government still has ways to target particular criminals or other evildoers when they really need to -- in particular through "endpoint" surveillance of various sorts directly on targeted PCs. But generally speaking, backdoored centralized crypto systems represent much greater risks related to mass abuse, mass hacking, and mass surveillance. And this holds true irrespective of how "clever" proponents try to be about splitting up encryption keys and the various related key handling processes.

So honest, good users who feel that they deserve at least the same level of encryption protection as bad, evil users will need to be ramping up their own use of strong encryption systems locally, so data that doesn't need to be stored unencrypted in central services for processing is encrypted in ways that backdoors cannot typically penetrate.

Which data will fall into this category will be largely an individual choice, of course. Cloud environments provide immense value in a vast number of ways -- email systems, file searching, document creation and editing -- on and on. Most of these -- given current tech, anyway -- require that data be unencrypted in the cloud so that it can be processed for the user. On the other hand, for end-to-end communications -- say from one phone user to another, or between users in various other contexts -- the need for central processing of those messages (other than passing them along encrypted as they are) will often be nil. So central systems in these circumstances become the conduits of data that they do not need to decrypt nor interpret.
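To make that conduit model concrete, here's a toy sketch in Python -- strictly an illustration using a simple one-time pad, not production cryptography (real systems use vetted constructions like AES-GCM via audited libraries). The point is the architecture: encryption and decryption happen only on the endpoints, so the central relay handles nothing but an opaque blob it has no ability (and no need) to read.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with the matching key byte.
    # The key must be truly random, at least as long as the message,
    # and never reused -- hence "one-time."
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# The sender encrypts locally; only ciphertext ever leaves the machine.
message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared out-of-band; stays on the endpoints
ciphertext = encrypt(message, key)

# The central service merely relays an opaque blob it cannot read --
# there is nothing here for a mandated backdoor to reveal.
relayed = ciphertext

# The recipient, holding the key, recovers the message locally.
assert decrypt(relayed, key) == message
```

A backdoor mandate aimed at the relay accomplishes nothing in this model, because the relay never possesses keys or plaintext in the first place.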

A bitter irony is that while some terrorist groups seem to have all manner of sophisticated and relatively standardized strong encryption systems that government backdoors are unlikely to reach, ordinary honest users are faced with a confusing hodgepodge of crypto systems that are generally hard to use, often incompatible, and basically just a pain in the neck that discourage their widespread adoption, especially by non-techies.

The relatively straightforward bottom line?

Given the quite reasonable assumption that mandated encryption backdoors legislation targeting large Internet services is very likely coming -- exact timing unclear, but on the way -- efforts need to be expanded right now toward making personal encryption systems that can run on users' local computers as simple, reliable, automatic, and ubiquitous as possible.

Not to shield evil. Not to mask criminals and terrorists.

But simply to protect the good guys. The rest of us. You and me.

And that's a fact.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 11:30 AM | Permalink

January 20, 2016

The Politicians' Encryption Backdoor Fantasies Continue -- and Legislating Pi

"I Got You Babe."

I've written about law enforcement, politicians, and their hopeless fantasies of "safe" encryption backdoors so many times -- and have become so disgusted at the endlessly repeating nature of the situation -- that I really do feel like I'm hearing that old Sonny and Cher song in much the way Bill Murray did in his 1993 classic film "Groundhog Day" -- again, and again ... and yet again.

But the crypto backdoor "hits" just keep on comin' -- and today is no exception.

Now comes word that a bipartisan pair of lawmakers is introducing federal legislation to establish a national commission to figure out "how police can get at encrypted data of honest citizens without endangering those citizens' privacy at the same time."

The usual slogans are being bandied about: "What we're trying to do is get that collaboration started," said Sen. Mark Warner (D-Va.), who joined [Republican] McCaul on the call and will sponsor the upper chamber bill. "Let's get the experts in the room."

We keep hearing stuff like this from the usual suspects. Just get those brainiacs together! Get "Bill Gates" working on it! Lock all those liberal LGBT-lovin' software engineers in a basement somewhere and they'll solve the problem. Or they'll never eat pizza again!

OK, that's more of the GOP line. The Democrats also pushing crypto backdoors are wording it a bit differently.

Though some crypto backdoor proponents have laudable motives, the end result would be the same.

Perhaps "Scotty" from the original "Star Trek" said it best, when he noted that the laws of physics were immutable ("I can't change the laws of physics") -- or, I might add, of mathematics.

Not that politicians haven't tried to break these laws before. As far back as 1897, the Indiana legislature came very close to passing legislation that would have had the effect of setting the transcendental ("never ending") value of the constant "Pi" to an incorrect and fixed 3.2.

So the fact that politicians and law enforcement continue to try bend physics, math, and computer science to their wills -- irrespective of the realities -- should come as no surprise.

Any attempt to backdoor strong encryption systems will by definition make them immensely vulnerable not only to abuse by authorities, but also to outside hacking -- including by sophisticated terrorist groups! -- that would put all honest users at immense risk as ever more of our financial and other aspects of our personal lives are online.

It doesn't matter if you break up the backdoor key into a thousand pieces and distribute them to Boy and Girl Scouts sworn to only use them in a national emergency.
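For the curious, here's a toy sketch of what "splitting up the key" amounts to, using simple XOR secret sharing (an illustration only -- real escrow proposals use fancier schemes, but the structural problem is the same). Any incomplete set of shares reveals nothing, yet the key is only useful once fully reassembled -- and at that moment a complete, working backdoor key exists again at a single point:

```python
import secrets
from functools import reduce

def split_key(key: bytes, n_shares: int) -> list:
    # XOR secret sharing: generate n-1 random shares, then compute a
    # final share so that XOR-ing all n shares reproduces the key.
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(key, *shares))
    return shares + [last]

def recombine(shares: list) -> bytes:
    # XOR all shares together, column by column.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*shares))

backdoor_key = secrets.token_bytes(32)
shares = split_key(backdoor_key, 1000)  # "a thousand pieces"

# Any incomplete set of shares is statistically indistinguishable
# from random noise...
partial = recombine(shares[:999])

# ...but to actually decrypt anything, the authorities must bring
# every share together -- recreating the full key at one place, ready
# to be abused, leaked, or stolen. Splitting relocates the risk; it
# does not remove it.
assert recombine(shares) == backdoor_key
```

However cleverly the shares are distributed, the catastrophic failure mode -- a complete key in one place -- is built into the design, because the key must exist in whole form every time it is used.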

The mere act of creating any backdoor to these systems weakens them enormously and catastrophically. Even Einstein wouldn't be able to change that. And he'd be far too intelligent to ever try.

Yet, most of the law enforcement officials and politicians pushing for these "meetings of the experts" on backdoored encryption aren't actually stupid either.

So what's really going on?

In my view, most of them already realize that they would have to fundamentally weaken crypto to get backdoors, and that the industry overall quite rightly will never voluntarily go along with doing that.

Google, Facebook, Apple, and the others will be polite -- as they should be -- but will not willingly betray the security and privacy of their users with encryption backdoors.

So the odds are that what's actually going on currently with the "voluntary" backdoor crypto push is essentially a smokescreen.

It's an attempt to provide political cover for the next step, when proponents begin the "well, we tried to get cooperation first!" push for legislation that would mandate backdoors in these USA crypto systems, whether the firms want to do it or not, and irrespective of the risks to honest users.

Nor will it matter that strong encryption systems from firms outside the U.S., and from independent third parties, will continue to be available and will be the encryption systems of choice for terrorists and other criminals, who won't willingly make use of backdoored crypto once the word gets around.

This suggests that ultimately it's mostly a game of political cover, of politicians being willing to massively weaken the security and privacy of us all to ensure themselves an excuse to spout at the press when bad things happen.

That sort of attitude is sad. And depressing. And so very, very wrong.

And about as realistic as declaring Pi to be 3.2.

I think I'm going back to bed ... ... ... ... ...

"Then put your little hand in mine.
There ain't no hill or mountain,
We can't climb.
I got you babe.
I got you babe ..."

- - -

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 09:41 AM | Permalink

January 19, 2016

Understandable but Very Wrong: Google Enables Government YouTube Censorship in Pakistan

Literally within hours of the horrifying and sickening news of a 15-year-old boy in Pakistan who cut off his own right hand after he was the target of hysterical false accusations of blasphemy, comes word that Google -- in a successful bid to get a three-year YouTube ban in Pakistan lifted -- will be permitting government officials in that country, apparently all the way down to the local level, essentially unfettered rights to censor and block individual YouTube videos from view in Pakistan.

This is an enormously troubling development for free speech advocates around the world, particularly because it's impossible to overlook the relationship between the boy's actions and the upcoming Pakistan/YouTube censorship system.

The powers being ceded to the government there to censor Google at the individual YouTube video level -- arguably even worse than the EU's awful "Right To Be Forgotten" (RTBF) scheme -- continue our acceleration down the slippery slope of permitting governments to demand rights to micromanage information for their own political benefit and the personal enrichment (politically and in some cases financially) of their leaders and other politicians.

I like to think of myself as a "responsible" free speech advocate. That is, I strongly assert the importance of free speech, but acknowledge that sometimes, in carefully delineated circumstances that must be minimized as completely as possible, some restrictions are necessary.

So, for example, I generally strongly support Google/YouTube's global Terms of Service that prohibit videos that are directly violent -- such as videos that show physical abuse of people or other animals.

And I have nothing but respect for the Google policy and legal teams that must deal with these complex multinational situations. Similarly, the work done by Google engineers on politically neutral abuse detection systems, and by the human teams that help apply YouTube's anti-abuse rules, is exemplary.

I've explicitly noted the exceptional circumstances of videos that incite terrorism, e.g., recently in my discussion of "A Proposal for Dealing with Terrorist Videos on the Internet" ( http://lauren.vortex.com/archive/001139.html ).

But in Pakistan the concepts of (for example) blasphemy and government control are intertwined -- accusations of the former are frequently used for purposes of the latter -- and any discussions that the government there feels are blasphemous (by their own broad and self-serving definitions) -- or speaking out against the government in any manner -- are key targets for abusive censorship.

With Google now explicitly buying into this censorship regime as the price of removing an overall Pakistan block on YouTube -- and note that the Pakistani government apparently will be setting the standards under which YT videos will be judged in violation -- the situation in my view becomes much worse for the population there than would be the case without access to YT at all (yes, we know that some relatively small number of people have always gotten through with VPNs and proxies, but that's largely irrelevant to the overall population).

The Pakistan version of Google-enabled national censorship isn't as straightforward as, say, a relatively "simple" ban against Nazi memorabilia-related materials in France. In Pakistan, Google has become much more of a direct partner in the government's very broad, politically motivated, and personally suppressive censorship actions.

The kind of YT censorship that will be enabled in Pakistan is much more akin to how China censors its population -- where what will or will not be allowed to be seen in any media is carefully chosen and restricted to promote the government line and muzzle dissenting points of view.

I absolutely understand the pragmatic realities of having to obey laws in those countries in which Google chooses -- voluntarily -- to operate, but I find the newly announced and apparently Google-endorsed government controls over YouTube content in Pakistan to be extremely disturbing, and a horrific precedent for other countries going forward.

Everyone everywhere who is concerned about the responsible exercise of free speech should be alarmed at these developments.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 09:48 AM | Permalink

January 17, 2016

Despite Politicians, the Internet Can Save Us from Ourselves

Yesterday evening I discovered a fault in a rather complex piece of electronics apparatus here. It was not obvious to debug and I'm not an expert regarding that particular circuitry. I posted a query on a specialized Web forum that is devoted to discussing that device, and went to bed.

This morning there were several responses, and then ensued a flurry of forum messages between those unseen correspondents and myself, including magnified photos of circuit boards with overlaid circles and arrows and suggestions.

By early afternoon I had enough info to grab a soldering iron, flux, and magnifier, and attempt a patch of the board. It worked.

I posted a photo of my soldered fix for future folks with the same problem, thanked everyone, and the world kept spinning.

There might not seem to be a deep philosophical aspect to this story except for one thing. I'm sitting here in L.A. My helpful correspondents were in Canada, Germany, and the Russian Federation.

In these kinds of discussions, you're just as likely to find engineers and hobbyists chiming in to help from Japan, Iran, Pakistan, India, or Saudi Arabia as well -- literally from anywhere the Internet touches.

Persons scattered around the globe, sometimes in widely different cultures and national frameworks whose governments may even be distinctly antagonistic toward each other -- persons all bending over backwards to help each other as individuals to solve technical and other problems.

I sit here in the San Fernando Valley and I feel distinct anger about this.

Not anger about these great people around the world -- certainly not.

Rather, it's anger toward our leaders and politicians who endeavor to make us think of other countries and other cultures as homogeneous wholes -- not as actual individual beings with far more commonalities than differences by virtue of our shared humanity.

Don't trust the Iranians. Blame the Americans. Eliminate the Shias. Hate the Russians. Throw out the Muslims. On and on, ad nauseam -- the siren calls of politicians seeking to set us all at each other's throats for the sake of their own aspirations and glory.

To be sure, many of these politicians and leaders have come to despise key aspects of the Internet. They know all too well that the Net -- by facilitating the kinds of one to one communications that belie their hateful and manipulative slogans -- threatens to undermine their propaganda and controls.

So there are the ever increasing calls for Internet censorship, nowadays wrapped in an ostensible veneer of fighting terrorism, but in reality only the steppingstones toward the political goals of governments' comprehensive communications control, with the vast people power of the Internet muzzled and flogged into tattered submission.

I still recall from decades ago the very first time I communicated -- in that case via a typing link -- with someone in another country over the Net. I was sitting at a greenly glowing CRT terminal in the UCLA ARPANET basement computer room, and at the other end of the ARPANET connection was a military officer in Norway.

So we dealt with our technical issues, and then chatted about the weather and movies for a while, and then typed our goodbyes.

Much as I sit here now, I sat at that table afterwards with the roaring fans of the minicomputers around me and pondered what had just transpired.

I thought -- "My God, this could change the world so much for the better." And I remember then adding to myself -- "If the politicians don't f*** it all up."

Now all these years later, well into the 21st century, I worry about the latter aspect of my thoughts back then more than ever -- for many politicians indeed seem hell-bent on doing exactly what I had feared.

Yes, the threats of terrorism are real. Yes, the Internet is used for evil purposes as well as good ones. Yes, there will always be some extremely egregious, directly violence-inducing materials on the Net that we need to try to control -- but not at the cost of undermining the Internet's greatness.

For if there's one lesson I've learned spending my entire adult life watching the Internet grow from a few IMPs, Teletypes, and dial-up modems into the vast wonder it is today, it's that with very few exceptions the cure for problematic information is not restrictions on the ability to communicate.

Rather, we need more and better information, more communications, more one-to-one contact between individuals mutually connected by the largely unseen instrumentalities of fiber cables, routers, data centers, and Wi-Fi signals around planet Earth.

And this holds true whether we're looking for help with a personal problem, assistance with an errant circuit board -- or if we merely seek to try to save the world from mutually assured destruction.

If we permit our leaders and politicians to continue building their walls between us, walls figuratively electronic, or physically brick and mortar and steel, we will have squandered the Internet's promise as a tool for the benefit of humanity.

We can surrender to fear and demagoguery, or we can grasp the nettle and do our utmost to assure the Internet's place in a bright future for the world, rather than standing by and permitting it to be twisted into a tool of political censorship and governmental oppression.

It's ultimately up to us.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 05:24 PM | Permalink

January 14, 2016

How Accessibility Standards Enable Poor User Interfaces

I've been spending a lot of time recently on issues related to the accessibility of websites. This continues to be at or near the top of queries I get regarding the Internet in general, and Google in particular (because most queries on all Internet topics I receive tend to relate to Google, one way or another).

I've attempted to give some flavor of the frustrations people send me on these topics in some of my relatively recent postings, including among others:

UI Fail: How Our User Interfaces Help to Ruin Lives

The Three Letter Cure for Web Accessibility and Discrimination Problems

The observant reader might wonder ... how can this situation persist? Why aren't there accessibility standards for websites?

In fact, there are such standards.

But the irony is that by encouraging a "one size fits all" view of user interfaces -- typically with few or no user control options -- such standards can provide an excuse for not making interfaces more customizable, more targeted to users with particular needs, and overall better than what the standards provide for.

So, to take one case, we have low font contrast -- you know, the dim gray letters on a gray background problem (at least when viewed with aging or otherwise imperfect eyes). This is one of the issues most commonly mentioned to me regarding Google products, e.g. in the new Google+ (where some text is distinctly dimmer and harder to read than in legacy G+).

Is Google just pulling these fonts and backgrounds out of a hat?

No, they're not.

In fact, when you discuss this issue with Google directly (and Google has reached out to me for such discussions on accessibility issues -- many thanks!) you will be told that the fonts and backgrounds in question all pass WCAG 2.0 guidelines.

Odds are you've never heard of WCAG -- the W3C Web Content Accessibility Guidelines.

They come in three compliance levels, reminiscent of battery sizes: A, AA, and AAA.

Google asserts that (for example) the new G+ passes the compliance checkpoints for W3C WCAG 2.0 AA -- pretty much the end of the story from their standpoint.

And I have no reason to doubt that they do pass those compliance tests.
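For the record, that AA check is just arithmetic: WCAG 2.0 defines a "relative luminance" for each color and requires a contrast ratio of at least 4.5:1 for normal-size text. Here's a minimal sketch of the computation -- the dim gray value below is purely illustrative, not any actual Google color:

```python
# Sketch of the WCAG 2.0 contrast-ratio computation. The formulas come from
# the WCAG 2.0 definitions of "relative luminance" and "contrast ratio";
# the sample gray (#757575) is purely illustrative.

def linearize(c8):
    """Convert one sRGB channel (0-255) to linear light."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dim gray text on a white background:
ratio = contrast_ratio((0x75, 0x75, 0x75), (0xFF, 0xFF, 0xFF))
print(round(ratio, 2))    # 4.61 -- just past the 4.5:1 AA threshold
print(ratio >= 4.5)       # True: "compliant," readable or not
```

A gray that barely clears 4.5:1 is fully "AA compliant" -- and can still be genuinely hard to read with aging eyes, which is exactly the problem with treating the standard as the end of the story.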

But is WCAG magical? A gift from the heavens? The last word in creating excellent user interfaces that play well with users of widely varying needs?

Of course not.

In fact, W3C WCAG is something of a Godzilla of standards that hardly anybody really likes.

It's relatively big and complicated. Accessibility advocates often complain that it doesn't go far enough and doesn't provide for adequate customization by users.

Web services complain that it's unyielding and difficult to implement correctly.

And they're all correct.

It's a classic example of design by committee, resulting in a complex mess that doesn't serve anybody particularly well.

I'm just one guy sitting here alone in L.A. I can listen to frustrated users, I can compile their concerns, I can make suggestions.

But I can't fix any of this stuff by myself.

Yet we damned well better find a way to craft and deploy realistic fixes. And soon. Because (among other things) an aging population means ever more users with accessibility needs that are often not being met by today's user interfaces.

And as frustration turns to anger, the probabilities increase of government getting more directly involved in this area, with bureaucrats calling the shots. Oh goody.

We need to deal with this now, ourselves -- or government is going to address this complicated area with their usual delicate finesse, that is, the "bull in a china shop" approach.

Guess who will have to pick up the pieces afterwards?

Yeah, all of us.

And that's the truth.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 01:20 PM | Permalink

January 08, 2016

Social Media and Terrorism and Ourselves: The Post I Didn't Want to Write Tonight

I had not planned to post this item this evening. I actually started on it earlier today, but put it aside for another time. It's Friday, I'm tired, and the topic is just too depressing.

But when I flicked on the television a few minutes ago, I saw CNN covering the live spectacle of a Muslim woman in a hijab, who stood silently wearing a yellow star labeled Muslim, being evicted from a Donald Trump rally by a boisterous crowd of Trump supporters who would have fit right in during 1930s Germany.

And so I've pulled my depressing text back up in Google Docs, and I'll finish it here and now.

How many billions of words have been written about terrorism since 9/11? I wouldn't want to wager a guess, even while I'll admit that likely some tens of thousands or more of them -- a drop in the proverbial bucket -- have been written by yours truly.

Over the years since, there has been a marked change in the perceived terrorist threats against Western countries -- a transition from mass, directed attacks to "self-radicalized, lone-wolf" attacks, and an alarming attempt to cast the Internet in general -- and social media in particular -- as being especially complicit in the rise of the latter terrorist type.

So Western governments (and other governments too, using the West's reactions as an excuse for their own tyranny) have increasingly argued that if somehow Internet social media could be "controlled" -- if there were no way for the mentally ill, criminals, and the simply disenfranchised members of society to view radical videos or see radical websites -- these problems could be massively lessened or even perhaps eliminated.

Even ignoring for the moment that much radicalization takes place inside prisons themselves, the view that choking off the more violent, angry, or even more subtly propagandistic aspects of terrorist-related social media would even make a dent in the rise of lone-wolf, self-radicalized terrorist attacks is a horrifically and dangerously incorrect idea.

And that quiet Muslim woman with the yellow star, being thrown out of a Trump rally this evening to the delight of Trump's screaming fans, is all the proof we need.

For self-radicalization -- lone-wolf terrorism -- does not require sophisticated technology. It does not require strong encryption systems that governments around the world would subvert to the detriment of their own law-abiding citizens.

The kind of terrorism on the lips of politicos and law enforcement these days requires but two basic elements -- weapons, and not necessarily elaborate ones at all -- trivially obtained in a country awash in gun shops and hardware stores -- and the second element, simple anger of sufficient intensity.

I believe I'd be accurate in asserting that the images and sounds recorded of the eviction of that Muslim woman from Trump's fascist lovefest this evening -- already winging their way around the world's news media -- will provide more inspiration to more lone-wolf terrorists than any 100 terrorist-produced videos or terrorist propaganda websites.

It's not just Trump, of course, though at the moment he is clearly the leader of the fascist pack. It's the messages of hate that now pervade conventional media -- including mainstream news organizations. FOX News revels in it. CNN is only relatively better. And it's much the same on other outlets both in the USA and around the world.

The message to marginalized persons is that we hate them. They don't belong. They're inferior. They should carry special identification. They should be rounded-up, evicted, blocked, spied upon, and spit on.

And ironically, these messages don't simply fire up radical Islamic domestic terrorism, they energize the far more prevalent racist, white supremacist, fascist domestic terrorists who view killing a Muslim with the same joy they traditionally reserved for lynching blacks.

This is why attempts by our governments to lay false blame for terrorists upon the Internet -- the government's all too obvious iron fists only casually covered with velvet gloves -- alternately cajoling and threatening the social media firms like Google, Facebook, Twitter, and others -- are virtually all bogus accusations doomed to failure.

You could shut down every Internet social media site on the planet, and terrorism would continue because hate and anger would continue -- in fact, they would likely accelerate as a result.

Even in countries where the news media is tightly controlled -- like China and Russia -- it's impossible to prevent the people from ultimately learning what's actually going on.

And what's going on is mindless hatred, racism, bigotry, and fascism -- and so the resulting terrorism as well -- being energized by many of the men and women who would have us anoint them as leaders of our nations.

It's worse than madness, it's suicidal. The problem isn't the Internet, it's ourselves. We're all to blame, one way or another.

I feel ill.

Try to have a good weekend, fellow monsters.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 08:38 PM | Permalink

T-Mobile's CEO John Legere and the Big Lie About Internet Video

Buried among his recent expletive-laden rants against Google, EFF, and everybody else who doesn't agree with him, T-Mobile USA's CEO John Legere has explicitly claimed that he has a "proprietary technology" to detect video streams and deal with them specially -- according to EFF, essentially by just slowing them way, way down and creating a terrible user experience for many viewers (see: "T-Mobile's Tampering with Video Is Bad for Everyone, Not Just Google" - http://lauren.vortex.com/archive/001141.html).

So let's think a bit about what his ostensible "proprietary technology" might be, given that most video streams these days are in encrypted SSL/TLS data channels.

Now, I doubt very much that John has cracked SSL/TLS, nor is even he likely insane enough to be attempting man-in-the-middle attacks on encrypted communications.

So what other possibilities have we?

One would be that John is basing his assumptions about identifying video on the source of the data. For example, if he sees traffic sourcing from an IP address that he thinks can be traced back to YouTube, he declares that data to be video.

But such assumptions can become problematic very quickly.

Content distribution on the Internet these days is very complicated, often involving shared NAT addresses, CDNs (Content Delivery Networks), and a host of other complex techniques ("twisty passages, all different" galore, to paraphrase the old ADVENTURE game).

As a result, using the source IP address as a video indicator is very much a guessing game, likely with a very high error rate and many inaccurate categorizations.
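To illustrate the guessing game, here's a sketch of the kind of source-address classifier such a scheme implies. The prefixes are fake (IETF documentation ranges), since we obviously don't know what lists, if any, a carrier actually uses:

```python
# Sketch of source-address "video" guessing. The prefixes below are fake
# (IETF documentation ranges) -- purely illustrative.
import ipaddress

SUSPECTED_VIDEO_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),   # pretend this is a video CDN
    ipaddress.ip_network("198.51.100.0/24"),  # pretend this one is too
]

def looks_like_video(src_ip):
    """Declare any flow from a 'known CDN' prefix to be video traffic."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in SUSPECTED_VIDEO_PREFIXES)

print(looks_like_video("203.0.113.42"))   # True
print(looks_like_video("192.0.2.10"))     # False
```

The catch, as noted above, is that the same CDN address block may be serving software updates, web pages, and game assets right alongside video -- the address alone tells you almost nothing reliable.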

But hey, it's quite possible that John is happy to live with such error rates, especially given that inaccurately tagging a non-video stream as video is to T-Mobile's benefit under his data slowing scheme.

There's another possibility though, that is arguably the most likely of all. John may be simply looking at the amount of data that appears to be coming through connections and declaring to be video the ones that seem to contain significant amounts of "continuous" data, on the assumption that they're the most likely to be video streams.

In the surveillance terms of NSA and their various global counterparts, this would be considered to be a rudimentary form of "traffic analysis" (in fact, the analysis of ip source addresses I mentioned earlier would also fall into this category) -- that is, attempting to derive useful information from patterns of traffic flow, even when you can't decrypt the actual payload contents of encrypted communications.
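A crude version of that traffic-analysis heuristic is easy to sketch -- the thresholds here are invented for illustration, not anything T-Mobile has disclosed:

```python
# Crude "continuous high-volume flow" heuristic: flag a flow as probable
# video if its throughput stays above a threshold for most of a sliding
# window. All thresholds are invented for illustration.
from collections import deque

class FlowClassifier:
    def __init__(self, window_secs=10, min_bps=500_000, min_busy_secs=8):
        self.samples = deque(maxlen=window_secs)  # bytes seen per second
        self.min_bps = min_bps                    # "video-like" throughput
        self.min_busy_secs = min_busy_secs        # sustained seconds needed

    def observe(self, bytes_this_second):
        self.samples.append(bytes_this_second)

    def probably_video(self):
        busy = sum(1 for b in self.samples if b >= self.min_bps)
        return busy >= self.min_busy_secs

# A large encrypted file download looks exactly like a video stream:
clf = FlowClassifier()
for _ in range(10):
    clf.observe(700_000)
print(clf.probably_video())   # True -- a false positive, throttled anyway
```

Note that the classifier cannot distinguish an encrypted software download from an encrypted video stream -- both are just sustained runs of opaque bytes.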

And in fact, there is already anecdotal evidence that relatively large non-video data transfers are now seeing slowdowns on T-Mobile, exactly what one would expect if aggressive, incorrect categorizations of encrypted data as video are occurring.

The bottom line appears to be that T-Mobile may have been caught in a lie, and when they were called out on it, their CEO let loose an array of obscene, incoherent rants like some sort of nightmarish telecom industry incarnation of Donald Trump.

Unfortunately, this leaves in something of a bind the legions of T-Mobile USA customers, many of whom moved to T-Mobile specifically because they despised the various practices of AT&T, Verizon, and Sprint. Thanks to the tiny oligopoly of mobile carriers here in the U.S., we seem to be well and truly screwed.

There are various innovative mobile service resellers, like Google's Project Fi and others, but these are ultimately still dependent on the network infrastructures of the major carriers. Local mesh networks have yet to prove practical for non-techie users. And when you're not in range of a usable public Wi-Fi access point, you need a mobile carrier.

Most of the payphones are gone. Unreeling very, very, very long telephone extension cords from your car trunk on the road back to a home landline connection seems iffy at best.

Yep, the way things are going, we probably should at least be researching techniques to help keep that string taut between tin cans over several thousand miles.

Be seeing you.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 11:44 AM | Permalink

January 04, 2016

T-Mobile's Tampering with Video Is Bad for Everyone, Not Just Google

It's been clear for years that major early battles over net neutrality would likely be in the video delivery realm.

With the rise and expansion of Internet video streaming services like YouTube, Netflix, and many others, major percentages of overall Internet bandwidth are now used to deliver video streams to users, especially in their local evening hours.

Concerns about the neutrality of ISPs (whether land or mobile based) have long been simmering on at least two fronts.

One of these is bandwidth caps and limits -- how much Internet data a user may "consume" before they're blocked, throttled, or charged extra by their landline ISP or mobile carrier. Since video, especially higher quality video like HD (or now also 4K), can use a lot of data, this is a big deal to consumers -- and to video stream providers.

It gets even more complicated when dominant ISPs establish their own video services whose use does not count against users' bandwidth caps, even though such services compete directly with those of outside firms where usage does count against those same bandwidth caps. The related "fox guarding the hen house" analogies are straightforward to understand.

Current controversies regarding T-Mobile's new "Binge On" service can be more complicated to explain, because they combine key aspects of bandwidth cap issues like those mentioned above, with another aspect entirely -- T-Mobile is apparently actually tampering with outside services' video streams and slowing them down.

Google's YouTube has been particularly vocal in expressing concerns about this, and with excellent reasons.

Because what T-Mobile is doing threatens fundamental precepts of net neutrality that are crucial to keep Internet consumers from being -- frankly -- shafted, whether they realize it at the time or not.

We could very quickly get bogged down in a fascinating (well, fascinating to me) deeply technical discussion of video streaming systems, codecs, transcoding, formats, content delivery networks, and other aspects of the instrumentalities that bring Internet video to your computer, TV, tablet, or phone.

But for now I'll just put it this way ...

Getting quality video to your screen with smooth motion, and without freezes, the squarish mosaics of pixelization, smearing, breakups, and the other multitude of ways that Internet-based video can be disrupted, is immensely complex.

There are many endlessly changing variables involved, including performance characteristics of the user's device, speed of their Internet connection, buffering on their connection, characteristics of the connections between the user and the video streaming service -- a long and complicated list.

To make all this work, streaming services depend on knowing that the data they send to users is the data those users will receive, at the expected speeds and without slowing or modifications by third parties.

Once such a third party arbitrarily tampers with the video streaming experience, all bets on performance and quality for viewers are off the table.

The video service no longer knows what data will reach the user, or in what form and at what speed, nor can it necessarily depend any longer on the accuracy of the metrics that video player software sends back so that the streams can be adjusted in real time to maximize performance for the viewer.
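To see why that feedback loop matters, consider a simplified sketch of the adaptive-bitrate logic streaming services typically use -- the bitrate ladder and safety margin here are illustrative, not any real service's:

```python
# Simplified adaptive-bitrate (ABR) selection: the player reports measured
# throughput and the service picks the highest rendition that safely fits.
# The bitrate ladder and 0.8 safety margin are illustrative only.

RENDITIONS_KBPS = [300, 750, 1500, 3000, 6000]  # available video bitrates

def pick_bitrate(measured_kbps, safety=0.8):
    """Choose the highest rendition under a safety margin of throughput."""
    budget = measured_kbps * safety
    usable = [r for r in RENDITIONS_KBPS if r <= budget]
    return usable[-1] if usable else RENDITIONS_KBPS[0]

print(pick_bitrate(8000))   # 6000: healthy connection, full quality
print(pick_bitrate(1600))   # 750: a throttled link silently forces the
                            # stream down-tier, regardless of what the
                            # network could actually have delivered
```

When a third party throttles the link, the player's throughput measurements reflect the throttle, not the network, so the service's quality decisions are made on falsified information.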

Small wonder that Google is upset about what T-Mobile is doing. If I were running an Internet video streaming service, I'd be damned angry about it.

Obviously, I'm not in a position to negotiate a solution to the current situation regarding T-Mobile.

But from my standpoint, this kind of T-Mobile saga is a very disappointing and worrisome (though not unexpected) development -- especially for someone who has long been concerned about exactly these kinds of scenarios shredding the net neutrality concepts that have been crucial to the development of the Internet and the ability of new players to compete.

Remember, if ISPs and mobile carriers feel that they can manipulatively inject themselves without penalty or proscription into the data transactions of outside services and their users, it likely won't be very long at all before we see similar large-scale tampering with non-video Internet data becoming the norm as well.

And then we'd all be the chickens at the mercy of the fox.

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 06:58 PM | Permalink

December 24, 2015

Wishing on a Drone: Analyzing the U.S. Air Force's New "Portable Hobby Drone Disruptors" Solicitation

One thing is certainly clear. Governments around the world are having a very difficult time coming to grips with a technological reality. Inexpensive and powerful hobby drone systems, that can be trivially purchased -- or be assembled from scratch using commodity parts and open source firmware -- are not going away. In fact their proliferation has only begun, and -- like it or not -- there are no effective means available to control them.

Yes, the potential for serious drone accidents -- and even attacks -- is real. But so far, the suggested approaches to dealing with this reality seem more out of a Disney fantasy film than anything else.

Not that governments aren't trying.

Here in the U.S., we have the new FAA hobby drone registration requirement, which won't prevent a single drone incident (and bad actors will never register or accurately register), but will present a potential privacy mess for law-abiding citizens -- the FAA has now admitted that names and physical addresses of registrants will be publicly accessible online via their database. More on this at my earlier blog entry:


Over in Japan, they're talking about trying to use bigger drones with nets to try to capture hobby drones. I'm not kidding! I'm picturing the attack drones and target drones getting all tangled up together in the nets and plummeting to earth to hit whatever is unfortunate enough to be underneath. Ouch. Seems like a concept from "Godzilla vs. Dronera" to me. (Hey, Toho, if you use this idea, I want a royalty!)

But the more direct, military approach is also in play.

The U.S. Air Force has just issued a solicitation for a radio-based "Portable Anti Drone Defense" system -- essentially a remote drone disruption device that can be easily used by someone familiar with -- well -- shooting guns. The Air Force wants three units to start with. Delivery required 30 days after awarding of the contract.

You can learn all about it here:


It does indeed make for interesting reading, and I thought it might be instructive to dig into the technical details a bit.

So here we go.

The requirement is specifically addressed to the disruption of commercially available personal drones. This appears to implicitly admit that self-built drones (built from easily available commodity parts, as I noted above) may represent a more problematic target category.

In practice though, even commercially available drones will often be running altered and/or open source firmware, making their behavior characteristics less of a sure bet (to say the least).

A key attribute of the Drone Disruptor is that it be able to interfere with drone operator communications links in the 2.4 and 5.8 GHz unlicensed bands.

These of course are the same bands used for Wi-Fi, and are indeed the most common locations for hobby drone comm links. (More advanced hobbyists also may control their drones through ground station links in the 433 MHz and/or 915 MHz bands, but who am I to tell anything to the Air Force?)

Another key bullet point of the solicitation is the ability to interfere with the GPS receivers that an increasing number of drones use for Return to Launch (RTL) functions, and for fully autonomous "waypoint" flights that can proceed without any operator comm link active.

All of this gets really, seriously complicated in practice, because any given hobby-class drone can behave in so many different ways (both planned and unplanned) when faced with the sorts of disruptions the USAF has in mind.

The cheapest variety are usually completely dependent on the comm link for flight stability. Jam or otherwise disrupt the link, and they'll usually go crazy and come crashing down.

It's a taller order if you want to actually take over control of such drones, since you need to have a compatible transmitter and a way to "bind" it to the receiver. Not impossible by any means, but a lot tougher, especially if a drone is unstable during the comm link attack process.

More sophisticated hobby drones can be programmed to do pretty much anything in the case of their comm link being interrupted or tampered with. They might be configured to just "loiter" in position, or more commonly to activate that RTL -- Return to Launch -- function that I mentioned (yes, handy if you want to trace a drone back to its point of origin).

But many hobby drones now include sophisticated GPS receivers and magnetometers (that is, electronic compasses) -- and sometimes more than one of either or both for flight control redundancy.

This is obviously why the USAF solicitation includes GPS jamming requirements (it doesn't mention anything about magnetometers).

Here again though, how any given drone will react to such interference is difficult to predict with any degree of accuracy, especially if it isn't running the firmware you presume it is (and we know that even commercial drones with restricted firmware get "rooted" and "jailbroken" to run "unapproved" firmware without restrictions, often by users just to prove that they could do it).

For example, in the case of GPS disruption, a drone could be programmed to simply fly away as far as it can using its magnetometer references. Even without reliable magnetometer readings, a drone could execute a "dead reckoning" escape plan using only its internal electronic accelerometers and gyros (even cheap toy drones now usually include three of each to deal with the required calculations for stable flight in 3D space).

What's more, at lower altitudes, a small, $100 laser ranging ("LIDAR") system can provide another source of internal control data.

If you weren't already familiar with the field of modern hobby drones, your reaction to this discussion might understandably be something like, "Gee, I didn't realize this stuff had gotten so sophisticated."

But sophisticated it is, and becoming more so at a staggeringly fast rate.

The bottom line seems to be that while it's understandable that the USAF would wish for a portable magic box that can "shoot down" drones via radio jamming and other remote techniques, the ability of such a system to be effective against other than the "low hanging fruit" of less sophisticated hobby-class drones seems notably limited at best.

And that's a truth that all the "wishing on a drone" isn't going to change.

So if a drone shows up under your Christmas tree, please do us all a favor and fly it responsibly!

Merry Christmas and best for the holidays, everyone!

I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 03:10 PM | Permalink

December 21, 2015

A Proposal for Dealing with Terrorist Videos on the Internet

As part of the ongoing attempts by politicians around the world to falsely demonize the Internet as a fundamental cause of (or at least a willing partner in) the spread of radical terrorist ideologies, arguments have tended to focus along two parallel tracks.

First is the notorious "We have to do something about evil encryption!" track. This is the dangerously loony "backdoors into encryption for law enforcement and intelligence agencies" argument, which would result in the bad guys having unbreakable crypto, while honest citizens would have their financial and other data made vastly more vulnerable to attacks by black hat hackers as never before. That this argument is made by governments that have repeatedly proven themselves incapable of protecting citizens' data in government databases makes this line of "reasoning" all the more laughable. More on this at:

Why Governments Lie About Encryption Backdoors:

The other track in play relates to an area where there is much more room for reasoned discussion -- the presence on the Net of vast numbers of terrorist-related videos, particularly the ones that directly promote violent attacks and other criminal acts.

Make no mistake about it, there are no "magic wand" solutions to be found for this problem, but perhaps we can move the ball in a positive direction with some serious effort.

Both policy and technical issues must be in focus.

In the policy realm, all legitimate Web firms already have Terms of Service (ToS) of some sort, most of which (in one way or another) already prohibit videos that directly attempt to incite violent attacks or display actual acts such as beheadings (and, for example, violence to people and animals in non-terrorism contexts). How to more effectively enforce these terms I'll get to in a moment.

When we move beyond such directly violent videos, the analysis becomes more difficult, because we may be looking at videos that discuss a range of philosophical aspects of radicalism (both international and/or domestic in nature, and sometimes related to hate groups that are not explicitly religious). Often these videos do not make the kinds of direct, explicit calls to violence that we see in that other category of videos discussed just above.

Politicians tend to promote the broadest possible censorship laws that they can get away with, and so censorship tends to be a slippery slope that starts off narrowly and rapidly expands to other than the originally targeted types of speech.

We must also keep in mind that censorship per se is solely a government power -- they're the ones with the prison cells and shackles to seriously enforce their edicts. The Terms of Service rules promulgated by Web services are independent editorial judgments regarding what they do or don't wish to host on their facilities.

My view is that it would be a lost cause, and potentially a dangerous infringement on basic speech and civil rights, to attempt the eradication from the Net of videos in the second category I noted -- the ones basically promoting a point of view without explicitly promoting or displaying violent acts. It would be all too easy for such attempts to morph into broader, inappropriate controls on speech. And frankly, it's very important that we be able to see these videos so that we can analyze and prepare for the philosophies being so promoted.

The correct way to fight this class of videos is with our own information, of course. We should be actively explaining why (for example) ISIL/ISIS/IS/Islamic State/Daesh philosophies are the horrific lies of a monstrous death cult.

Yes, we should be doing this effectively and successfully. And we could, if we put sufficient resources and talent behind such information efforts. Unfortunately, Western governments in particular have shown themselves to be utterly inept in this department to date.

Have you seen any of the current ISIL recruitment videos? They're colorful, fast-paced, energetic, and incredibly professional. Absolutely state of the art 21st century propaganda aimed at young people.

By contrast, Western videos that attempt to push back against these groups seem more on the level of the boring health education slide shows we were shown in class back when I was in elementary school.

Small wonder that we're losing this information war. This is something we can fix right now, if we truly want to.

As for that other category of videos -- the directly violent and violence-inciting ones that most of us would agree have no place in the public sphere (whether they involve terrorist assassinations or perverts crushing kittens), the technical issues involved are anything but trivial.

The foundational issue is that immense amounts of video are being uploaded to services like YouTube (and now Facebook and others) at incredible rates that make any kind of human "previewing" of materials before publication entirely impractical, even if there were agreement (which there certainly is not) that such previewing was desirable or appropriate.

Services like Google's YouTube run a variety of increasingly sophisticated automated systems to scan for various content potentially violating their ToS, but these systems are not magical in nature, and a great deal of material slips through and can stay online for long periods.

A main reason for this is that uploaders attempting to subvert the system -- e.g., by uploading movies and TV shows to which they have no rights, but that they hope to monetize anyway -- employ a vast range of techniques to try to prevent their videos from being detected by YouTube's systems. Some of these methods leave the results looking orders of magnitude worse than an old VHS tape, but the point is that a continuing game of whack-a-mole is inevitable, even with continuing improvements in these systems, especially considering that false positives must be avoided as well.

These facts tend to render nonsensical recent claims by some (mostly nontechnical) observers that it would be "simple" for services like YouTube to automatically block "terrorist" videos, in the manner that various major services currently detect child porn images. One major difference is that those still images are detected via data "fingerprinting" techniques that are relatively effective on known still images compared against a known database, but are relatively useless outside the realm of still images, especially for videos of varied origins that are routinely manipulated by uploaders specifically to avoid detection. Two completely different worlds.

So are there practical ways to at least help to limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts?

I believe there are.

First -- and this would seem rather elementary -- video viewers need to know that they even have a way to report an abusive video. And that mechanism shouldn't be hidden!

For example, on YouTube currently, there is no obvious "abuse reporting" flag. You need to know to look under the nebulous "More" link, and also realize that the choice under there labeled "Report" includes abuse situations.

User Interface Psychology 101 tells us that if viewers don't see an abuse reporting choice clearly present when viewing a video, it won't occur to many of them that reporting is even possible, so they're unlikely to go digging around under "More" or anywhere else to find such a reporting system.

A side effect of my recommendation to make an obvious and clear abuse reporting link visible on the main YouTube play page (and similarly placed for other video services) would be the likelihood of a notable increase in the number of abuse reports, both accurate and not. (I suspect that the volume of reports may have been a key reason that abuse links have been increasingly "hidden" on these services' interfaces over time).

This is not an inconsequential problem. Significant increases in abuse reports could swamp human teams working to evaluate them and to make the often complicated "gray area" determinations about whether or not a given reported video should stay online. Again, we're talking about a massive scale of videos.

So there's also a part two to my proposal.

I suggest that consideration be given to using volunteer or paid, "crowdsourced" populations of Internet users -- on a large scale designed to average out variations in cultural attitudes within any given localization -- to act as an initial "filter" for specific classes of abuse reports regarding publicly available videos.

There are all kinds of complicated and rather fascinating details in even designing a system like this that could work properly, fairly, and avoid misuse. But the bottom line would be to help reduce to manageable levels the abuse reports that would typically reach the service provider teams, especially if significantly more reports were being made -- and these teams would still be the only individuals who could actually choose to take specific reported videos offline.
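The two-stage triage described above might look something like this in thumbnail form. All of the names and thresholds here are hypothetical choices of mine for illustration; a real design would need to handle reviewer reputation, vote weighting, and abuse of the reporting system itself:

```python
# Hypothetical crowd-triage sketch: crowd reviewers vote on whether a
# reported video is abusive, and only reports with strong agreement are
# escalated to the provider's own staff -- who alone can actually take
# a video offline. Thresholds are illustrative, not recommendations.

def triage(votes, min_votes=5, escalate_ratio=0.6):
    """votes: list of booleans, True = reviewer judged the video
    abusive. Returns 'escalate', 'dismiss', or 'need_more_votes'."""
    if len(votes) < min_votes:
        return "need_more_votes"
    ratio = sum(votes) / len(votes)
    if ratio >= escalate_ratio:
        return "escalate"   # forwarded to the provider's human team
    return "dismiss"        # filtered out before reaching staff
```

The design point is the funnel: the crowd layer absorbs the increased report volume that a more visible abuse link would generate, while the final takedown decision stays with the service provider's own teams.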

Finding sufficient volunteers for such a system -- albeit ones with strong stomachs considering what they'll be viewing -- would probably not prove to be particularly difficult. There are lots of folks out there who want to do their part toward helping with these issues. Nor is it necessarily the case that volunteers alone must fill these roles. This is important work, and finding some way to compensate these reviewers for their efforts could prove worthwhile for everyone concerned.

This is only a thumbnail sketch of the concept of course. But these are big problems that are going to require significant solutions. I fervently hope we can work on these issues ourselves before politicians and government bureaucrats impose their own "solutions" that will almost certainly do far more harm than good, with resulting likely untold collateral damage as well.

I believe that we can make serious inroads in these areas if we choose to do so.

One thing's for sure though. If we don't work to solve these problems ourselves, we'll be giving governments yet another excuse for the deployment of ever more expansive censorship agendas that will ultimately muzzle us all.

Let's try to keep that nightmare from happening.

All the best to you and yours for the holidays!

Be seeing you.

I have consulted to Google, but I am not currently
doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 11:51 AM | Permalink

December 14, 2015

Privacy Nightmare: Own a Drone? FAA Wants Your Credit Card Number

Oh goodie. The FAA has announced its ultra-rushed plan for a drone registry -- they desperately wanted to get this on the books before Christmas. It's worse than even the most vocal critics had anticipated:


Over the next 60 days, the FAA is requiring that anyone who flies drones outside (other than very small toy drones) must register on a web site (in theory paper-based filing is possible, but the FAA obviously anticipates that most registrations will be made over the web).

The FAA is also demanding your credit card number before you fly. In fact, they demand $5 via credit card every three years. Forever.

Even though the signup fee is waived for the first 30 days after Dec. 21 this year, the government still requires your credit card number for "verification" purposes. And because, hey, government agencies can never have enough credit card numbers on file.

No need to worry though, right? All that required personal information -- name, physical/mailing address, credit card data, email address, etc. -- will be in the warm embrace of a "third party contractor" who no doubt will take really good care of it to meet the abysmal security and privacy practices of the federal government.

The black hat hackers are already salivating over this one. Home addresses! Credit cards! "Hey comrade, do they ship Porsches to Moscow?"

Speaking of privacy, the FAA discussion of the privacy practices for this massive new database of personal information can best be described as exceedingly vague. Clearly it will be searchable on demand by various entities. Who exactly? For what purposes? What can they then do with the information obtained? Who the hell knows?

My guess is that illicit credentials for accessing aspects of this database will be floating around the Net faster than you can say "Danger, Will Robinson!"

The FAA admits that "bad actors" -- you know, the "drone terrorists" we keep being warned about, or irresponsible drone pilots -- aren't likely to accurately register, or to register at all. But hey, $5 and a bundle of personal info from all the honest drone owners every three years is a pretty good haul anyway. And it makes the government look like it's doing something about drone safety when in reality its plan isn't likely to prevent a single drone accident (or attack).

This is government operating in its maximal disingenuous mode -- creating massive new problems instead of presenting realistic proposals for solving genuine existing problems.

But we expected no less.

Oh, there is some good news. The FAA says you don't have to register your Frisbee.

Now isn't that nice?

Be seeing you.

I have consulted to Google, but I am not currently doing so.
My opinions expressed here are mine alone.

Posted by Lauren at 10:17 AM | Permalink
