December 24, 2015

Wishing on a Drone: Analyzing the U.S. Air Force's New "Portable Hobby Drone Disruptors" Solicitation

One thing is certainly clear. Governments around the world are having a very difficult time coming to grips with a technological reality. Inexpensive and powerful hobby drone systems -- which can be trivially purchased, or assembled from scratch using commodity parts and open source firmware -- are not going away. In fact, their proliferation has only begun, and -- like it or not -- there are no effective means available to control them.

Yes, the potential for serious drone accidents -- and even attacks -- is real. But so far, the suggested approaches to dealing with this reality seem more out of a Disney fantasy film than anything else.

Not that governments aren't trying.

Here in the U.S., we have the new FAA hobby drone registration requirement, which won't prevent a single drone incident (bad actors will simply not register, or won't register accurately), but which does present a potential privacy mess for law-abiding citizens -- the FAA has now admitted that the names and physical addresses of registrants will be publicly accessible online via its database. More on this at my earlier blog entry:

https://lauren.vortex.com/archive/001138.html

Over in Japan, they're talking about using bigger drones with nets to try to capture hobby drones. I'm not kidding! I'm picturing the attack drones and the target drones getting all tangled up together in the nets and plummeting to earth, hitting whatever is unfortunate enough to be underneath. Ouch. Seems like a concept from "Godzilla vs. Dronera" to me. (Hey, Toho, if you use this idea, I want a royalty!)

But the more direct, military approach is also in play.

The U.S. Air Force has just issued a solicitation for a radio-based "Portable Anti Drone Defense" system -- essentially a remote drone disruption device that can be easily used by someone familiar with -- well -- shooting guns. The Air Force wants three units to start with, with delivery required within 30 days of contract award.

You can learn all about it here:

https://www.fbo.gov/index?s=opportunity&mode=form&id=7495ac616b40525dfbb5c9840a89a726

It does indeed make for interesting reading, and I thought it might be instructive to dig into the technical details a bit.

So here we go.

The requirement is specifically addressed to the disruption of commercially available personal drones. This appears to be an implicit admission that self-built drones (assembled from easily available commodity parts, as I noted above) may represent a more problematic target category.

In practice, though, even commercially available drones will often be running altered and/or open source firmware, making their behavioral characteristics less of a sure bet (to say the least).

A key attribute of the Drone Disruptor is that it be able to interfere with drone operator communications links in the 2.4 and 5.8 GHz unlicensed bands.

These of course are the same bands used for Wi-Fi, and are indeed the most common locations for hobby drone comm links. (More advanced hobbyists may also control their drones through ground station links in the 433 MHz and/or 915 MHz bands, but who am I to tell the Air Force anything?)

Another key bullet point of the solicitation is the ability to interfere with the GPS receivers that an increasing number of drones use for Return to Launch (RTL) functions, and for fully autonomous "waypoint" flights that can proceed without any operator comm link active.

All of this gets really, seriously complicated in practice, because any given hobby-class drone can behave in so many different ways (both planned and unplanned) when faced with the sorts of disruptions the USAF has in mind.

The cheapest models are usually completely dependent on the comm link for flight stability. Jam or otherwise disrupt the link, and they'll usually go crazy and come crashing down.

It's a taller order if you want to actually take over control of such drones, since you need to have a compatible transmitter and a way to "bind" it to the receiver. Not impossible by any means, but a lot tougher, especially if a drone is unstable during the comm link attack process.

More sophisticated hobby drones can be programmed to do pretty much anything when their comm link is interrupted or tampered with. They might be configured to just "loiter" in position, or more commonly to activate that RTL -- Return to Launch -- function that I mentioned (yes, handy if you want to trace a drone back to its point of origin).
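Just to make the range of outcomes concrete, here's a minimal sketch -- in Python, and emphatically not any real flight controller's firmware -- of the kind of failsafe decision logic involved. All of the names, options, and fallback rules here are invented purely for illustration:

    from enum import Enum, auto

    class Action(Enum):
        LAND = auto()              # descend more or less in place
        LOITER = auto()            # hold position (needs a GPS fix)
        RETURN_TO_LAUNCH = auto()  # fly home (needs GPS and a recorded launch point)
        CONTINUE_MISSION = auto()  # keep flying pre-programmed waypoints (needs GPS)

    def on_link_loss(configured: Action, has_gps_fix: bool) -> Action:
        """Hypothetical failsafe: pick a behavior when the control link drops out."""
        needs_gps = {Action.LOITER, Action.RETURN_TO_LAUNCH, Action.CONTINUE_MISSION}
        if configured in needs_gps and not has_gps_fix:
            return Action.LAND   # degrade gracefully if positioning is being jammed too
        return configured

The point is simply that the same jamming event can produce wildly different outcomes depending on a handful of configuration choices the operator made long before the drone ever left the ground.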

But many hobby drones now include sophisticated GPS receivers and magnetometers (that is, electronic compasses) -- and sometimes more than one of either or both for flight control redundancy.

This is obviously why the USAF solicitation includes GPS jamming requirements (it doesn't mention anything about magnetometers).

Here again though, how any given drone will react to such interference is difficult to predict with any degree of accuracy, especially if it isn't running the firmware you presume it is (and we know that even commercial drones with restricted firmware are routinely "rooted" and "jailbroken" to run "unapproved" firmware without restrictions -- often by users just to prove that they can).

For example, in the case of GPS disruption, a drone could be programmed to simply fly away as far as it can using its magnetometer references. Even without reliable magnetometer readings, a drone could execute a "dead reckoning" escape plan using only its internal electronic accelerometers and gyros (even cheap toy drones now usually carry three-axis versions of each, to handle the calculations required for stable flight in 3D space).
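For the curious, here's a toy sketch (again Python, again not anyone's actual firmware) of what such a "dead reckoning" escape amounts to -- integrating gyro yaw rate into a heading estimate and accelerometer readings into a position estimate, with no GPS involved at all. It ignores sensor bias, gravity compensation, and full 3D attitude, all of which real systems must handle:

    import math

    def dead_reckon(samples, dt, heading0=0.0):
        """Crude 2D dead reckoning from (yaw_rate_rad_s, forward_accel_m_s2) samples."""
        heading, velocity, x, y = heading0, 0.0, 0.0, 0.0
        for yaw_rate, accel in samples:
            heading += yaw_rate * dt    # gyro integration (drifts slowly)
            velocity += accel * dt      # accelerometer integration (drifts faster)
            x += velocity * math.cos(heading) * dt
            y += velocity * math.sin(heading) * dt
        return x, y

The position estimate drifts badly over time, but a drone doesn't need pinpoint accuracy to simply get away from a jamming source.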

What's more, at lower altitudes, a small, $100 laser ranging ("LIDAR") system can provide another source of internal control data.

If you weren't already familiar with the field of modern hobby drones, your reaction to this discussion might understandably be something like, "Gee, I didn't realize this stuff had gotten so sophisticated."

But sophisticated it is, and becoming more so at a staggeringly fast rate.

The bottom line seems to be that while it's understandable that the USAF would wish for a portable magic box that can "shoot down" drones via radio jamming and other remote techniques, the ability of such a system to be effective against other than the "low hanging fruit" of less sophisticated hobby-class drones seems notably limited at best.

And that's a truth that all the "wishing on a drone" isn't going to change.

So if a drone shows up under your Christmas tree, please do us all a favor and fly it responsibly!

Merry Christmas and best for the holidays, everyone!

--Lauren--
I have consulted to Google, but I am not currently doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 03:10 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


December 21, 2015

A Proposal for Dealing with Terrorist Videos on the Internet

As part of the ongoing attempts by politicians around the world to falsely demonize the Internet as a fundamental cause of (or at least a willing partner in) the spread of radical terrorist ideologies, arguments have tended to focus along two parallel tracks.

First is the notorious "We have to do something about evil encryption!" track. This is the dangerously loony "backdoors into encryption for law enforcement and intelligence agencies" argument, which would result in the bad guys having unbreakable crypto, while honest citizens would have their financial and other data made more vulnerable than ever to attacks by black hat hackers. That this argument is made by governments that have repeatedly proven themselves incapable of protecting citizens' data in government databases makes this line of "reasoning" all the more laughable. More on this at:

Why Governments Lie About Encryption Backdoors:
https://lauren.vortex.com/archive/001137.html

The other track in play relates to an area where there is much more room for reasoned discussion -- the presence on the Net of vast numbers of terrorist-related videos, particularly the ones that directly promote violent attacks and other criminal acts.

Make no mistake about it, there are no "magic wand" solutions to be found for this problem, but perhaps we can move the ball in a positive direction with some serious effort.

Both policy and technical issues must be in focus.

In the policy realm, all legitimate Web firms already have Terms of Service (ToS) of some sort, most of which (in one way or another) already prohibit videos that directly attempt to incite violent attacks or display actual acts such as beheadings (and, for example, violence to people and animals in non-terrorism contexts). How to more effectively enforce these terms I'll get to in a moment.

When we move beyond such directly violent videos, the analysis becomes more difficult, because we may be looking at videos that discuss a range of philosophical aspects of radicalism (both international and/or domestic in nature, and sometimes related to hate groups that are not explicitly religious). Often these videos do not make the kinds of direct, explicit calls to violence that we see in that other category of videos discussed just above.

Politicians tend to promote the broadest possible censorship laws that they can get away with, and so censorship tends to be a slippery slope that starts off narrowly and rapidly expands to other than the originally targeted types of speech.

We must also keep in mind that censorship per se is solely a government power -- they're the ones with the prison cells and shackles to seriously enforce their edicts. The Terms of Service rules promulgated by Web services are independent editorial judgments regarding what they do or don't wish to host on their facilities.

My view is that it would be a lost cause, and potentially a dangerous infringement on basic speech and civil rights, to attempt the eradication from the Net of videos in the second category I noted -- the ones basically promoting a point of view without explicitly promoting or displaying violent acts. It would be all too easy for such attempts to morph into broader, inappropriate controls on speech. And frankly, it's very important that we be able to see these videos so that we can analyze and prepare for the philosophies being so promoted.

The correct way to fight this class of videos is with our own information, of course. We should be actively explaining why (for example) ISIL/ISIS/IS/Islamic State/Daesh philosophies are the horrific lies of a monstrous death cult.

Yes, we should be doing this effectively and successfully. And we could, if we put sufficient resources and talent behind such information efforts. Unfortunately, Western governments in particular have shown themselves to be utterly inept in this department to date.

Have you seen any of the current ISIL recruitment videos? They're colorful, fast-paced, energetic, and incredibly professional. Absolutely state of the art 21st century propaganda aimed at young people.

By contrast, Western videos that attempt to push back against these groups seem more on the level of the boring health education slide shows we were shown in class back when I was in elementary school.

Small wonder that we're losing this information war. This is something we can fix right now, if we truly want to.

As for that other category of videos -- the directly violent and violence-inciting ones that most of us would agree have no place in the public sphere (whether they involve terrorist assassinations or perverts crushing kittens), the technical issues involved are anything but trivial.

The foundational issue is that immense amounts of video are being uploaded to services like YouTube (and now Facebook and others) at incredible rates that make any kind of human "previewing" of materials before publication entirely impractical, even if there were agreement (which there certainly is not) that such previewing was desirable or appropriate.

Services like Google's YouTube run a variety of increasingly sophisticated automated systems to scan for various content potentially violating their ToS, but these systems are not magical in nature, and a great deal of material slips through and can stay online for long periods.

A main reason for this is that uploaders attempting to subvert the system -- e.g., by uploading movies and TV shows to which they have no rights, but which they hope to monetize anyway -- employ a vast range of techniques to try to prevent their videos from being detected by YouTube's systems. Some of these methods leave the results looking orders of magnitude worse than an old VHS tape, but the point is that a continuing game of whack-a-mole is inevitable, even with continuing improvements in these systems, especially considering that false positives must be avoided as well.

These facts tend to render nonsensical the recent claims by some (mostly nontechnical) observers that it would be "simple" for services like YouTube to automatically block "terrorist" videos, in the manner that various major services currently detect child porn images. One major difference is that those still images are detected via data "fingerprinting" techniques that are relatively effective for known still images compared against a known database, but relatively useless outside the realm of still images -- especially for videos of varied origins that are routinely manipulated by uploaders specifically to avoid detection. Two completely different worlds.
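To give a feel for why the still-image case is comparatively tractable, here's a toy "perceptual hash" in Python using the Pillow imaging library -- a deliberately simplified stand-in for, not a description of, the far more robust fingerprinting systems those services actually deploy:

    from PIL import Image

    def average_hash(path, hash_size=8):
        """Shrink to a small grayscale thumbnail, set one bit per pixel vs. the mean."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return int("".join("1" if p > mean else "0" for p in pixels), 2)

    def hamming_distance(a, b):
        """Count differing bits between two hashes."""
        return bin(a ^ b).count("1")

Near-identical copies of a known still image land within a few bits of each other in a scheme like this; recut, cropped, mirrored, overlaid, or re-encoded video frames generally don't, which is a big part of why the video problem is so much harder.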

So are there practical ways to at least help to limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts?

I believe there are.

First -- and this would seem rather elementary -- video viewers need to know that they even have a way to report an abusive video. And that mechanism shouldn't be hidden!

For example, on YouTube currently there is no obvious "abuse reporting" flag. You need to know to look under the nebulous "More" link, and also to realize that the "Report" choice under there covers abuse situations.

User Interface Psychology 101 tells us that if viewers don't see an abuse reporting choice clearly present while viewing a video, it won't occur to many of them that reporting an abusive video is even possible, so they're unlikely to go digging around under "More" or anywhere else to find such a reporting system.

A side effect of my recommendation to make an obvious and clear abuse reporting link visible on the main YouTube play page (and similarly placed for other video services) would be the likelihood of a notable increase in the number of abuse reports, both accurate and not. (I suspect that the volume of reports may have been a key reason that abuse links have been increasingly "hidden" on these services' interfaces over time).

This is not an inconsequential problem. Significant increases in abuse reports could swamp human teams working to evaluate them and to make the often complicated "gray area" determinations about whether or not a given reported video should stay online. Again, we're talking about a massive scale of videos.

So there's also a part two to my proposal.

I suggest that consideration be given to using volunteer or paid "crowdsourced" populations of Internet users -- on a scale large enough to average out variations in cultural attitudes across localizations -- to act as an initial "filter" for specific classes of abuse reports regarding publicly available videos.

There are all kinds of complicated and rather fascinating details involved in designing a system like this so that it works properly and fairly, and avoids misuse. But the bottom line would be to help reduce the abuse reports that actually reach the service provider teams to manageable levels, especially if significantly more reports were being made -- and those teams would still be the only individuals who could actually choose to take specific reported videos offline.
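Purely as an illustration of the flavor of such a design -- every name and threshold below is invented, not a description of any existing service -- a crowdsourced triage layer might look something like this:

    from collections import defaultdict

    ESCALATE_VOTES = 5   # independent "yes, this violates the ToS" votes required
    MIN_LOCALES = 3      # spread agreement across regions to average out cultural bias

    votes = defaultdict(list)   # video_id -> list of (reviewer_locale, flagged) tuples

    def record_vote(video_id, locale, flagged):
        """Collect crowdsourced votes; only provider staff can actually remove a video."""
        votes[video_id].append((locale, flagged))
        agreeing = [loc for loc, f in votes[video_id] if f]
        if len(agreeing) >= ESCALATE_VOTES and len(set(agreeing)) >= MIN_LOCALES:
            return "escalate_to_staff"
        return "keep_collecting"

The crowd never takes anything down; it merely decides which reports are worth a professional reviewer's limited time.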

Finding sufficient volunteers for such a system -- albeit ones with strong stomachs, considering what they'd be viewing -- would probably not prove particularly difficult. There are lots of folks out there who want to do their part toward helping with these issues. Nor is it necessarily the case that these roles must be filled only by volunteers. This is important work, and finding some way to compensate these reviewers for their efforts could prove worthwhile for everyone concerned.

This is only a thumbnail sketch of the concept of course. But these are big problems that are going to require significant solutions. I fervently hope we can work on these issues ourselves before politicians and government bureaucrats impose their own "solutions" that will almost certainly do far more harm than good, with resulting likely untold collateral damage as well.

I believe that we can make serious inroads in these areas if we choose to do so.

One thing's for sure though. If we don't work to solve these problems ourselves, we'll be giving governments yet another excuse for the deployment of ever more expansive censorship agendas that will ultimately muzzle us all.

Let's try to keep that nightmare from happening.

All the best to you and yours for the holidays!

Be seeing you.

--Lauren--
I have consulted to Google, but I am not currently
doing so -- my opinions expressed here are mine alone.

Posted by Lauren at 11:51 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


December 14, 2015

Privacy Nightmare: Own a Drone? FAA Wants Your Credit Card Number

Oh goodie. The FAA has announced its ultra-rushed plan for a drone registry -- they desperately wanted to get this on the books before Christmas. It's worse than even the most vocal critics had anticipated:

https://www.faa.gov/uas/registration/faqs/

Over the next 60 days, the FAA is requiring that anyone who flies drones outdoors (other than very small toy drones) register on a web site (in theory, paper-based filing is possible, but the FAA obviously anticipates that most registrations will be made over the web).

The FAA is also demanding your credit card number before you fly. In fact, they demand $5 via credit card every three years. Forever.

Even though the signup fee is waived for the first 30 days after Dec. 21 this year, the government still requires your credit card number for "verification" purposes. And because, hey, government agencies can never have enough credit card numbers on file.

No need to worry though, right? All that required personal information -- name, physical/mailing address, credit card data, email address, etc. -- will be in the warm embrace of a "third party contractor," who will no doubt take really good care of it, in keeping with the abysmal security and privacy practices of the federal government.

The black hat hackers are already salivating over this one. Home addresses! Credit cards! "Hey comrade, do they ship Porsches to Moscow?"

Speaking of privacy, the FAA discussion of the privacy practices for this massive new database of personal information can best be described as exceedingly vague. Clearly it will be searchable on demand by various entities. Who exactly? For what purposes? What can they then do with the information obtained? Who the hell knows?

My guess is that illicit credentials for accessing aspects of this database will be floating around the Net faster than you can say "Danger, Will Robinson!"

The FAA admits that "bad actors" -- you know, the "drone terrorists" we keep being warned about, or just irresponsible drone pilots -- aren't likely to register accurately, or to register at all. But hey, $5 and a bundle of personal info from all the honest drone owners every three years is a pretty good haul anyway. And it makes the government look like it's doing something about drone safety, when in reality this plan isn't likely to prevent a single drone accident (or attack).

This is government operating in its maximal disingenuous mode -- creating massive new problems instead of presenting realistic proposals for solving genuine existing problems.

But we expected no less.

Oh, there is some good news. The FAA says you don't have to register your Frisbee.

Now isn't that nice?

Be seeing you.

--Lauren--
I have consulted to Google, but I am not currently doing so.
My opinions expressed here are mine alone.

Posted by Lauren at 10:17 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein


December 13, 2015

Why Governments Lie About Encryption Backdoors

Despite a lack of firm evidence to suggest that the terrorist attackers in Paris, in San Bernardino, or at the Planned Parenthood center in Colorado used strong (or perhaps any) encryption to plan their killing sprees, government authorities around the planet -- true to the long-standing predictions of myself and others that terrorist attacks would be exploited in this manner -- are once again attempting to leverage these horrific events into arguments for requiring "backdoor" government access to the encryption systems that increasingly protect ordinary people everywhere.

This comes despite the virtual unanimity among reputable computer scientists and other encryption experts that such government "master key" access mechanisms would fundamentally weaken the encryption systems that protect our financial data and ever more aspects of our personal lives, exposing us all to exploits both through mistakes and through purposeful abuse -- by governments themselves and by outside attackers on our data.

It's difficult -- one might say laughable -- to take many of these government arguments seriously even in the first place, given the gross incompetence demonstrated by the U.S. government in breaches that exposed millions of citizens' personal information and vast quantities of NSA secrets -- and with similar events occurring around the world at the hands of other governments.

But there are smart people in government too, who fully understand the technical realities of modern strong encryption systems and how backdoors would catastrophically weaken them.

So why do they continue to argue for these backdoor mechanisms, now more loudly than ever?

The answer appears to be that they're lying to us.

Or if lying seems like too strong a word, we could alternatively say they're being "incredibly disingenuous" in their arguments.

You don't need to be a computer scientist to follow the logic of how we reach this unfortunate and frankly disheartening determination regarding governments' invocation of terrorism as an excuse for demanding crypto backdoors for authorities' use.

We start with a fundamental fact.

The techniques of strong, uncrackable crypto are well known. The encryption genies have long since left their bottles. They will not return to them, no matter how much governments may plead, cajole, or threaten.

In fact, the first theoretically unbreakable crypto mechanisms reach back at least as far as the 19th century.

But these systems were only as good as the skill and discipline of their operators, and errors in key management and routine usage could create exploitable and crackable weaknesses -- as they did in the case of the German-used "Enigma" system during World War II, for example.
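The classic example here is the one-time pad, which is information-theoretically unbreakable -- and trivial to express in a few lines of Python -- yet utterly dependent on the pad being truly random, kept secret, and never reused, which is exactly where human operators historically slipped up:

    import secrets

    def otp_encrypt(message: bytes):
        """One-time pad: XOR the message with a random pad of equal length."""
        pad = secrets.token_bytes(len(message))
        return bytes(m ^ p for m, p in zip(message, pad)), pad

    def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
        """Recover the message by XORing the ciphertext with the same pad."""
        return bytes(c ^ p for c, p in zip(ciphertext, pad))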

The rise of modern computing and communications technologies -- desktops, smartphones, and all the rest -- has allowed the "automation" of new, powerful encryption systems in ways that make them quite secure even in the hands of amateurs. And as black hat hacking exploits have subverted the personal data of millions of people, major Web and other firms have reacted by deploying ever more powerful crypto foundations to help protect the environments that we all depend upon.
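To see just how far that automation has come, note that a few lines against a mainstream open source library -- here, the widely used Python "cryptography" package -- are all it takes to get authenticated symmetric encryption, with no cryptographic expertise demanded of the user:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # 32 random bytes, base64-encoded
    box = Fernet(key)
    token = box.encrypt(b"meet at the usual place")
    assert box.decrypt(token) == b"meet at the usual place"

And comparable libraries exist for every popular language and platform, which is precisely the point: this capability is already in everyone's hands.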

Let's be very, very clear about this. The terrorist groups that governments consistently claim are the most dangerous to us -- al-Qaeda, ISIL (aka ISIS, IS, Islamic State, or Daesh), the less talked about but at least equally dangerous domestic white supremacist groups, and others -- all have access to strong encryption systems. These apps are not under the control of the Web firms that backdoor proponents attempt to frame as somehow being "enemies" of law enforcement -- due to these firms' enormously justifiable reluctance to fundamentally weaken their systems with backdoors that would expose us all to data hacking attacks.

What's more -- and you can take this to the bank -- ISIL, et al. are extraordinarily unlikely to comply with requests from governments to "Please put backdoors into your homegrown strong crypto apps for us? Pretty please with sugar on it?"

Governments know this of course.

So why do they keep insisting publicly that crypto backdoors are critical to protect us from such groups, when they know that isn't true?

Because they're lying -- er, being disingenuous with us.

They know that the smart, major terrorist groups will never use systems with government-mandated backdoors for their important communications; they'll continue to use strong systems developed in and/or distributed by countries without such government mandates, or their own strong, self-designed apps.

So it seems clear that the real reason for the government push for encryption backdoors is not to catch the most dangerous terrorists they're constantly talking about, but rather to sweep up various sorts of "low-hanging fruit."

Inept would-be low-level terrorists. Drug dealers. Prostitution rings. Free speech advocates and other political dissidents. You know the types.

That is, just about everybody EXCEPT the most dangerous terrorist groups -- the ones that wouldn't go near backdoored encryption systems with a ten-foot pole, yet are the very groups governments loudly claim backdoor systems are required to fight.

Now, there's certainly a discussion to be had over whether or not massively weakening crypto with backdoors is a reasonable tradeoff to try to catch some of those various, much lower-level categories of offenders. But given the enormous damage already done to so many people by attacks on their personal information through weak or improperly implemented encryption systems -- including by governments themselves -- that seems like an immensely difficult argument to rationally make.

So our logical analysis leads us inevitably to a pair of apparently indisputable facts.

Encryption systems weakened by mandated backdoors would not be effective in fighting the terrorists that governments invoke as their reason for wanting those backdoors in the first place.

And encryption weakened by mandated backdoors would put all of us -- the ordinary folks around the planet who increasingly depend upon encrypted data and communications systems to protect the most intimate aspects of our personal lives -- at an enormous risk of exposure from data breaches and associated online and even resulting physical attacks, including via exploitation from foreign governments and terrorist groups themselves.

Encryption backdoors are a gleeful win-win for terrorists and a horrific lose-lose for you, me, our families, our friends, and for other law-abiding persons everywhere. Backdoors would result in the worst of the bad guys having strong protections for their data, and the rest of us being hung out to dry.

It's time to permanently close and lock the door on encryption backdoors, and throw away the key.

No pun intended, of course.

Be seeing you.

--Lauren--
I have consulted to Google, but I am not currently doing so.
My opinions expressed here are mine alone.

Posted by Lauren at 10:35 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein