February 06, 2013

Blame Police for Mugshot Extortion, Not Google

In an impassioned opinion posting today, Jonathan Hochman argues that Google should take actions to effectively ostracize and censor the abominable "mugshot extortion" sites that have been popping up around the Web -- sites that demand sometimes large fees to "remove" arrested persons' copied mugshot photos, even from persons never charged with or found guilty of any crimes.

His argument that these photos showing up in search results unfairly damage the reputations of many innocent individuals is valid.

However, his suggested agents for action -- search engines such as Google -- are not the appropriate focus in this very unfortunate situation.

We see again and again how tempting it is to try to "blame the messenger" -- such as search results -- when indexed websites behave badly. But the reality is clear: even if such search results are removed (and the related slippery slope censorship problem can be enormous), the actual sites in question still exist, and their materials will find other ways to propagate around the Web.

The detailed calculus involved is not necessarily identical in the case of ad policy standards as compared with organic search results themselves, but the foundational issues are much the same -- and it's clear who should be held responsible, and that's not Google.

In fact, the real culprits are of course those mugshot sites themselves, in league with their enabling partners in these extortive activities -- law enforcement agencies throughout the United States.

And make no mistake about it -- law enforcement could stop most of this mess in a heartbeat if they really wanted to.

The underlying problem is the now common police practice of publishing -- and in most cases now that means placing online for easy archival and copying -- the mugshots and identifying data for persons merely arrested even for the most minor of offenses -- and often including people for whom charges are quickly dropped (and the persons released), or later found innocent as well.

In the past, when common practice was only to release such photos in cases of serious crimes, and when these images might only appear in a printed newspaper for a day or two, the situation was much less ripe for abuse.

But since going online, police departments are increasingly just dumping all manner of arrest photos and data onto the Net, with no regard for the potentially devastating impact on innocent persons' lives going forward.

We really should not be surprised by this turn of events. In many ways it's an outgrowth of another atrocious and unfair (and very common) police practice, the parading ("perp walk") of humiliated, shackled prisoners -- often not yet even tried for any crime, much less found guilty -- in front of the media, both to try to promote departmental efficiency and to poison any upcoming jury pools to be predisposed against the defendants.

The practice of mass dumping of mugshots and associated data onto the Internet is from the same mindset. "Look how many people we arrested last night! Look at these bad people we took off the streets! Gawd, we're great!"

Both of these abusive police practices are explicitly illegal in many countries.

As for the innocents in those mugshots -- the cops' actions show that they essentially could not care less.

There are a couple of ways to usefully attack this problem. To the extent that the mugshot extortion racket is technically legal, laws to change this state of affairs should be considered as soon as possible. But this could be tricky from a First Amendment standpoint if law enforcement keeps throwing the images and data online publicly.

This suggests another approach.

Law enforcement's rapid release of arrest photos and associated data should be halted, at the very least, for all but the most serious of crimes. Images and data related to accusations of minor crimes should not be released at all -- or at minimum, only after sufficient time has elapsed to ensure that only arrests for which charges have actually been filed, and not rapidly dropped, are made available.

If you're arrested and you're released after charges are relatively quickly dropped or no formal charges are made, there's no valid reason for your arrest photo and data to be placed online to be abused in the first place.

But even if law enforcement later removed innocent parties' photos and data, what of the private mugshot extortion sites that mirror them?

While I am unenthusiastic in general about using copyright law for Internet takedowns, this may be a case where it can be of some actual help. If these innocent parties' arrest images and data were copyrighted by law enforcement, and law enforcement were willing to take actions against the private mugshot sites, this could provide a legal basis not only to try to force removal of individual photos from those sites, but to undermine their evil business models entirely.

If law enforcement is unwilling to take such actions directly, they should be forced to assign the photo/data copyrights to the individual innocent persons arrested, so that these persons could then have some leverage for taking legal steps against these mugshot firms directly, perhaps on a class action basis.

Ultimately, this whole nightmare lands squarely at the feet of law enforcement, which leverages arrest photos of innocent people into political points, no matter who gets hurt in the process. If those photos and associated data were properly limited by the police and other officials in the first place, this entire mess would likely not exist at all.


Posted by Lauren at 12:23 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein

February 04, 2013

Red Cat, Green Chair, Blue Square: A Security Experiment

Over the weekend I ran a quickie little "security experiment" on my public Google+ feed. Since I purposely kept the underlying rationale opaque, a lot of folks have been asking what the blazes I was up to. So rather than contacting everyone individually (both those who participated -- thanks! -- and those who just saw the experiment zip by their streams), here's the scoop, such as it is.

We all realize -- or should -- that conventional passwords are rapidly entering the "end days" of their usefulness. A chain of mass password security breaches at major sites, not to mention the constant buzz of individuals who suffer password compromises through phishing and other attacks, obviously points to fundamental flaws in most existing password regimes.

But getting out from under password systems is a serious challenge. Site access control can be integrally linked with extremely difficult and complex foundational identity management issues, and these rapidly descend into a complex mess of technology intertwined with law enforcement and political machinations.

Some attempts at "solving" this situation could actually make matters far worse. For example, I am extremely skeptical of a current US federal government identity project -- entangled with Homeland Security and intelligence agencies -- that I feel could be subject to serious abuse both by private parties and government itself.

But even as we work toward acceptable identity solutions (which must also protect pseudonymous and anonymous access paradigms in appropriate circumstances), we need some shorter term methods to improve on the current password status quo as well.

One of these is so-called multiple factor (e.g. two-factor) authentication systems, which use a password in conjunction with changing numeric or other codes tied to particular user access devices and/or applications. These codes can have varying expiration rates, and can be generated and deployed via portable hardware, software programs, smartphone apps, telephone calls, paper printouts, or other methods.

The basic idea is that unless you know the password and also the currently valid authentication code -- particularly on a device or via a connection that you haven't used previously -- you are forbidden system access. There are numerous variations on this theme, including purely hardware-based constantly changing password systems, though even these have not always proven invulnerable to external attacks. Still, they're better than a simple password in the vast majority of cases.
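To make the "constantly changing code" idea concrete, here is a minimal sketch of the kind of time-based code generation such systems typically use (an RFC 6238-style TOTP). The shared secret shown is a hypothetical example value, not any real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238 style).

    The server and the user's device share the secret; both derive the
    same short-lived code from the current time window, so a stolen
    password alone is not enough to log in.
    """
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # current time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret; a real deployment provisions this per user.
print(totp("JBSWY3DPEHPK3PXP"))
```

The code expires with each time window (30 seconds here), which is exactly the "varying expiration rates" trade-off described above.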

Google has long offered optional two-factor authentication for most of their user accounts. More firms have been making this option available as well.

I've successfully introduced quite a few people to various optional two-factor authentication systems. I have been less successful at getting them to stay with such systems, however.

As the number of user devices and online apps increases, and the authentication code expiration times shorten, the hassle factor involved with re-authentication begins to notably climb, often to a level where many users simply don't want to deal with it any more, and disable it if possible -- returning to simple and vulnerable password access control.

It would be great if we could solve our fundamental access and identity issues related to the Internet. And we'd all be safer for now if everyone was using multiple-factor authentication.

But I was curious to see if any sort of middle ground might also exist between conventional passwords and typical multiple-factor access.

While most multiple-factor systems use some sort of "external" mechanism to generate password code sequences, there is another way to generate a sort of additional factor as well.

When you think about it, an advantage that the legitimate user of an access account has over a remote attacker is that in the vast majority of cases the legit user has previously been logged into the account, and the attacker has not.

So is there a way to leverage this fact to provide a bit more than standard password security?

Yes, and some of these are already in use. Typical "security questions" sometimes pushed at users may arguably fall into this category. First pet's name. Grandmother's name. First school. Or create your own question ...

This technique has value, but creates problems as well. Most people feel compelled to answer these questions honestly (or else, they perhaps reason, they'll forget the falsified answers), and there have been many cases where typical questions have been compromised across systems and in conjunction with other information sources.

Ideally, you want any additional "security question data" to be system generated, memorable to users, and unique from system to system, so that the compromise of a password (given the unfortunately common practice of people using the same password on multiple systems) may still be limited in terms of resulting effective authentication exposure.
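A sketch of what "system generated and unique from system to system" could look like: each system independently assigns a random color/object pair to the account at enrollment, so a compromise on one system reveals nothing about the pair used elsewhere. The word lists and storage here are hypothetical illustrations, not any real system's design:

```python
import secrets

# Hypothetical vocabularies; a real system would use larger, curated lists
# of visually distinct, easily named images.
COLORS = ["red", "green", "blue", "yellow", "purple"]
OBJECTS = ["cat", "chair", "square", "boat", "lamp"]

def assign_security_image(user_db: dict, username: str) -> tuple:
    """Assign a system-generated color/object pair to a user at enrollment.

    Because each system picks its own pair at random, the pairs are
    independent across systems even if the user reuses a password.
    """
    pair = (secrets.choice(COLORS), secrets.choice(OBJECTS))
    user_db[username] = pair
    return pair

db = {}
print(assign_security_image(db, "alice"))  # prints the randomly assigned pair
```

The key property is that the user never chooses (or truthfully answers) anything -- the system generates the pair, and the user merely has to recognize it later.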

And this finally gets us to my simple little weekend experiment.

On my Google+ stream, I first sent out -- without explanation other than labeling them as security images 1, 2, and 3 -- simple graphics of a green chair, a red cat, and a blue square. I disabled comments on these postings to discourage public speculation.

A bit later in the day, I sent out three screenshots of my Google+ home page, each with one of these small images superimposed in an otherwise empty area of the page, and now textually labeled beneath each graphic: GREEN CHAIR, RED CAT, BLUE SQUARE. Again, comments were disabled.

I refused to substantively respond to questions regarding what this was all about.

The next day -- yesterday -- I sent out a note asking anyone who had seen those images to please privately let me know what they remembered of those color/object pairs, and I asked for their honesty in not looking back on the stream.

I've gotten a pile of responses back, and they're still coming in. They've provided some really fascinating insight into what people remembered, what they've confused, and how these test images and labels interact in viewers' minds.

This was purposely made difficult. Not only did I send out multiple test pairs without any genuine explanation, I never even suggested that there was any reason to bother remembering them at all.

By now you've probably figured out the underlying purpose of this experiment.

I was curious as to how memorable these sorts of labeled images would be under obscure circumstances, toward analysis of their possible usefulness as a routine additional login access security factor.

For example, if a system (when you're logged in) routinely displayed a small labeled image of a red cat, and if when trying to login from an unfamiliar location you were asked to input your security image ("red cat") in addition to providing your password, would you remember the image? Could something like this be used as a default mechanism to provide some stopgap security beyond passwords for persons unwilling or unable to use true multiple-factor authentication?
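The login flow described above -- password always required, the security-image label demanded only from an unfamiliar location or device -- can be sketched as follows. All names and the in-memory "database" are hypothetical; this illustrates the idea, not any deployed mechanism:

```python
def login(user_db, known_devices, username, password_ok, device_id,
          typed_label=None):
    """Gate access with a stopgap second factor.

    A correct password suffices from a familiar device; from an
    unfamiliar one, the user must also type the security-image label
    (e.g. "red cat") routinely shown while logged in.
    """
    if not password_ok:
        return False
    devices = known_devices.setdefault(username, set())
    if device_id in devices:
        return True  # familiar device: password alone suffices
    expected = " ".join(user_db[username])  # stored pair -> "red cat"
    if typed_label is not None and typed_label.strip().lower() == expected:
        devices.add(device_id)  # remember this device for next time
        return True
    return False

user_db = {"alice": ("red", "cat")}
known_devices = {"alice": {"laptop-1"}}

print(login(user_db, known_devices, "alice", True, "laptop-1"))              # True
print(login(user_db, known_devices, "alice", True, "new-phone"))             # False
print(login(user_db, known_devices, "alice", True, "new-phone", "Red Cat"))  # True
```

Note what this does and does not buy: a remote attacker with only a stolen password fails the unfamiliar-device challenge, but anyone who has watched the user's logged-in session could learn the image -- which is why this is a stopgap beyond passwords, not a substitute for true multiple-factor authentication.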

It's clear that a single simple image can be quite memorable, but would users tend to ignore (and forget) them if they're routinely shown, and would confusion result between different images shown to users on different systems? How much additional security would such a system provide from external password attacks or compromises, particularly in shared password situations?

I can't answer these questions yet. Looking more deeply at these issues was why I conducted this experiment. But the results so far certainly look interesting to say the least.

So that's the story. Thanks again to everyone who participated or simply put up with the strangeness that passed through my Google+ stream over the weekend.

And remember -- the green chairs, the blue squares, and especially the red cats are on our side in the security battles!

Take care, all.


Posted by Lauren at 10:22 AM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein

February 01, 2013

Google, France, and the Extortion of the Internet

Recently, in How France Wants Us All to Pay Through the Nose for a Broken Internet, I expressed concerns over threats by the French government to financially sanction Google (and by extension, other Internet firms), in an effort to support increasingly obsolescent publishing models -- and in a manner that if widely adopted could literally spell the end of the World Wide Web and its open public linking model as we know it, to the severe detriment of the global Internet user community.

Today comes word that Google and France have agreed to Google's creation of a 60 million euro "Digital Publishing Innovation Fund" (and reportedly some ad-related revenue associated changes) to apparently settle -- for the moment -- France's demands, and (in theory, at least) to help transition French publishers toward more sustainable 21st century models.

This decision can be reasonably viewed as a short-term action (ending the current conflict with France) with laudable longer-term goals (helping French publishers move toward a more sustainable regime).

Yet while we can agree that the short-term benefits of this agreement are fairly clear, I am extremely dubious about its long-term advisability in terms of its impacts on Google, Google's users, and on the Internet itself.

The problem's scope should be obvious even to the most casual observer of history.

Whether we call it Tribute, Danegeld, or just plain blackmail and extortion payments, there is little evidence to suggest that "paying off" a party making unreasonable demands will do much more than quiet them for the moment, and they'll almost inevitably be back for more. And more. And more.

Even worse, caving in such situations signals other parties that you may be susceptible to their making the same (or even more outrageous) demands, and this mindset can easily spread from attacking deep-pocketed firms to decimating much smaller companies, organizations, or even individuals.

Let's be very clear. France's complaints regarding Google related to activities that are absolutely part and parcel of the fundamental and fully expected nature of the open Internet when dealing with publicly accessible Web sites, and pages not blocked by paywalls or limited by robots.txt directives.

France's success at obtaining financial and other concessions from Google associated with ordinary search and linking activities sends a loud, clear, and potentially disastrous message around the planet, a message that could doom the open Internet and Web that we've worked so long and hard to create.

Because if France can do this with Google, what's to stop France from the same modus operandi with other firms and sites -- or for other countries and entities to follow a similar course?

True, it's the largest firms and sites that are in the bullseye at the moment, but there is little reason to assume that the cancer of trying to extract fees from searching and linking of public sites won't spread widely down the food chain, in manners largely oblivious to whether or not any associated revenue at all is derived by the targeted sites and site owners.

It could be argued that most sites could simply refuse to pay such fees, and instead remove all links and search results relating to the parties demanding public website pay-to-play tribute fees.

In the long run though, this will destroy the open, public Web just as effectively, as connectivity and information exchange suffer a death of a thousand, a million, a billion cuts.

Back in early 2006, faced with Chinese government blocking, Google entered into an ill-fated agreement to provide censored search results to Chinese users. At the time, Google hoped that this would ultimately lead to more information for the Chinese people. After all, being able to at least get most search results would be better than getting no Google search results at all!

But as some observers predicted at the time, Chinese officials took this well-meaning compromise by Google as a signal to make ever escalating demands for more censorship and more control over Google's activities in China, ultimately leading to Google's termination of the agreement and withdrawal from a major scope of China-related activities.

While the situation with Google and France is obviously not identical to the Chinese saga, I am very concerned about seemingly similar underlying dynamics, with the potential to be widely damaging to the Internet and its users.

We must endeavor to resist government demands that effectively may hold the open Internet hostage. We must avoid whenever possible paying what amounts to extortion demands or watching the wondrous connectivity of the Web vanish link by link into walled gardens of greed.

I definitely do understand Google's dilemmas when faced with government demands of these sorts. And Google of course is quite rightly free to resolve these issues in whatever manners the involved parties feel are appropriate.

Increasingly, governments hunger to exploit and control the Web, no matter what the costs to freedom of information, open communications, and so much else that has made the Internet a wonder of the world. Unless we stand firm for what is right, we are all likely to be on their menu. All of us.


Posted by Lauren at 02:42 PM | Permalink
Twitter: @laurenweinstein
Google+: Lauren Weinstein