Was Facebook Correct to Block Video During the Fatal Korryn Gaines Confrontation?

Many persons have been sending me materials relating to the death last week of 23-year-old Korryn Gaines during a violent confrontation with police (who were serving a warrant) at her Baltimore-area home. Of particular note in these messages has been Facebook’s decision to temporarily suspend her Facebook account, at police request, about seven hours into the ongoing standoff (her Instagram account was temporarily suspended as well).

Gaines had been recording videos of the confrontation and posting them as the standoff continued. Far more troubling was the behavior of many of her followers, who — in response to those videos — were apparently urging her not to comply with police and even suggesting aggressive actions against them.

Sometime after the accounts were suspended, police shot and killed Gaines, who was herself reportedly threatening police with a shotgun. Her 5-year-old child was also shot but is reportedly recovering.

The main reason I haven’t commented on this case publicly to date is that, frankly, I’ve been thinking it over and hadn’t reached any immediate conclusions.

One way I try to analyze complicated Internet-related issues is to see if I can think of parallels in the “non-Internet” world that might shed some light on the matter.

Such parallels do exist in this case, and suggest that the most problematic aspect of the technology-related portion of this tragedy wasn’t the videos being posted per se, but rather the feedback Gaines was receiving from her followers in real time.

If we think about this situation in a non-Internet context — an angry confrontation, a suicidal person, or other similar scenarios — law enforcement would normally attempt to clear boisterous onlookers (“Go ahead, jump!” — “Shoot the pigs!”) from the scene, so that negotiations (in the case of Gaines, we’re talking more than seven hours) could proceed with some semblance of calm and without third parties attempting to escalate the situation for their own sordid jollies.

By these analogies, the frustrated police, in requesting the account suspensions, were doing the social media equivalent of getting the yelling crowd away from the negotiation scene (which, we must note, also has the effect of removing potential witnesses from the scene).

In this particular instance I feel that the actions of the police and of Facebook/Instagram regarding her social media accounts were, on balance, probably justified, but that’s not the end of the story by any means.

We need to conceptually separate the videos themselves (broadcast live over social media, or posted in real time) from the live responses and comments that viewers of those videos send back to the person in the confrontation itself, though this area is also very complicated.

For example, we’ve already seen cases of persons streaming live Facebook video to broadcast a suicide, and in another instance a rape. In such circumstances, it can certainly be argued that the videos alone are egregious enough to warrant blocking.

But it’s the instant feedback aspect of comments and chat dialogues — typically associated with live or posted videos — that seems the most problematic in ongoing confrontations, in the same manner as the crowd screaming for blood outside a physical building.

This all suggests to me that society, law enforcement, and the social media firms themselves would benefit in the long run from a more finely grained set of tools to deal with these kinds of events.

We can start with the given that cutting off a person’s social media accounts at the request of law enforcement should always be a last resort, used only when absolutely required — not a first-order default decision.

But when the decision is made to take such actions, there may be many instances where simply cutting off the feedback to the user, rather than shutting down the videos and the entire account, is more appropriate — the equivalent of getting the screaming crowd pushed back for a time so that negotiations can proceed with less chaos.

Would the user become angry or upset when they realized that the real-time feedback had ceased? Perhaps, but probably less angry or upset than they’d be if the entire account suddenly went dark.

We’re on the cusp of a vast explosion in the numbers of these kinds of situations in which social media will play important, even crucial roles. Today the policies and tools for dealing with these events appropriately are either too primitive and coarse, or simply don’t really exist at all.

We have a lot of work to do.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

As We Age, Smartphones Don’t Make Us Stupid — They’re Our Saviors

(Original posting date: 16 March 2015)

Throughout human history, pretty much every development or invention that increased our information storage and management capabilities has had its loud and vociferous naysayers.

Around 370 BCE, Plato (speaking through Socrates) was already badmouthing the written word as inherently inferior to in-person verbal dialogue. The printing press, typewriter, telegraph, telephone, and Internet have all been targeted as the presumed bringers of universal intellectual decay.

So it comes as no surprise that when Web search engines appeared on the scene — to organize Internet-based information and make it widely available — much the same tired old attack arguments were trotted out by the usual suspects, in the form of multitudinous “Google Is Making Us Stupid!” articles and similar varieties of vacuous commentaries.

The crux of most arguments against having quick access to information seems to largely parallel the attempts not that many years ago (and in some venues, still continuing) to routinely ban calculators from physics and other similar subject tests, on the grounds that not doing the math by hand was somehow — perhaps in a moral judgment “You’ll go to hell!” kind of sense — horribly cheating.

But unless the test you’re taking is specifically one for mathematical skills, the rote manual calculation process is practically worthless compared with developing the necessary skills to actually analyze a problem and determining appropriate methodologies for reaching correct answers. Even a specific answer itself may often be far less relevant in many contexts than development and analysis of appropriate problem solving processes.

One wonders how many potentially brilliant would-be physicists with wonderful analytic skills were sidelined into other professions simply due to not having a knack for manual math.

With the rise of the mobile Net comes the latest incarnation of this twisted saga, the “Are smartphones making us stupid?” meme. There seems to be a new version of this one somewhere pretty much every few days.

In a very real way the term “smartphone” in this context is being used by detractors largely as a proxy for saying “Portable Google” — as a wireless retread of search engine criticisms.

However, in this case the critics are even farther off the mark than usual, because smartphones not only don’t reduce our intelligence, they can be our saviors as we age.

Physiological studies show that our memory for much specific data usually begins to decline at the ripe old age of — 20. Yeah, pretty depressing. But in contrast, our reasoning and analytic skills can in many cases continue developing throughout our lives without limit, as we integrate ever more experiences into the mix.

And here is where the smartphone (along with the vast information ecosystem that supports it) really becomes something of a technological miracle.

For there on your belt or in your purse is a little box that can act as an almost limitless adjunct to your own memory, to your own brain.

Type on it, talk to it. Ask it questions, note its reminders. Smartphones can provide us with exactly the kind of information that our brains gradually become less adept at recalling past age 20 or so.

To argue that it’s somehow wrong, somehow cheating or unethical or unnatural, to use these devices and their supporting infrastructures in this way, is itself as dumb and stupid as forcing a potentially brilliant future physicist to drop out of school because you wouldn’t let them use a calculator.

Obviously, for smartphones to be most useful at all ages, issues of accessibility become paramount — matters for ground-up consideration, not after-the-fact excuses. Input and output methodologies, font sizes and contrast, all become especially important, since our vision typically begins to decline at the same young age as our memory. These are all relatively straightforward user interface design issues though, given the will to deal with them appropriately.

It would probably be a pretty tough slog to get Plato comfortable with smartphones. On the other hand, he’s quoted as saying: “We can easily forgive a child who is afraid of the dark; the real tragedy of life is when men are afraid of the light.” And especially when it comes to smartphones and the immense value they can bring to us throughout our lives, only a fool would argue with Plato about that.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Confirmed and Unacceptable: Social Security Administration Cutting Off Users Who Can’t Receive Text Messages

UPDATE (14 August 2016): I’m told that SSA has removed the mandatory cell phone text messaging access requirement that was strongly criticized in the original posting below. I appreciate that SSA has now done the right thing in this case. Perhaps in the future they’ll think these things through better ahead of time!

– – –

If you don’t have a cell phone, or some other means to receive SMS text messages (and have them enabled, and know how to deal with them), you won’t be able to access your Social Security Administration “My Social Security” online account starting next month.

The SSA is currently sending out emails announcing that SSA online users MUST receive an SMS text message with a two-factor authentication code to access their accounts starting in August.

UPDATE (29 July 2016): Here is the official SSA announcement.
UPDATE (14 August 2016): SSA has now deleted this referenced announcement page since they have removed the mandatory cell phone text messaging login requirement, as noted in the update at the start of this posting.

According to Congressional testimony in May, SSA “expects” to make other two-factor methods available at some point in the future.

While the “expectation” of additional two-factor options at some unspecified time down the line is interesting, the move to now block users who do not have cell phones, or text-message-capable cell phones, or do not have text messaging enabled, or do not know how to access and read text messages — IS UNACCEPTABLE, especially on such short notice to SSA users.

Two-factor authentication systems are very important, but keep in mind that SSA by definition is dealing mostly with older users who may have only recently become comfortable with online services, and may not make any use of text messaging. Many do not have cell phones or somebody to receive text messages for them. There are also many people living in rural areas where cell phone service simply is not available at all!

Additionally — and ironically — text messaging is considered to be a substandard means of receiving two-factor authentication codes. And — get this, boys and girls — NIST (the USA’s National Institute of Standards and Technology) just a few days ago officially declared that text-messaging-based two-factor authentication should no longer be used at all — it’s simply not safe and secure. The possibility of crooks leveraging this SSA text messaging system with fake messages targeting this particularly vulnerable user population is also very real.

It appears that SSA has really mucked this one up. This isn’t secure two-factor, it’s a three-ring circus. And it’s going to leave many SSA users out in the cold.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

How Some ISPs Could Subvert Your Local Network Security

When most Internet users think about the security and privacy of their communications, they tend to think mainly about the associated practices of the sites they visit on the Net. Rarely do they think much about their ISPs in this regard, even though by definition the ISP has access to the entirety of their communications usage over that ISP (we can assume that in most cases this does not include the ability to read encrypted data such as SSL/TLS traffic, though man-in-the-middle attacks on that secured data are not at all impossible).

But have you ever thought about how the practices of your ISP might affect the security of your local network — and data that (at least ostensibly) never leaves the confines of your local net?

Though best security practices include running your own routers and firewalls (if not even more secure systems using FIDO security keys or other similar advanced technologies), the truth is that most consumer and small business users who run local nets (that is, communications between some number of local machines at their site or sites) depend on the firewalls and security mechanisms configured into ISP-provided modems.

The thing is that you’re often not the only one in control of those modems.

Leased cable or other wireless or wireline data modems typically provide the ability for the ISP to control and configure the modem remotely. Even if you buy an approved modem on your own when that’s permitted, network provisioning and maintenance/support requirements may still give your ISP a great deal of control over the device.

Another truth is that most consumers and organizations tend to run rather lax security (if any at all) behind what they assume to be secure modem firewalls, meaning that if that firewall is breached, their local net is pretty much wide open.

In an ideal world, we could all employ methodologies similar to Google’s excellent BeyondCorp security model, which puts a well-deserved nail in the coffin of firewalls. Unfortunately, this usually isn’t practical for most non-techie consumers.

Fundamentally, the question boils down to this — can your ISP remotely change modem configurations that could give them or third parties inappropriate access to data on your local network?

For example, some ISPs now provide the means for customers to reconfigure the Wi-Fi on their modems via the ISP’s website. In the case of Time Warner Cable (aka Charter, Spectrum, or whatever they’re called this week), their site allows users to view and change Wi-Fi passwords, change or even disable Wi-Fi security completely, and more.

Handy? Yeah. But what happens if TWC’s super-deluxe website gets hacked? Or perhaps law enforcement or intel agencies come around and want to use loopholes in the laws to try to access your local network data without your even knowing about it?

You can see the problem. If your local net has typically lax security, and you don’t have your own firewall downstream of that ISP modem, the modem Wi-Fi security could be disabled remotely, your local network sucked dry late one night, and security restored by the morning. You might not even have a clue that any of this occurred.

How often does this kind of scenario occur in practice? I have no way to know. But it’s clearly possible.

Luckily, this is a case where there are steps you definitely can take to minimize these risks.

First, make sure that your local network is internally as secure as possible. You can’t simply assume that just because a machine is on your local network with a local IP address it is necessarily a friend! (A small sketch following these suggestions shows one simple way to keep an eye on what’s actually connected.)

Second, consider putting your own firewall downstream of the ISP modem. Routers/switches with this capability are plentiful and relatively inexpensive.

Third, consider not using the ISP modem Wi-Fi at all. Those routers I mentioned just above often have their own built-in Wi-Fi that you can configure, making it unnecessary to use the ISP modem Wi-Fi, and permitting a more comprehensive firewall under your complete control.
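
Here, as promised above, is a minimal sketch of the first suggestion in action: a simple Python script (assuming a Linux-style "ping" command and a hypothetical 192.168.1.0/24 subnet; adjust both for your own setup) that sweeps your local network and lists the hosts that respond, so you can compare the results against the devices you actually recognize. It's an illustration of the idea, not a replacement for real network monitoring tools.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SUBNET = "192.168.1."   # Hypothetical subnet; change this to match your own LAN.

    def is_up(host):
        # Send a single ping with a one-second timeout (Linux-style flags);
        # we only care about the exit status, not the output.
        result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def main():
        hosts = [SUBNET + str(i) for i in range(1, 255)]
        # Ping hosts in parallel so the sweep finishes in seconds, not minutes.
        with ThreadPoolExecutor(max_workers=64) as pool:
            for host, up in zip(hosts, pool.map(is_up, hosts)):
                if up:
                    print("responding:", host)

    if __name__ == "__main__":
        main()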

I’m not suggesting that you go into a panic and start ripping Ethernet cables out of the walls or cease using Wi-Fi. But it would be wise to start thinking now about how you can reconfigure your local network for maximal security in a world of expanding network security concerns.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

The Sensible Safeguards Needed Now for Pokémon GO

Unless Pokémon GO turns out to be a relatively short-lived popular phenomenon (and actually even if it is, since PoGo will be but the progenitor of many future augmented reality games and other applications), it appears that the full real-world impacts of the game were not completely considered before launch, leading to a growing collection of alarming situations.

There were signs of some sloppiness from the outset, when it was noted that the PoGo iOS app was asking for far more account permissions than was appropriate. The actual privacy risk in this case was minimal, but the mere fact that the app got out the door this way — given the intense concerns about app permissions generally — suggested a possible lack of due diligence in key respects.

While various of the problematic reports we’ve seen about PoGo can be chalked up to user inattention (plowing a car into a tree, driving off a cliff, etc.), many others cannot be blamed on the users alone, per se.

To note but a sampling, these include PoGo being used to attract players to be robbed, a registered sex offender who was supposed to stay away from children using the game to partner with a young child, and very recently, two players who were shot at by a homeowner when they were prowling a residential neighborhood at 1 AM. An array of other trespass-related occurrences have been noted, including players entering restricted areas at a nuclear power plant.

Of broader impact is the swarming of neighborhoods, parks, and other public places by far larger numbers of people than they were designed for — or that local authorities are prepared for — at all hours of the day and night. There are serious public safety concerns involved.

Such gaming activities become especially inappropriate when they occur at locations that are utterly unsuitable for gaming, like ordinarily quiet and respectful cemeteries and Holocaust museums.

Fans of PoGo enthusiastically declare that it’s a great way to meet new people and get exercise. Perhaps. In some locales at least, it seems that players are mostly driving around in their cars to reach designated targets, but we’ll let that pass for the moment.

One suspicion that’s difficult to shake is that there wasn’t much (if any?) attention given to purging inappropriate locations inherited from PoGo’s ancestor game — Ingress — before deploying them in PoGo. The need for such a purge should have been obvious, given that PoGo could reasonably have been expected to attract far more users than Ingress (as it indeed dramatically has) and would also be far more attractive to children.

Historical side note: Ingress was originally developed at Google (in fact, I was one of its earliest players, I believe while it was still in beta), then spun off to a separate company — Niantic — in which Google holds a major stake.

As I noted above, PoGo is but the beginning of what will certainly be a long line of innovative and important augmented reality mobile apps. And that makes it essential to get the real-world operation of this tech in line with real-world requirements and impacts as quickly as possible — without stifling innovation.

The most important requirement is to give more control to municipalities and persons who are impacted by these applications and their users.

For example, it doesn’t exactly take rocket science to figure out that sending users wandering around quiet residential areas in the middle of the night is a recipe for potentially dangerous (even lethal) confusion and confrontations, or that flooding a small park with thousands of people at once — without prior warning to local authorities — can easily lead to serious problems.

Niantic needs to immediately work toward providing much better mechanisms for involved homeowners, business owners, municipalities, and other associated entities, to request removal of specific locations from the PoGo location database (much as you can request removal of locations from Google Street View currently). And there should be ways to specify PoGo app operation “curfews” for specific locales as well — especially in residential neighborhoods, or areas with special concerns about the safety of late night visitors.

It is also crucial that accessing this kind of request/control system not require use of the PoGo app itself, nor ideally use of the Internet in any way — given that many affected persons may not even have Internet access. 

Obviously, different areas, regions, and countries will have their own individual attitudes and concerns about participation in the PoGo ecosystem, and we can reasonably expect the sorts of location removal and/or PoGo app curfew requests received to vary widely around the globe.

But it is not appropriate for these decisions to be made wholly by Niantic alone. And unless they and we get a handle on the real world impacts of augmented reality apps in short order, you can be sure that politicians — already expressing concerns about this area — will be moving in with their own “control ideas” — that will likely not be of the form that many of us would want, nor that would protect innovation going forward.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

The Coming Government Showdown over Live Video Streaming

Over the last few days, we’ve dramatically seen the force of Internet live video streaming, and the obvious hints of policy battles to come regarding this powerful technology are clearly emerging.

Beyond the tragic images of a man shot to death by police in his car, and then live scenes of a sniper in Dallas who ultimately killed five officers, we’ve already seen other ugly shadows of what might become the new normal, including a rape and a suicide — both streamed by the perpetrators themselves for maximal publicity.

And yes, this is only the beginning. For while it has been possible to stream live video from portable devices for years, only now has the concept reached critical mass, an “inflection” point where it is likely to have enormous impact on society at large.

While most of the attention to date has been on Facebook’s video streaming app, Google and other firms also have live streaming services, and that number can only be expected to grow for the foreseeable future, around the world.

Notably, these streaming systems typically include the means for viewers to comment live back to the video originators during the streams themselves, to do everything from expressing admiration or condemnation, to “simply” urging them on.

The positive public interest and probative value in the streaming of many public events is fairly obvious in most cases.

But even in the public space the associated dilemmas are vast.

Unfortunately, large audiences can bring out the worst in some people, and there is an enormous range of potential abuse for this technology in an ecosystem of unfiltered live streaming — in terms of risk-taking behavior to please your streaming audience, encouraging violence (either explicitly or implicitly), privacy attacks, and other abuses.

Even when no harm is actually intended, the mere fact of a live streamed dramatic event with a significant viewership will in some situations lead to potentially dangerous “flash crowds” as nearby viewers rush to participate in person.

And while these risks exist aplenty even with streaming from public places, the potential problems likely multiply by orders of magnitude when we consider live video streaming from private homes or businesses, perhaps by surreptitious means.

The bottom line is that live video streaming is a quintessentially dual-use tool. It can be used for enormous good that could greatly enhance public knowledge and participatory democracy. It can also provide a morbid audience and incentive for hideous monsters (both individuals and groups) whose real-world streamed depravities could make fictional “torture porn” films pale by comparison.

So we find ourselves facing a familiar dilemma. If live video streaming firms don’t do the hard policy work required to provide reasonable controls over and filtering of this content, we can be sure that governments around the world — both of their own volition and pressured by their citizens — will move forcefully to enact control and censorship regimes to meet their perceived agendas.

And history tells us that once that kind of censorship takes hold, it’s extremely difficult to stop from spreading in all directions.

This makes it more imperative than ever that we move forward toward establishing best practices and policies to harness this uber-powerful technology in a reasonable manner, before governments move in with possibly knee-jerk “solutions” that will almost certainly make matters worse, not better.

I don’t claim to have any magic wands available for addressing these complicated issues, though my gut feeling is that we should be able to harness the enormous crowdsourcing power of the Net to rapidly categorize streams in real time and trigger filtering or other actions as appropriate.

But just sitting on our hands about this is not a viable option. That is, unless our goal is to see an incredibly useful technology being branded as “the enemy” just as it’s really beginning to flower.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

How Ancient Monopolies Keep You from Getting Decent Internet Service

Many of us tend to assume that here in the U.S. we have the most advanced technologies on the planet. So it may be startling to learn that by global Internet standards, numerous experts consider us to be living in something of a Stone Age Internet nation.

The reality is stark. Many countries in the world pay far less for their Internet services than we do, and get much faster and more reliable services in the bargain. While many countries have set a national goal of fiber optics directly connecting every home and business, here in the United States phone companies still are arguing that snail’s pace Net connections should qualify as broadband.

Even when relatively “high” Internet access speeds are available via cable, they tend to be mainly in the downstream direction. For example, I have the highest cable modem speed available in my location here in L.A., which is 300 Mb/s downstream — but only 20 Mb/s up. Obviously, high upstream speeds are important for a range of applications (not just limited to obvious ones like remote data backup). Cable modem speeds are getting better, but the fundamentals of cable system technology continue to dampen upstream speeds.

You might reasonably ask how so many other countries have been able to get much better Internet access to their residents, compared with us here in the country that invented the Internet.

The detailed reasons are complicated technically, legally, and very much politically, but the bottom line is that the Internet ecosystem here in the U.S. has long been rigged against effective competition, a direct outgrowth of early telecommunications monopoly environments.

One example of this may be visible right outside your window.

Have you ever wondered who owns those “telephone” poles throughout your community, or the underground cables and conduits in some towns?

The short answer is: What a mess!

Poles may be owned by power companies, by phone companies, by cable companies, or in some cases by communities themselves — or various combinations thereof. 

The land that these poles are planted in typically is in the form of an “easement” — a specific area of land still owned by the main property owner, but with access and other rights granted by government to various utilities and other firms. It works basically the same way with underground cables and conduits.

As you might imagine, easements can be the subject of complex and varied legal entanglements and disputes, even though most are granted when housing or commercial developments are being initially planned.

But for the sake of our discussion here right now, the most interesting aspect of easements is in older communities (for example, areas built up prior to the AT&T divestiture of 1984).

History matters in this context (as in so many other aspects of life) because when these easements were granted to communications companies back in the day, they were usually “monopoly” grants. That is, while we would probably agree even now that assuming a single water and/or power company would be logical, those historic easements usually assumed only a single communications (phone) company, or later the original incumbent phone company plus a single cable TV company.

This is incredibly relevant today, because the entities controlling these easements, and that usually own the poles, cables, and conduits that everyone must use to provide landline communications services to homes or businesses, are quite powerfully in the catbird seat.

Here’s why.

In many countries, governments have national Internet plans that provide for robust competition in various ways. But here in the U.S., if you want to bring — for example — high speed fiber Internet to a community, you often have to deal with the incumbent telecom or other utility firms to gain access to those poles and/or underground facilities.

And those firms — like AT&T, Verizon, and the rest of the gang you likely are familiar with — have very little incentive to be particularly cooperative with new competitors bringing in far better services. In fact, the old guard firms have frequently pushed through laws — and/or filed lawsuits — aimed at preventing communities from encouraging or even permitting such competition.

So we find it not uncommon for the incumbents to demand exorbitant “pole attachment” or other access fees, or to delay and obfuscate as long as possible.

It’s important to remember that these incumbent firms typically only control these access assets because of those original monopoly grants from many decades ago — giving them exclusivity that is nonsensical and unfair so many years later. But they’ve become experts at milking every last possible dollar out of the jolly old monopoly days, even now!

If this sounds bad, it gets worse for the captive residents of many apartment buildings and commercial developments.

Building owners and landlords frequently view Internet access as a massive personal profit center, and engage in restrictive shenanigans — some of which can be viewed as illegal — to strike lucrative, and yes, monopoly deals with telecom firms, demanding sweetheart payments for access to their tenants, and treating those tenants as if they were medieval serfs. For more on this particularly seamy side of Internet access, please see Susan Crawford’s excellent recent article: “Dear Landlord: Don’t Rip Me Off When it Comes To Internet Access — When building owners get kickbacks from big providers it’s the tenants who lose.”

You might think that this sorry state of affairs would be pretty much obvious to everyone, but in our toxic political environment that would be very far from the truth.

In fact, there are many in Congress who don’t see any consumer problems here at all. Whether or not one chooses to consider these access issues under the “network neutrality” umbrella, many politicians who have long enjoyed the “generosity” of the incumbent telecom firms are lined up to block any attempts to improve the competitive landscape for Internet consumers, thereby condemning us to continued laughingstock status in the eyes of most other countries.

We do have some power though — in the voting booth. These issues tend to have local, state, and often federal components, and we’re unlikely to see significant improvements while lapdog beneficiaries of dominant Big Telecom remain in political control.

Or perhaps you’re satisfied with exorbitant prices and “Flintstones-class” Internet access throughput. Frankly, this far into the 21st century, I strongly believe that we can do much better than having so many of us running at bare-foot, pedal-power Internet speeds.

Yabba dabba doo!

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

A Rare Endorsement: When the Disk Drive Fails

As a matter of policy, I almost never make commercial endorsements. But I’m making an extremely rare exception today, because I feel that this particular firm may be able to save a whole lot of people a whole lot of grief.

Recently, I was called upon to help deal with a disk failure situation that rapidly appeared to be fairly hopeless. The disk in question was in a laptop (Linux ext3 format filesystem), and it had failed suddenly and hard — very hard. None of my usual tricks could revive it.

There were some backups, but all the ones less than a year old turned out to be largely useless due to corruption (I have a few choice words to say about “tar” that I’ll save for another day).

I found myself researching disk recovery firms — an area that I had never had occasion to examine in depth before. I quickly discovered that this category of business is replete with come-ons and abusive gimmicks.

For example, the promoted prices you see for most recovery firms typically bear no resemblance to reality except in the simplest of cases (e.g., when it’s just a matter of using software to undelete “deleted” files from a FAT32 disk).

But if the drive has actually suffered hardware damage (e.g. to the circuitry or platters) the price will typically skyrocket astronomically.

I also quickly discovered that most of the firms claiming they had “local” offices in big cities actually could only do those software-level operations there. Any hardware issues meant shipping the drives to some central lab — meaning more delays.

In the midst of this rather gloomy research I stumbled across a firm called $300 Data Recovery.

Now if you’re like me, you’re a bit wary of firms that promote a price in their name. Hell, think of the expensive rebranding if your prices go up! But I was intrigued by their pricing chart, and by the fact that they were local to me here in L.A.’s San Fernando Valley, just a 20 minute or so shot straight down the 101. I like dealing with local firms when I can, since that means I can show up in person “to chat” if something goes wrong.

It’s a very small firm in a rather strange location for such an operation. There are just a handful of employees — various of my routine email communications were directly with the owner. It’s on the second floor of a strip mall on Ventura Blvd. in Studio City, surrounded by typical strip mall businesses about as logically far away from data recovery as you can imagine.

Yet they apparently do all their work in-house right there, including in their own clean room when necessary. When I took the bad disk over there, I was amused by their collection of horrifically failed open disk drives in the waiting area.

You can read the details of their pricing structure on their site, but the bottom line is that for most common situations, for a single disk already removed from a computer, they charge a flat $300 (for up to 2TB disks) if they can recover the data, otherwise — nothing. If there’s limited partial recovery, you can choose whether or not you want to pay that fee for what they recovered, based on a comprehensive list of recovered files that they email you. If you want priority service to go to the head of the queue and have them start working on a disk immediately, there’s a nonrefundable $50 up front, and an additional $150 if you accept the recovered data. So, in that case, the total is $500 — still a serious bargain. This doesn’t include shipping or the cost of a transfer drive — both quite reasonable and in my case avoided since I drove out to them and provided my own transfer drive for the recovered data.

They also handle larger disks and RAID arrays for additional (but still utterly reasonable) fees. And they happily execute data recovery nondisclosure agreements. By the way, they can target particular files for recovery by name if you wish, and can put crucial recovered files online in a secure location if you need to download them immediately.

In many cases they apparently can recover data in a day or two once starting work on a given disk. The disk I took them had to run in the “cloning” stage for more like five days to recover the maximum amount of data from crash-damaged platters, then another day or so for filesystem reconstruction. They keep you informed by email all through this process and respond virtually immediately to queries.

In the case of this particular disk, they ultimately recovered more than 99% (99.38% notes Mr. Spock) of the data! This doesn’t mean you get 99.38% of all the files back perfectly intact of course — since that remaining 0.62% can be scattered across the filesystem in various ways. They return the data in several different formats — recovered filesystem section with full hierarchy data, orphaned files without names and/or directories, and so on. “Grep” is very useful to locate specific files in those latter cases, of course, though the key files I was tasked to find were quickly located in the filesystem recovery section with their directory structures intact.

In fact, all important files (including some crucial databases) were recovered from that disk, nearly all 100% complete.

I’d call that a success by any measure.

I’ll add in passing that there seem to be some haters of this company out there, including somebody who bothered to build a whole site dedicated to trashing them — claiming they’re incompetent, that they pay people to give them good reviews, and other rants. I don’t know what the authors of that site are going on about — I can only speak to my own experience. When I got the recovered data back, it was noted that a positive review would be appreciated, but I certainly wasn’t offered any money or other compensation for one — nor would I have accepted such in any case. Nobody paid me nuttin’ for this blog posting!

As for their competency, I handed them a trashed, useless disk drive, they handed me back a disk loaded with important recovered data that was needed to fulfill an important mission — and at a completely reasonable price. That’s good enough for me! I know I would take any disk drive of mine to them in similar situations.

So if you’re ever in a tough spot with a failed disk, you might very well want to check these guys out — again, that’s $300 Data Recovery.

When I was getting ready to leave their shop with the recovered data disk, I mentioned to them that I appreciated all their great work and would enthusiastically recommend them — but frankly, I hoped to never have to deal with them myself again in a professional capacity!

It was clear from their smile that they knew exactly what I meant.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Die Passwords! Die!

(Original posting date: 31 May 2013)

In one form or another — verbal, written, typed, semaphored, grunted, and more — passwords broadly defined have been part of our cultures pretty much since the dawn of humans at least. Whether an 18 character mixed-case password replete with unusual symbols, or the limb-twisting motions of a secret handshake, we’ve always needed means for authentication and identity verification, and we’ve long used the concept of a communicable “secret” of some kind to fill this need.

As we plow our way ever deeper into the 21st century, it is notable that most of our Internet and other computer-based systems still depend on the basic password motif for access control. And despite sometimes herculean efforts to keep password-based environments viable, it’s all too clear that we’re rapidly reaching the end of the road for this venerable mechanism.

That this was eventually inevitable has long been clear, but recent events seem to be piling up and pointing at a more rapid degeneration of password security than many observers had anticipated, and this is taking us quickly into the most complex realms of identity and privacy.

Advances in mathematical techniques, parallel processing, and particularly in the computational power available to password crackers (now often using very high speed graphics processing units to do the number crunching) are undermining long held assumptions about the safety of passwords of any given length or complexity, and rendering even hashed password files increasingly vulnerable to successful attacks. If a single configuration error allows such files to fall into the wrong hands, even the use of more advanced password hashing algorithms is no guarantee of protection against the march of computational power and techniques that may decimate them in the future.
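
As a small aside for the technically inclined, here's a rough sketch (in Python, using the standard library's scrypt function) of what one of those "more advanced password hashing algorithms" looks like in practice: a per-password random salt plus a deliberately memory- and CPU-hungry hash, designed to slow down exactly the kind of brute-force hardware described above. The cost parameters shown are illustrative only, not a recommendation for any particular deployment.

    import hashlib, hmac, os

    # Illustrative scrypt cost parameters (memory- and CPU-hard); real deployments
    # should choose values based on current guidance and available server resources.
    SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

    def hash_password(password):
        salt = os.urandom(16)   # A fresh random salt per password defeats precomputed tables.
        digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, **SCRYPT_PARAMS)
        # Constant-time comparison avoids leaking information through timing differences.
        return hmac.compare_digest(candidate, stored_digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("12345", salt, stored))                         # False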

What seems like an almost daily series of high profile password breaches has triggered something of a stampede to finally implement multiple-factor authentication systems of various kinds, which are usually a notch below even more secure systems that use a new password for every login attempt (that is, OTP – One-Time Password systems, which usually depend on a hardware device or smartphone app to generate disposable passwords).
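
For readers curious about what is inside one of those one-time password generators, here's a minimal sketch of a time-based OTP (in the spirit of RFC 6238, the scheme behind many authenticator apps), using only Python's standard library. The shared secret shown is a made-up example; real systems provision a unique secret per user and device and rely on vetted implementations.

    import base64, hashlib, hmac, struct, time

    def totp(secret_base32, interval=30, digits=6):
        # The shared secret is conventionally exchanged as base32 text.
        key = base64.b32decode(secret_base32, casefold=True)
        # The moving factor is simply the current 30-second time step.
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # "dynamic truncation" per the RFC
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example secret for illustration only; never reuse a published secret.
    print(totp("JBSWY3DPEHPK3PXP"))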

As you’d imagine, the ultimate security of what we might call these “enhanced password” environments depends greatly on the quality of their implementations and maintenance. A well designed multiple factor system can do a lot of good, but a poorly built and vulnerable one can give users a false sense of security that is actually even more dangerous than a basic password system alone.

Given all this, it’s understandable that attention has now turned toward more advanced methodologies that — we hope — will be less vulnerable than any typical password-based regimes.

There are numerous issues. Ideally, you don’t want folks routinely using passwords at all in the conventional sense. Even relatively strong passwords become especially problematic when they’re used on multiple systems — a very common practice. The old adage of the weakest link in the chain holds true here as well. And the less said about weak passwords the better (such as “12345” — the kind of password, as noted in Mel Brooks’ film “Spaceballs” — that “an idiot would have on his luggage”) — or worse.

So, much focus now is on “federated” authentication systems, such as OAuth and others.

At first glance, the concept appears simple enough. Rather than logging in separately to every site, you authenticate to a single site that then (with your permission) shares your credentials via “tokens” that represent your desired and permitted access levels. Those other sites never learn your password per se, they only see your tokens, which can be revoked on demand. For example, if you use Google+, you can choose to use your Google+ credentials to access various other cooperating sites. An expanding variety of other similar environments are also in various stages of availability.

This is a significant advance. But if you’re still using simple passwords for access to a federated authentication system, many of the same old vulnerabilities may still be in play. Someone gaining illicit access to your federated identity may then have access to all associated systems. This strongly suggests that when using federated login environments you should always use the strongest currently available practical protections — like multiple-factor authentication.

All that being said, it’s clear that the foreseeable future of authentication will appropriately depend heavily on federated environments of one form or another, so a strong focus there is utterly reasonable.

Given that the point of access to a federated authentication system is so crucial, much work is in progress to eliminate passwords entirely at this level, or to at least associate them with additional physical means of verification.

An obvious approach to this is biometrics — fingerprints, iris scans, and an array of other bodily metrics. However, since biometric identifiers are so associated with law enforcement, cannot be transferred to another individual in cases of emergency, and are unable to be changed if compromised, the biometric approach alone may not be widely acceptable for mass adoption outside of specialized, relatively high-security environments.

Wearable devices may represent a much more acceptable compromise for many more persons. They could be transferred to another individual when necessary (and stolen as well, but means to render them impotent in that circumstance are fairly straightforward).

A plethora of possibilities exist in this realm — electronically enabled watches, bracelets, rings, temporary tattoos, even swallowable pills — to name but a few. Sound like science-fiction? Nope, all of these already exist or are in active development.

Naturally, such methods are useless unless the specific hardware capabilities to receive their authentication signals are also present, when and where you need them, so these devices probably will not be in particularly widespread use in the very short term at least. But it’s certainly possible to visualize them being sold along with a receiver unit that could be plugged into existing equipment. As always, price will be a crucial factor in adoption rates.

Yet while the wearable side of the authentication equation has the coolness factor, the truth is that it’s behind the scenes where the really tough challenges and the most seriously important related policy and engineering questions reside.

No matter the chosen methods of authentication — typed, worn, or swallowed — one of the most challenging areas is how to appropriately design, deploy, and operate the underlying systems. It is incumbent on us to create powerful federated authentication environments in ways that give users trustworthy control over how their identity credentials are managed and shared, what capabilities they wish to provide in specific environments, how these factors interact with complex privacy parameters, and a whole host of associated questions, including how to provide for pseudonymous and anonymous activities where appropriate.

Not only do we need to understand the basic topology of these questions and develop policies that represent reasonable answers, we must actually build and deploy such systems in secure and reliable ways, often at enormous scale by historical standards. It’s a fascinating area, and there is a tremendous amount of thinking and work ongoing toward these goals — but in many ways we’re only just at the beginning. Interesting times.

One thing is pretty much certain, however. Passwords as we’ve traditionally known them are on the way out. They are doomed. The sooner we’re rid of them, the better off we’re all going to be.

Especially if your password is “12345” …

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

No, I did not have a project where I “released birds” over the ARPANET

So I get this email from a researcher asking me about my project to release birds over satellites, and ARPANET, and what he called “UCNET” — and I’ll admit the initial message was puzzling. I don’t recall any significant bird release projects in my past. After a little more back and forth, I figured out what was going on — he had somehow conflated several of my past projects in a very amusing manner. However, this got me thinking about how the historical record will view this stuff. So here’s how it actually untangles:

Apparently this guy started when he stumbled across my name on a page about the old ADVENTURE game. In fact, this is where the “bird release” hook comes from in the first place!

Back in ARPANET days at UCLA, I had a project called “Touch-Tone UNIX” (it was described in a paper I presented at a USENIX conference several hundred years ago). I was pretty proud of the fact that the entire name was composed of what was then *two* AT&T trademarks.

So this used a Votrax speech synth, heavily modified UNIX text-to-speech code, and a touch-tone modem feeding into the system via a DEC PDP-11/70 serial port (running a driver I had also heavily modified). I created an early (probably the first) system for mapping touch-tone presses to full ASCII. Control-G was played as the word “BING!”

The original purpose for all this was an adjunct of my work on newswire scanning systems (I had an “underground” feed of the teletype-speed AP wire coming to me from a “friendly” source over the ARPANET). I collected all this (even at TTY speed the data builds up over time) and processed it in various ways for searching and retrieval, including via voice announcements and automatic phone call notifications (that’s where the voice synth came in, of course). The newswire scanning code eventually attracted the attention of an intelligence agency subcontractor, but nothing ultimately came of that. This is a *different* story than the day I was sitting in the UCLA ARPANET machine room and a colleague suddenly came in and said, “Hey Lauren, get out here, two guys from NSA are looking for you.” Yeah, that’s a different saga.

Anyway, Touch-Tone UNIX was a quite general-purpose platform in its way, and you could run arbitrary UNIX commands over the phone and it would try to speak the results in a reasonable way. You can guess what happened. One of the most popular uses turned out to be playing ADVENTURE over the system. And so I indeed did have graduate students waiting for tables in Westwood eateries, and playing ADVENTURE from payphones — and confusing patrons by suddenly exclaiming loudly, “I released the bird!”

Now what about “UCNET” and satellites? This is a conflation of two of my other projects. One was my UULINK software, which was the *first* non-UNIX UUCP implementation (Wikipedia of course doesn’t even mention it, and I’m not enough of a masochist to try to fix stuff on Wikipedia). It included a UUCP/ARPANET mail gateway and RFC-compliant mail handling and such. At the time it was considered highly specialized but was quite widely used in a variety of commercial, government, and other applications, including some of its code being adopted for specialized “high speed” modem communications modes for UUCP. The original code was written for and ran under DOS, migrated from my earlier experiments in this area on CP/M. This is from the period when my incoming UUCP phone line to my own UULINK system typically got a call every three minutes or so 24/7, from educational sites, DEC, and Bell Labs sites around the country. My published email address around that time looked like:

ARPA: vortex!lauren@LBL-CSAM
UUCP: {decvax, ihnp4, harpo, ucbvax!lbl-csam, randvax}!vortex!lauren
(Yeah, my “vortex” goes way, way back, well before it became among the first 40 dot-com domains issued 30 years ago.)

The satellite angle was my STARGATE project (this was also described and presented in a USENIX paper). STARGATE was an experiment in sending Netnews articles over the vertical blanking interval of SuperStation WTBS (based in Atlanta, but available all over the country by cable). It was a very early effort at data over cable: you’d use a special (too expensive) box that would connect to your TV cable line, tune it to WTBS, and get a continuous Netnews data feed. I installed the data equipment at the WTBS uplink myself. Remarkably, someone who worked at the facility back then very recently (out of the blue!) sent me an old video of the data shack where this was all installed at the base of the big uplink dishes. Obviously this was a one-way system — you’d submit articles via UUCP for example (so my UULINK system was integrated with this, along with other systems) — but since most people read far more than they write, this actually worked pretty well. The cost factors made it impractical in the long run though — those decoder boxes were pricey and at the time cable penetration wasn’t all that great where it needed to be (e.g., inside schools, businesses, etc.). But it was quite interesting and a lot of fun.

That’s the thumbnail of all this anyway. Maybe it’ll help to avoid confusion in the future. Probably not. That’s history for ya’.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

A Proposal for Dealing with Terrorist Videos on the Internet

(Original posting date: 21 December 2015)

As part of the ongoing attempts by politicians around the world to falsely demonize the Internet as a fundamental cause of (or at least a willing partner in) the spread of radical terrorist ideologies, arguments have tended to focus along two parallel tracks.

First is the notorious “We have to do something about evil encryption!” track. This is the dangerously loony “backdoors into encryption for law enforcement and intelligence agencies” argument, which would result in the bad guys having unbreakable crypto, while honest citizens would have their financial and other data made vastly more vulnerable than ever before to attacks by black hat hackers. That this argument is made by governments that have repeatedly proven themselves incapable of protecting citizens’ data in government databases makes this line of “reasoning” all the more laughable. More on this at:

Why Governments Lie About Encryption Backdoors

The other track in play relates to an area where there is much more room for reasoned discussion — the presence on the Net of vast numbers of terrorist-related videos, particularly the ones that directly promote violent attacks and other criminal acts.

Make no mistake about it, there are no “magic wand” solutions to be found for this problem, but perhaps we can move the ball in a positive direction with some serious effort.

Both policy and technical issues must be in focus.

In the policy realm, all legitimate Web firms already have Terms of Service (ToS) of some sort, most of which (in one way or another) prohibit videos that directly attempt to incite violent attacks or display actual acts such as beheadings (and, for example, violence to people and animals in non-terrorism contexts). How to more effectively enforce these terms I’ll get to in a moment.

When we move beyond such directly violent videos, the analysis becomes more difficult, because we may be looking at videos that discuss a range of philosophical aspects of radicalism (both international and/or domestic in nature, and sometimes related to hate groups that are not explicitly religious). Often these videos do not make the kinds of direct, explicit calls to violence that we see in that other category of videos discussed just above.

Politicians tend to promote the broadest possible censorship laws that they can get away with, and so censorship tends to be a slippery slope that starts off narrowly and rapidly expands to other than the originally targeted types of speech.

We must also keep in mind that censorship per se is solely a government power — they’re the ones with the prison cells and shackles to seriously enforce their edicts. The Terms of Service rules promulgated by Web services are independent editorial judgments regarding what they do or don’t wish to host on their facilities.

My view is that it would be a lost cause, and potentially a dangerous infringement on basic speech and civil rights, to attempt the eradication from the Net of videos in the second category I noted — the ones basically promoting a point of view without explicitly promoting or displaying violent acts. It would be all too easy for such attempts to morph into broader, inappropriate controls on speech. And frankly, it’s very important that we be able to see these videos so that we can analyze and prepare for the philosophies being so promoted.

The correct way to fight this class of videos is with our own information, of course. We should be actively explaining why (for example) ISIL/ISIS/IS/Islamic State/Daesh philosophies are the horrific lies of a monstrous death cult.

Yes, we should be doing this effectively and successfully. And we could, if we put sufficient resources and talent behind such information efforts. Unfortunately, Western governments in particular have shown themselves to be utterly inept in this department to date.

Have you seen any of the current ISIL recruitment videos? They’re colorful, fast-paced, energetic, and incredibly professional. Absolutely state of the art 21st century propaganda aimed at young people.

By contrast, Western videos that attempt to push back against these groups seem more on the level of the boring health education slide shows we were shown in class back when I was in elementary school.

Small wonder that we’re losing this information war. This is something we can fix right now, if we truly want to.

As for that other category of videos — the directly violent and violence-inciting ones that most of us would agree have no place in the public sphere (whether they involve terrorist assassinations or perverts crushing kittens) — the technical issues involved are anything but trivial.

The foundational issue is that immense amounts of video are being uploaded to services like YouTube (and now Facebook and others) at incredible rates that make any kind of human “previewing” of materials before publication entirely impractical, even if there were agreement (which there certainly is not) that such previewing was desirable or appropriate.

Services like Google’s YouTube run a variety of increasingly sophisticated automated systems to scan for various content potentially violating their ToS, but these systems are not magical in nature, and a great deal of material slips through and can stay online for long periods.

A main reason for this is that uploaders attempting to subvert the system — e.g., by uploading movies and TV shows to which they have no rights, but that they hope to monetize anyway — employ a vast range of techniques to try to prevent their videos from being detected by YouTube’s systems. Some of these methods leave the results looking orders of magnitude worse than an old VHS tape, but the point is that a continuing game of whack-a-mole is inevitable, even with continuing improvements in these systems, especially considering that false positives must be avoided as well.

These facts tend to render nonsensical the recent claims by some (mostly nontechnical) observers that it would be “simple” for services like YouTube to automatically block “terrorist” videos, in the manner that various major services currently detect child porn images. One major difference is that those still images are detected via data “fingerprinting” techniques that are quite effective at matching known still images against a reference database, but of little use beyond that realm, especially for videos of varied origins that are routinely manipulated by uploaders specifically to avoid detection. Two completely different worlds.
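
To make the distinction concrete, here’s a deliberately tiny sketch (in Python, and emphatically not any service’s actual system) of the kind of “fingerprint” matching that works well for known still images but falls apart once content is mirrored, cropped, filtered, or re-encoded as video frames. Everything here is invented for illustration:

    # Toy average-hash fingerprint over an 8x8 grayscale grid.
    # Real systems are far more sophisticated, but the failure mode is similar:
    # small recompression noise barely moves the fingerprint, while deliberate
    # manipulation (mirroring, cropping, filtering) moves it a great deal.

    def average_hash(pixels):
        """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit fingerprint."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for value in flat:
            bits = (bits << 1) | (1 if value >= mean else 0)
        return bits

    def hamming_distance(a, b):
        return bin(a ^ b).count("1")

    # A "known" image already fingerprinted in a reference database...
    known = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
    db_fingerprint = average_hash(known)

    # ...still matches a lightly recompressed copy (tiny pixel noise)...
    noisy = [[min(255, v + 3) for v in row] for row in known]
    print(hamming_distance(db_fingerprint, average_hash(noisy)))      # small

    # ...but a mirrored frame of the "same" content lands far away,
    # so a simple database lookup no longer finds it.
    mirrored = [list(reversed(row)) for row in known]
    print(hamming_distance(db_fingerprint, average_hash(mirrored)))   # large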

So are there practical ways to at least help to limit the worst of the violent videos, the ones that most directly portray, promote, and incite terrorism or other violent acts?

I believe there are.

First — and this would seem rather elementary — video viewers need to know that they even have a way to report an abusive video. And that mechanism shouldn’t be hidden!

For example, on YouTube currently, there is no obvious “abuse reporting” flag. You need to know to look under the nebulous “More” link, and also realize that the choice under there labeled “Report” includes abuse situations.

User Interface Psychology 101 tells us that if viewers don’t see an abuse reporting choice clearly present when viewing a video, it won’t even occur to many of them that reporting an abusive video is possible, so they’re unlikely to go digging around under “More” or anything else to find such a reporting system.

A side effect of my recommendation to make an obvious and clear abuse reporting link visible on the main YouTube play page (and similarly placed for other video services) would be the likelihood of a notable increase in the number of abuse reports, both accurate and not. (I suspect that the volume of reports may have been a key reason that abuse links have been increasingly “hidden” on these services’ interfaces over time).

This is not an inconsequential problem. Significant increases in abuse reports could swamp human teams working to evaluate them and to make the often complicated “gray area” determinations about whether or not a given reported video should stay online. Again, we’re talking about a massive scale of videos.

So there’s also a part two to my proposal.

I suggest that consideration be given to using volunteer or paid, “crowdsourced” populations of Internet users — on a scale large enough to average out variations in cultural attitudes within any given localization — to act as an initial “filter” for specific classes of abuse reports regarding publicly available videos.

There are all kinds of complicated and rather fascinating details involved in designing a system like this so that it works properly and fairly, and resists misuse. But the bottom line would be to help reduce the abuse reports that typically reach the service provider teams to manageable levels, especially if significantly more reports were being made — and these teams would still be the only individuals who could actually choose to take specific reported videos offline.
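
For the sake of concreteness, here’s a minimal sketch (in Python) of how such a crowdsourced first pass might be structured. To be clear, this is purely hypothetical: the panel size, the consensus threshold, and the simulated reviewer behavior are all invented for illustration, and the final removal decision remains with the provider’s own team.

    import random
    from collections import Counter

    def triage_report(report_id, reviewer_pool, panel_size=15, escalate_ratio=0.6):
        """Ask a random panel of reviewers to vote on one abuse report.

        Only reports that clear a strong consensus threshold are escalated to
        the service provider's own team, which alone can remove the video."""
        panel = random.sample(reviewer_pool, panel_size)
        votes = Counter(reviewer(report_id) for reviewer in panel)  # "violation" / "ok"
        if votes["violation"] / panel_size >= escalate_ratio:
            return "escalate_to_provider_team"
        return "no_action"   # filtered out before it ever reaches provider staff

    # Simulated reviewers (real ones would actually watch the reported video):
    reviewers = [lambda rid, p=p: "violation" if random.random() < p else "ok"
                 for p in [0.9] * 50 + [0.2] * 50]
    print(triage_report("some-reported-video", reviewers))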

Finding sufficient volunteers for such a system — albeit ones with strong stomachs, considering what they’ll be viewing — would probably not prove particularly difficult. There are lots of folks out there who want to do their part toward helping with these issues. Nor is it necessarily the case that only volunteers need fill these roles. This is important work, and finding some way to compensate these reviewers for their efforts could prove worthwhile for everyone concerned.

This is only a thumbnail sketch of the concept, of course. But these are big problems that are going to require significant solutions. I fervently hope we can work on these issues ourselves before politicians and government bureaucrats impose their own “solutions” that will almost certainly do far more harm than good, likely with untold collateral damage as well.

I believe that we can make serious inroads in these areas if we choose to do so.

One thing’s for sure though. If we don’t work to solve these problems ourselves, we’ll be giving governments yet another excuse for the deployment of ever more expansive censorship agendas that will ultimately muzzle us all.

Let’s try to keep that nightmare from happening.

All the best to you and yours for the holidays!

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Why Governments Lie About Encryption Backdoors

(Original posting date: 13 December 2015)

Despite a lack of firm evidence to suggest that the terrorist attackers in Paris, in San Bernardino, or at the Planned Parenthood center in Colorado used strong (or perhaps any) encryption to plan their killing sprees, government authorities around the planet — true to the long-standing predictions of myself and others that terrorist attacks would be exploited in this manner — are once again attempting to leverage these horrific events into arguments for requiring “backdoor” government access to the encryption systems that increasingly protect ordinary people everywhere.

This comes despite the virtual unanimity among reputable computer scientists and other encryption experts that the encryption systems protecting our financial data and ever more aspects of our personal lives would be fundamentally weakened by any such government access mechanism, exposing us all to exploits via both mistakes and purposeful abuse, potentially by governments themselves and by outside attackers going after our data.

It’s difficult — one might say laughable — to take many of these government arguments seriously even in the first place, given the gross incompetence demonstrated by the U.S. government in breaches that exposed millions of citizens’ personal information and vast quantities of NSA secrets — and with similar events occurring around the world at the hands of other governments.

But there are smart people in government too, who fully understand the technical realities of modern strong encryption systems and how backdoors would catastrophically weaken them.

So why do they continue to argue for these backdoor mechanisms, now more loudly than ever?

The answer appears to be that they’re lying to us.

Or if lying seems like too strong a word, we could alternatively say they’re being “incredibly disingenuous” in their arguments.

You don’t need to be a computer scientist to follow the logic of how we reach this unfortunate and frankly disheartening determination regarding governments’ invocation of terrorism as an excuse for demanding crypto backdoors for authorities’ use.

We start with a fundamental fact.

The techniques of strong, uncrackable crypto are well known. The encryption genies have long since left their bottles. They will not return to them, no matter how much governments may plead, cajole, or threaten.

In fact, the first theoretically unbreakable crypto mechanisms reach back at least as far as the 19th century.
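
The classic example is the one-time pad, first described in the 1880s: pair a message with a truly random key of the same length that is never reused, and the resulting ciphertext is unbreakable in principle, not just in practice. The point is how little machinery this requires; a rough sketch in Python takes only a few lines (hypothetical illustrative code, obviously not how any particular product implements it):

    import secrets

    def otp_encrypt(plaintext: bytes):
        key = secrets.token_bytes(len(plaintext))        # random pad, same length as message
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return key, ciphertext

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    key, ct = otp_encrypt(b"attack at dawn")
    assert otp_decrypt(key, ct) == b"attack at dawn"
    # The math is bulletproof; the hard parts are generating, distributing,
    # and never reusing the key -- which is where real-world failures occur.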

But these systems were only as good as the skill and discipline of their operators, and errors in key management and routine usage could create exploitable and crackable weaknesses — as they did in the case of the German-used “Enigma” system during World War II, for example.

The rise of modern computer and communications technologies — desktops, smartphones, and all the rest — has allowed for the “automation” of new, powerful encryption systems in ways that make them quite secure even in the hands of amateurs. And as black hat hacking exploits have subverted the personal data of millions of persons, major Web and other firms have reacted by deploying ever more powerful crypto foundations to help protect these environments that we all depend upon.

Let’s be very, very clear about this. The terrorist groups that governments consistently claim are the most dangerous to us — al-Qaeda, ISIL (aka ISIS, IS, Islamic State, or Daesh), the less talked about but at least equally dangerous domestic white supremacist groups, and others — all have access to strong encryption systems. These apps are not under the control of the Web firms that backdoor proponents attempt to frame as somehow being “enemies” of law enforcement — due to these firms’ enormously justifiable reluctance to fundamentally weaken their systems with backdoors that would expose us all to data hacking attacks.

What’s more — and you can take this to the bank — ISIL, et al. are extraordinarily unlikely to comply with requests from governments to “Please put backdoors into your homegrown strong crypto apps for us? Pretty please with sugar on it?”

Governments know this of course.

So why do they keep insisting publicly that crypto backdoors are critical to protect us from such groups, when they know that isn’t true?

Because they’re lying — er, being disingenuous with us.

They know that the smart, major terrorist groups will never use systems with government-mandated backdoors for their important communications; they’ll continue to use strong systems developed in and/or distributed by countries without such government mandates, or their own strong self-designed apps.

So it seems clear that the real reason for the government push for encryption backdoors is not an attempt to catch the most dangerous terrorists that they’re constantly talking about, but rather to go after a selection of “low-hanging fruit” of various sorts.

Inept would-be low-level terrorists. Drug dealers. Prostitution rings. Free speech advocates and other political dissidents. You know the types.

That is, just about everybody EXCEPT the most dangerous terrorist groups, which wouldn’t go near backdoored encryption systems with a ten-foot pole, yet are the very groups governments are loudly claiming backdoor systems are required to fight.

Now, there’s certainly a discussion possible over whether or not massively weakening crypto with backdoors is a reasonable tradeoff to try to catch some of these much lower-level categories of offenders. But given the enormous damage done to so many people by attacks on their personal information through weak or improperly implemented encryption systems, including by governments themselves, that seems like an immensely difficult argument to rationally make.

So our logical analysis leads us inevitably to a pair of apparently indisputable facts.

Encryption systems weakened by mandated backdoors would not be effective in fighting the terrorists that governments invoke as their reason for wanting those backdoors in the first place.

And encryption weakened by mandated backdoors would put all of us — the ordinary folks around the planet who increasingly depend upon encrypted data and communications systems to protect the most intimate aspects of our personal lives — at an enormous risk of exposure from data breaches and associated online and even resulting physical attacks, including via exploitation from foreign governments and terrorist groups themselves.

Encryption backdoors are a gleeful win-win for terrorists and a horrific lose-lose for you, me, our families, our friends, and for other law-abiding persons everywhere. Backdoors would result in the worst of the bad guys having strong protections for their data, and the rest of us being hung out to dry.

It’s time to permanently close and lock the door on encryption backdoors, and throw away the key.

No pun intended, of course.

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Google, Hillary, and the Search Conspiracy Kooks

I’ll let you in on a little secret. I have better ways to spend my Saturdays than writing blog posts about nutso conspiracy theories. Seriously, I really do. But the conspiracy fanatics are again on a wacky rampage, this time with the ludicrous claim that Google is purposely manipulating search results to favor Hillary Clinton over racist, misogynist con-man Donald Trump.

Whether you support Hillary, Trump, or the Man in the Moon, the sheer illogic of these new conspiracy claims makes a typical Federico Fellini film look staid and sane by comparison.

The fundamental problem with the vast majority of conspiracy theories is that they require the assumed perpetrators to be inept idiots. Because clearly, we’d almost never know about or even suspect conspiracies managed by the smart folks.

Case in point, the current Google/Hillary conspiracy crud.

The conspiracy nuts would have us believe that Google is purposely (and obviously!) manipulating search “autocomplete” results to de-emphasize negative completions regarding Hillary Clinton.

This makes about as much sense as running a foot race on a motorcycle. It would be immediately clear that something was amiss — and what kind of lamebrain conspiracy would that be?

Google has every reason to keep their search results useful and honest, both for purely ethical reasons and since their users can switch to other firms with a single click of the mouse.

But for the sake of the argument, if I were Google and I wanted to manipulate search results in a dastardly, evil way (cue the Darth Vader theme), I’d be trying to hide negative Hillary search results in the main Google search index, not in autocomplete.

And yet if you do a regular Google Search for any negative topics regarding Hillary Clinton — even the nuttiest ones that the haters spew on endlessly about — you’ll get enough pages of results back to keep you in hardcore conspiracy heaven for a lifetime.

So what’s the problem with Google Search autocomplete?

Nothing. Autocomplete is working exactly as it should.

In fact, if I type in “hillary e” I immediately get a list that features the silly “email indictment” stories. If I enter “hillary cr” I get back “crazy” – “crying” – “crooked” – with results pointing at vast numbers of negative, right-wing trash sites.

So why when you simply enter “hillary” don’t all those negative completions appear?

Well, for the same reason that “trump ra” returns autocomplete results like “racism” and “racist” but “trump” alone does not.

If we go back a few years, there were widely publicized complaints and even lawsuits arguing that Google Search autocomplete overemphasized “negative” or somehow “undesirable” information about some searched individuals and other topics, even though those autocomplete results were valid on an algorithmic basis.

And over time, we can see that autocomplete has evolved to return more “generic” completions until the user’s query becomes a bit more specific.
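
To illustrate what that looks like in practice, here’s a toy sketch in Python. This is emphatically not Google’s code or algorithm; every name, term, and number below is invented. It simply demonstrates the described behavior: bare names get only broad, generic completions, while a more specific prefix surfaces whatever matches it, identically for everyone.

    # Hypothetical completion data -- invented for illustration only.
    CANDIDATES = {
        "alice": {"news": 900, "age": 850, "email scandal": 700, "crooked": 650},
        "bob":   {"news": 950, "height": 820, "racist": 780, "university": 600},
    }
    GENERIC_TERMS = {"news", "age", "height", "university"}

    def autocomplete(query, limit=3):
        name, _, rest = query.strip().partition(" ")
        terms = CANDIDATES.get(name, {})
        if not rest:
            # Bare name: only "generic" completions, no matter who is searched.
            pool = {t: n for t, n in terms.items() if t in GENERIC_TERMS}
        else:
            # More specific prefix: everything that matches, ranked by popularity.
            pool = {t: n for t, n in terms.items() if t.startswith(rest)}
        ranked = sorted(pool, key=pool.get, reverse=True)
        return [f"{name} {t}" for t in ranked[:limit]]

    print(autocomplete("alice"))       # generic suggestions only
    print(autocomplete("alice cr"))    # ['alice crooked'] -- nothing is hidden
    print(autocomplete("bob ra"))      # ['bob racist']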

Whether or not one personally agrees with this mode of operation, the important point is that it doesn’t favor anyone — it behaves the same way for everyone. Hillary. Trump. Even Justin Bieber.

There’s no Google search political favoritism. No conspiracy. Nothing to see here other than honest search results. Move along …

I realize that this is disappointing to Trump fans and to conspiracy aficionados in general.

But hey, there are always other crazy conspiracy theories to keep you busy. The moon landings. The Illuminati. Yeah, and reptilian lizard people. Hell, even Francis Bacon vs. Shakespeare!

Have at it, gang!

Be seeing you.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Why Free Speech Is Even More Important Than Privacy

Supporters of the EU’s horrific “Right To Be Forgotten” (RTBF) generally make the implicit (and sometimes explicit) argument that privacy must take precedence over free speech.

As a privacy advocate for many years (I created my ongoing PRIVACY Forum in 1992) you might expect that I’d have at least some sympathy for that position.

Such an assumption would be incorrect. At least in the context of censorship in general — and of RTBF in particular — I disagree strongly with such assertions.

It’s not because privacy is unimportant. In fact, I feel that free speech is more important than privacy precisely because privacy itself is so important!

It’s all a matter of what you know, what you don’t know, and what you don’t know that you don’t know.

Basically, there are two categories of censorship.

The first consists largely of materials that you know exist, but that you are forbidden by (usually government) edict from accessing. Such items may in practice be difficult to obtain, or simple to obtain, but in either case may carry significant legal penalties if you actually obtain them (or in some cases, even try to obtain them). An obvious example of this category is sexually-explicit materials of various sorts around the world.

Ironically, while this category could encompass everything from classic erotic literature to the most depraved pornography involving children, overall it is the less insidious form of censorship, since at least you know that it exists.

The even more evil type of censorship — the sort that is fundamental to the “Right To be Forgotten” concept and an essential element of George Orwell’s “Nineteen Eighty-Four” — is the effort to hide actual information in a manner that would prevent you from even knowing that it exists in the first place.

Whether it’s a war with “Eastasia” or a personal past that someone would prefer that you not know about, the goal is for you not to realize, to not even suspect, that some negative information is out there that you might consider to be relevant and important.

Combine this with the escalating RTBF demands of France and other countries for global censorship powers over Google’s and other firms’ search results, and it becomes clear why privacy itself can be decimated under RTBF and similar forms of censorship.

Because if individual governments — some of whom already impose draconian information controls domestically — gain global censorship powers, we can’t possibly assume that we even know what’s really going on in respect to negative impacts on our privacy!

In other words, RTBF and similar forms of censorship can act to hide from us the very existence of entities, facts, and efforts that could be directly damaging to our privacy in myriad ways. And if we don’t know that these even exist, how can we possibly make informed evaluations of our privacy and the privacy of our loved ones?

To make matters worse, much of this applies not only to privacy issues, but to an array of crucial security issues as well.

Attempting to maintain privacy and security in a regime of global censorship designed to hide facts from the public — irrespective of the occasionally laudable motives for such actions in some specific cases — is like trying to build a skyscraper on a foundation of quicksand.

You don’t need to be an architect, a computer scientist — or a privacy expert — to recognize the insanity of such an approach.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

France’s Guillotining of Global Free Speech Continues

The war between France and Google — with France demanding that Google act as a global censor, and Google appealing France’s edicts — shows no signs of abating, and the casualty list could easily end up including most of this planet’s residents.

As soon as the horrific “Right To Be Forgotten” (RTBF) concept was initially announced by the EU, many observers (including myself) suspected that the “end game” would always be global censorship, despite efforts by Google and others to reach agreements that could limit EU censorship to the EU itself.

This is the heart of the matter. France — and soon, we can be sure, a parade of free-speech-loathing countries like Russia, China, and many others — is demanding that Google remove search results for third-party materials on a global basis, from all Google indexes around the world.

What this means is that even though I’m sitting right here in Los Angeles, if I dare to write a completely accurate and USA-legal post that the French government finds objectionable, France is demanding the right to force Google (and ultimately, other search engines and indexes) to remove key references to my posting from Google and other search results. For everyone. Everywhere. Around the world. Because of … France.

It’s nonsensical on its face but incredibly dangerous. It’s a dream of every dictator and legions of bureaucrats down through history, brought to a shiny 21st century technological reality.

You don’t have to be a computer scientist to realize that if every country in the world has a veto power over global search results, the lightspeed race to the lowest common denominator of sickly search results pablum would make Einstein’s head spin.

Proponents of these censorship regimes play the usual sorts of duplicitous word games of censorship czars throughout history. They claim it’s for the good of all, and that it’s not “really” censorship since “only” search results are involved.

Well, here’s something you can take to the bank. Let’s leave aside for the moment the absolute truth that — given the enormous scale of the Web — hiding search results is, for most people, effectively the same as hiding the source content itself. But even if we ignore this fact, the truth of the matter is that it won’t be long before these same governments are also demanding the direct censorship of source material websites, not just search results.

However small the “forbidden information” leakage past the censorship of search results themselves, government censors will never be satisfied. They never are. In the history of civilization, they’ve never been satisfied.

A grand irony of course is that the very rise of Internet technology has been the potential enabler of centrally-mandated censorship to a degree never imagined even twenty years ago. For those of us who’ve spent our professional lives working to build these systems to foster the open spread of information, seeing our technologies turned into the tools of tyrants is disheartening to say the least.

It is however encouraging that firms like Google are continuing to fight the good fight against governments’ censorship regimes. Frankly, it will take firms on the scale of Google — along with support by masses of ordinary folks like us — to have any chance at all of keeping France and other governments around the world from turning the Internet into their own personal information control fiefdoms.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.