The Coming Government Showdown over Live Video Streaming


Over the last few days, we’ve seen dramatic demonstrations of the power of Internet live video streaming, and the first clear hints of the policy battles to come over this technology.

Beyond the tragic images of a man shot to death by police in his car, and then live scenes of a sniper in Dallas who ultimately killed five officers, we’ve already seen other ugly shadows of what might become the new normal, including a rape and a suicide — both streamed by the perpetrators themselves for maximum publicity.

And yes, this is only the beginning. For while it has been possible to stream live video from portable devices for years, only now has the concept reached critical mass, an “inflection point” where it is likely to have enormous impact on society at large.

While most of the attention to date has been on Facebook’s video streaming app, Google and other firms also have live streaming services, and that number can only be expected to grow for the foreseeable future, around the world.

Notably, these streaming systems typically include the means for viewers to comment live back to the video originators during the streams themselves, to do everything from expressing admiration or condemnation, to “simply” urging them on.

The positive public interest and probative value in the streaming of many public events is fairly obvious in most cases.

But even in the public space the associated dilemmas are vast.

Unfortunately, large audiences can bring out the worst in some people, and in an ecosystem of unfiltered live streaming the range of potential abuse is enormous — risk-taking behavior performed to please a streaming audience, encouragement of violence (whether explicit or implicit), privacy attacks, and other abuses.

Even when no harm is actually intended, the mere fact of a live streamed dramatic event with a significant viewership will in some situations lead to potentially dangerous “flash crowds” as nearby viewers rush to participate in person.

And while these risks exist aplenty even with streaming from public places, the potential problems likely multiply by orders of magnitude when we consider live video streaming from private homes or businesses, perhaps by surreptitious means.

The bottom line is that live video streaming is a quintessentially double-edged tool. It can be used for enormous good that could greatly enhance public knowledge and participatory democracy. It can also provide a morbid audience and incentive for hideous monsters (both individuals and groups) whose real-world streamed depravities could make fictional “torture porn” films pale by comparison.

So we find ourselves facing a familiar dilemma. If live video streaming firms don’t do the hard policy work required to provide reasonable controls over and filtering of this content, we can be sure that governments around the world — both of their own volition and pressured by their citizens — will move forcefully to enact control and censorship regimes to meet their perceived agendas.

And history tells us that once that kind of censorship takes hold, it’s extremely difficult to stop from spreading in all directions.

This makes it more imperative than ever that we move forward toward establishing best practices and policies to harness this uber-powerful technology in a reasonable manner, before governments move in with possibly knee-jerk “solutions” that will almost certainly make matters worse, not better.

I don’t claim to have any magic wands available for addressing these complicated issues, though my gut feeling is that we should be able to harness the enormous crowdsourcing power of the Net to rapidly categorize streams in real time and trigger filtering or other actions as appropriate.
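
To make that crowdsourcing notion slightly more concrete, here is a minimal sketch of how flag-and-threshold triage might work. Everything here is hypothetical: the names, the reputation weighting, and the threshold value are illustrative assumptions, not a description of any actual service.

    from collections import defaultdict

    # Hypothetical sketch of crowdsourced stream triage: viewers flag a
    # live stream, flags are weighted by each flagger's past reliability,
    # and crossing a threshold escalates the stream for action.

    FLAG_THRESHOLD = 5.0  # illustrative value, not a real policy number

    class StreamTriage:
        def __init__(self):
            self.scores = defaultdict(float)            # stream_id -> weighted score
            self.reputation = defaultdict(lambda: 1.0)  # viewer_id -> weight

        def flag(self, stream_id, viewer_id, category):
            """Record one viewer's flag, weighted by that viewer's track record."""
            self.scores[stream_id] += self.reputation[viewer_id]
            if self.scores[stream_id] >= FLAG_THRESHOLD:
                self.escalate(stream_id, category)

        def escalate(self, stream_id, category):
            # A real system might queue the stream for human review,
            # age-gate it, or interrupt it, depending on policy.
            print(f"Stream {stream_id} escalated for review ({category})")

The hard problems, of course, are in the policy knobs (who earns reputation, what thresholds apply), not in the plumbing.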

But just sitting on our hands about this is not a viable option. That is, unless our goal is to see an incredibly useful technology being branded as “the enemy” just as it’s really beginning to flower.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

How Ancient Monopolies Keep You from Getting Decent Internet Service


Many of us tend to assume that here in the U.S. we have the most advanced technologies on the planet. So it may be startling to learn that by global Internet standards, numerous experts consider us to be living in something of a Stone Age Internet nation.

The reality is stark. Many countries in the world pay far less for their Internet services than we do, and get much faster and more reliable services in the bargain. While many countries have set a national goal of fiber optics directly connecting every home and business, here in the United States phone companies still are arguing that snail’s pace Net connections should qualify as broadband.

Even when relatively “high” Internet access speeds are available via cable, they tend to be mainly in the downstream direction. For example, I have the highest cable modem speed available in my location here in L.A., which is 300 Mb/s downstream — but only 20 Mb/s up. Obviously, high upstream speeds are important for a range of applications (not just limited to obvious ones like remote data backup). Cable modem speeds are getting better, but the fundamentals of cable system technology continue to dampen upstream speeds.
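
To see what that asymmetry means in practice, here’s a quick back-of-the-envelope calculation (assuming ideal sustained throughput, which real links rarely deliver):

    # Rough transfer-time math for a 100 GB backup on a 300/20 Mb/s link.
    # Assumes ideal sustained throughput; real-world links do worse.

    size_gigabytes = 100
    size_megabits = size_gigabytes * 8 * 1000  # 1 GB ~= 8,000 megabits

    down_mbps, up_mbps = 300, 20

    print(f"Download: {size_megabits / down_mbps / 60:.0f} minutes")  # ~44 minutes
    print(f"Upload:   {size_megabits / up_mbps / 3600:.1f} hours")    # ~11.1 hours

The same backup that downloads in under an hour would take the better part of a day to push back upstream.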

You might reasonably ask how so many other countries have been able to get much better Internet access to their residents, compared with us here in the country that invented the Internet.

The detailed reasons are complicated technically, legally, and very much politically, but the bottom line is that the Internet ecosystem here in the U.S. has long been rigged against effective competition, a direct outgrowth of early telecommunications monopoly environments.

One example of this may be visible right outside your window.

Have you ever wondered who owns those “telephone” poles throughout your community, or the underground cables and conduits in some towns?

The short answer is: What a mess!

Poles may be owned by power companies, by phone companies, by cable companies, or in some cases by communities themselves — or various combinations thereof. 

The land that these poles are planted in typically is in the form of an “easement” — a specific area of land still owned by the main property owner, but with access and other rights granted by government to various utilities and other firms. It works basically the same way with underground cables and conduits.

As you might imagine, easements can be the subject of complex and varied legal entanglements and disputes, even though most are granted when housing or commercial developments are being initially planned.

But for our discussion here, the most interesting aspect of easements concerns older communities (for example, areas built up prior to the AT&T divestiture of 1984).

History matters in this context (as in so many other aspects of life) because when these easements were granted to communications companies back in the day, they were usually “monopoly” grants. That is, while even now we would probably agree that assuming a single water and/or power company is logical, those historic easements usually assumed only a single communications (phone) company — or later, the original incumbent phone company plus a single cable TV company.

This is incredibly relevant today, because the entities controlling these easements, and that usually own the poles, cables, and conduits that everyone must use to provide landline communications services to homes or businesses, are quite powerfully in the catbird seat.

Here’s why.

In many countries, governments have national Internet plans that provide for robust competition in various ways. But here in the U.S., if you want to bring — for example — high speed fiber Internet to a community, you often have to deal with the incumbent telecom or other utility firms to gain access to those poles and/or underground facilities.

And those firms — like AT&T, Verizon, and the rest of the gang you likely are familiar with — have very little incentive to be particularly cooperative with new competitors bringing in far better services. In fact, the old guard firms have frequently pushed through laws — and/or filed lawsuits — aimed at preventing communities from encouraging or even permitting such competition.

So we find it not uncommon for the incumbents to demand exorbitant “pole attachment” or other access fees, or to delay and obfuscate as long as possible.

It’s important to remember that these incumbent firms typically only control these access assets because of those original monopoly grants from many decades ago — giving them exclusivity that is nonsensical and unfair so many years later. But they’ve become experts at milking every last possible dollar out of the jolly old monopoly days, even now!

If this sounds bad, it gets worse for the captive residents of many apartment buildings and commercial developments.

Building owners and landlords frequently view Internet access as a massive personal profit center, and engage in restrictive shenanigans — some arguably illegal — to strike lucrative (and yes, monopoly) deals with telecom firms, demanding sweetheart payments for access to their tenants and treating those tenants as if they were medieval serfs. For more on this particularly seamy side of Internet access, please see Susan Crawford’s excellent recent article: “Dear Landlord: Don’t Rip Me Off When it Comes To Internet Access — When building owners get kickbacks from big providers it’s the tenants who lose.”

You might think that this sorry state of affairs would be pretty much obvious to everyone, but in our toxic political environment that would be very far from the truth.

In fact, there are many in Congress who don’t see any consumer problems here at all. Whether or not one chooses to consider these access issues under the “network neutrality” umbrella, many politicians who have long enjoyed the “generosity” of the incumbent telecom firms are lined up to block any attempts to improve the competitive landscape for Internet consumers, thereby condemning us to continued laughingstock status in the eyes of most other countries.

We do have some power though — in the voting booth. These issues tend to have local, state, and often federal components, and we’re unlikely to see significant improvements while lapdog beneficiaries of dominant Big Telecom remain in political control.

Or perhaps you’re satisfied with exorbitant prices and “Flintstones-class” Internet access throughput. Frankly, this far into the 21st century, I strongly believe we can do much better than having so many of us running at barefoot pedal-power Internet speeds.

Yabba dabba doo!

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

A Rare Endorsement: When the Disk Drive Fails


As a matter of policy, I almost never make commercial endorsements. But I’m making an extremely rare exception today, because I feel that this particular firm may be able to save a whole lot of people a whole lot of grief.

Recently, I was called upon to help deal with a disk failure situation that rapidly appeared to be fairly hopeless. The disk in question was in a laptop (Linux ext3 format filesystem), and it had failed suddenly and hard — very hard. None of my usual tricks could revive it.

There were some backups, but all the ones less than a year old turned out to be largely useless due to corruption (I have a few choice words to say about “tar” that I’ll save for another day).

I found myself researching disk recovery firms — an area that I had never had occasion to examine in depth before. I quickly discovered that this category of business is replete with come-ons and abusive gimmicks.

For example, the promoted prices you see for most recovery firms typically bear no resemblance to reality except in the simplest of cases (e.g., when it’s just a matter of using software to undelete “deleted” files from a FAT32 disk).

But if the drive has actually suffered hardware damage (e.g., to the circuitry or platters), the price typically skyrockets.

I also quickly discovered that most of the firms claiming “local” offices in big cities actually could only perform those software-level operations there. Any hardware issues meant shipping the drives to some central lab, with additional delays as a result.

In the midst of this rather gloomy research I stumbled across a firm called $300 Data Recovery.

Now if you’re like me, you’re a bit wary of firms that promote a price in their name. Hell, think of the expensive rebranding if your prices go up! But I was intrigued by their pricing chart, and by the fact that they were local to me here in L.A.’s San Fernando Valley, just a 20 minute or so shot straight down the 101. I like dealing with local firms when I can, since that means I can show up in person “to chat” if something goes wrong.

It’s a very small firm in a rather strange location for such an operation. There are just a handful of employees — several of my routine email exchanges were directly with the owner. It’s on the second floor of a strip mall on Ventura Blvd. in Studio City, surrounded by typical strip mall businesses about as far removed from data recovery as you can imagine.

Yet they apparently do all their work in-house right there, including in their own clean room when necessary. When I took the bad disk over there, I was amused by their collection of horrifically failed open disk drives in the waiting area.

You can read the details of their pricing structure on their site, but the bottom line is that for most common situations, for a single disk already removed from a computer, they charge a flat $300 (for up to 2TB disks) if they can recover the data, otherwise — nothing. If there’s limited partial recovery, you can choose whether or not you want to pay that fee for what they recovered, based on a comprehensive list of recovered files that they email you. If you want priority service to go to the head of the queue and have them start working on a disk immediately, there’s a nonrefundable $50 up front, and an additional $150 if you accept the recovered data. So, in that case, the total is $500 — still a serious bargain. This doesn’t include shipping or the cost of a transfer drive — both quite reasonable and in my case avoided since I drove out to them and provided my own transfer drive for the recovered data.

They also handle larger disks and RAID arrays for additional (but still utterly reasonable) fees. And they happily execute data recovery nondisclosure agreements. By the way, they can target particular files for recovery by name if you wish, and can put crucial recovered files online in a secure location if you need to download them immediately.

In many cases they apparently can recover data in a day or two once they start working on a given disk. The disk I took to them had to run in the “cloning” stage for more like five days to recover the maximum amount of data from crash-damaged platters, then another day or so for filesystem reconstruction. They keep you informed by email all through this process and respond virtually immediately to queries.

In the case of this particular disk, they ultimately recovered more than 99% (99.38% notes Mr. Spock) of the data! This doesn’t mean you get 99.38% of all the files back perfectly intact of course — since that remaining 0.62% can be scattered across the filesystem in various ways. They return the data in several different formats — recovered filesystem section with full hierarchy data, orphaned files without names and/or directories, and so on. “Grep” is very useful to locate specific files in those latter cases, of course, though the key files I was tasked to find were quickly located in the filesystem recovery section with their directory structures intact.

In fact, all important files (including some crucial databases) were recovered from that disk, nearly all 100% complete.

I’d call that a success by any measure.

I’ll add in passing that there seem to be some haters of this company out there, including somebody who bothered to build a whole site dedicated to trashing them — claiming they’re incompetent, that they pay people to give them good reviews, and other rants. I don’t know what the authors of that site are going on about — I can only speak to my own experience. When I got the recovered data back, it was noted that a positive review would be appreciated, but I certainly wasn’t offered any money or other compensation for one — nor would I have accepted such in any case. Nobody paid me nuttin’ for this blog posting!

As for their competency, I handed them a trashed, useless disk drive, they handed me back a disk loaded with important recovered data that was needed to fulfill an important mission — and at a completely reasonable price. That’s good enough for me! I know I would take any disk drive of mine to them in similar situations.

So if you’re ever in a tough spot with a failed disk, you might very well want to check these guys out — again, that’s $300 Data Recovery.

When I was getting ready to leave their shop with the recovered data disk, I mentioned to them that I appreciated all their great work and would enthusiastically recommend them — but frankly, I hoped to never have to deal with them myself again in a professional capacity!

It was clear from their smile that they knew exactly what I meant.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

Die Passwords! Die!


(Original posting date: 31 May 2013)

In one form or another — verbal, written, typed, semaphored, grunted, and more — passwords broadly defined have been part of our cultures since pretty much the dawn of humanity. Whether an 18-character mixed-case password replete with unusual symbols, or the limb-twisting motions of a secret handshake, we’ve always needed means for authentication and identity verification, and we’ve long used the concept of a communicable “secret” of some kind to fill this need.

As we plow our way ever deeper into the 21st century, it is notable that most of our Internet and other computer-based systems still depend on the basic password motif for access control. And despite sometimes herculean efforts to keep password-based environments viable, it’s all too clear that we’re rapidly reaching the end of the road for this venerable mechanism.

That this was eventually inevitable has long been clear, but recent events seem to be piling up and pointing at a more rapid degeneration of password security than many observers had anticipated, and this is taking us quickly into the most complex realms of identity and privacy.

Advances in mathematical techniques, parallel processing, and particularly in the computational power available to password crackers (now often using very high speed graphics processing units to do the number crunching) are undermining long held assumptions about the safety of passwords of any given length or complexity, and rendering even hashed password files increasingly vulnerable to successful attacks. If a single configuration error allows such files to fall into the wrong hands, even the use of more advanced password hashing algorithms is no guarantee of protection against the march of computational power and techniques that may decimate them in the future.
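
A toy calculation shows why raw guessing speed is the whole game here. The guess rates below are hypothetical round numbers chosen for illustration, not benchmarks of any particular hardware or hash:

    # Illustrative only: how keyspace size and guess rate interact.
    # Rates are hypothetical round numbers, not real benchmarks.

    def seconds_to_exhaust(alphabet_size, length, guesses_per_second):
        return alphabet_size ** length / guesses_per_second

    cpu_rate = 1e6    # hypothetical: one million hash guesses per second
    gpu_rate = 1e10   # hypothetical: ten billion guesses per second

    # Eight lowercase letters: 26**8, about 2.1e11 possibilities.
    print(seconds_to_exhaust(26, 8, cpu_rate) / 86400)  # ~2.4 days
    print(seconds_to_exhaust(26, 8, gpu_rate))          # ~21 seconds

Every added character multiplies an attacker’s work by the size of the alphabet, which is why the “safe” password length keeps creeping upward.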

What seems like an almost daily series of high profile password breaches has triggered something of a stampede to finally implement multiple-factor authentication systems of various kinds, which are usually a notch below even more secure systems that use a new password for every login attempt (that is, OTP – One-Time Password systems, which usually depend on a hardware device or smartphone app to generate disposable passwords).
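
For the curious, the time-based one-time password scheme used by most authenticator apps (TOTP, RFC 6238) is remarkably simple at its core. Here is a minimal sketch; a real deployment would also handle clock skew, rate limiting, and secure secret storage:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Minimal RFC 6238 time-based one-time password (sketch only)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval          # 30-second time window
        msg = struct.pack(">Q", counter)
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Server and phone app share the secret; each derives the same 6-digit
    # code for the current window, so there's no reusable password to steal.
    print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret (base32)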

As you’d imagine, the ultimate security of what we might call these “enhanced password” environments depends greatly on the quality of their implementations and maintenance. A well designed multiple factor system can do a lot of good, but a poorly built and vulnerable one can give users a false sense of security that is actually even more dangerous than a basic password system alone.

Given all this, it’s understandable that attention has now turned toward more advanced methodologies that — we hope — will be less vulnerable than typical password-based regimes.

There are numerous issues. Ideally, you don’t want folks routinely using passwords at all in the conventional sense. Even relatively strong passwords become especially problematic when they’re used on multiple systems — a very common practice. The old adage of the weakest link in the chain holds true here as well. And the less said about weak passwords the better — such as “12345,” the kind of password that, as noted in Mel Brooks’ film “Spaceballs,” “an idiot would have on his luggage” — or worse.

So, much focus now is on “federated” authentication systems, such as OAuth and others.

At first glance, the concept appears simple enough. Rather than logging in separately to every site, you authenticate to a single site that then (with your permission) shares your credentials via “tokens” that represent your desired and permitted access levels. Those other sites never learn your password per se, they only see your tokens, which can be revoked on demand. For example, if you use Google+, you can choose to use your Google+ credentials to access various other cooperating sites. An expanding variety of other similar environments are also in various stages of availability.
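
The token mechanics can be illustrated with a toy model. This is not the actual OAuth protocol (which involves redirects, scoped grants, and signed requests); it is just a sketch of the core idea that the relying site holds a revocable token rather than your password:

    import secrets

    # Toy model of federated tokens (not the real OAuth protocol):
    # the relying site never sees your password, only a revocable token.

    class IdentityProvider:
        def __init__(self):
            self._tokens = {}  # token -> (user, granted scopes)

        def issue_token(self, user, scopes):
            token = secrets.token_urlsafe(32)
            self._tokens[token] = (user, frozenset(scopes))
            return token

        def validate(self, token, needed_scope):
            entry = self._tokens.get(token)
            return entry is not None and needed_scope in entry[1]

        def revoke(self, token):
            self._tokens.pop(token, None)

    idp = IdentityProvider()
    token = idp.issue_token("alice", {"read_profile"})
    print(idp.validate(token, "read_profile"))  # True: site sees only the token
    idp.revoke(token)
    print(idp.validate(token, "read_profile"))  # False: access cut off centrally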

This is a significant advance. But if you’re still using simple passwords for access to a federated authentication system, many of the same old vulnerabilities may still be in play. Someone gaining illicit access to your federated identity may then have access to all associated systems. This strongly suggests that when using federated login environments you should always use the strongest practical protections currently available — like multiple-factor authentication.

All that being said, it’s clear that the foreseeable future of authentication will appropriately depend heavily on federated environments of one form or another, so a strong focus there is utterly reasonable.

Given that the point of access to a federated authentication system is so crucial, much work is in progress to eliminate passwords entirely at this level, or to at least associate them with additional physical means of verification.

An obvious approach to this is biometrics — fingerprints, iris scans, and an array of other bodily metrics. However, since biometric identifiers are so closely associated with law enforcement, cannot be transferred to another individual in cases of emergency, and cannot be changed if compromised, the biometric approach alone may not be widely acceptable for mass adoption outside of specialized, relatively high-security environments.

Wearable devices may represent a much more acceptable compromise for many more persons. They could be transferred to another individual when necessary (and stolen as well, but means to render them impotent in that circumstance are fairly straightforward).

A plethora of possibilities exist in this realm — electronically enabled watches, bracelets, rings, temporary tattoos, even swallowable pills — to name but a few. Sound like science-fiction? Nope, all of these already exist or are in active development.

Naturally, such methods are useless unless the specific hardware capability to receive their authentication signals is also present, when and where you need it, so these devices probably will not be in particularly widespread use in the very short term at least. But it’s certainly possible to visualize them being sold along with a receiver unit that could be plugged into existing equipment. As always, price will be a crucial factor in adoption rates.

Yet while the wearable side of the authentication equation has the coolness factor, the truth is that it’s behind the scenes where the really tough challenges and the most seriously important related policy and engineering questions reside.

No matter the chosen methods of authentication — typed, worn, or swallowed — one of the most challenging areas is how to appropriately design, deploy, and operate the underlying systems. It is incumbent on us to create powerful federated authentication environments in ways that give users trustworthy control over how their identity credentials are managed and shared, what capabilities they wish to provide in specific environments, how these factors interact with complex privacy parameters, and a whole host of associated questions, including how to provide for pseudonymous and anonymous activities where appropriate.

Not only do we need to understand the basic topology of these questions and develop policies that represent reasonable answers, we must actually build and deploy such systems in secure and reliable ways, often at enormous scale by historical standards. It’s a fascinating area, and there is a tremendous amount of thinking and work ongoing toward these goals — but in many ways we’re only just at the beginning. Interesting times.

One thing is pretty much certain, however. Passwords as we’ve traditionally known them are on the way out. They are doomed. The sooner we’re rid of them, the better off we’re all going to be.

Especially if your password is “12345” …

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.

No, I did not have a project where I “released birds” over the ARPANET


So I get this email from a researcher asking me about my project to release birds over satellites, and ARPANET, and what he called “UCNET” — and I’ll admit the initial message was puzzling. I don’t recall any significant bird release projects in my past. After a little more back and forth, I figured out what was going on — he had somehow conflated several of my past projects in a very amusing manner. However, this got me thinking about how the historical record will view this stuff. So here’s how it actually untangles:

Apparently this guy started down this path when he stumbled across my name on a page about the old ADVENTURE game. In fact, that’s where the “bird release” hook comes from in the first place!

Back in ARPANET days at UCLA, I had a project called “Touch-Tone UNIX” (it was described in a paper I presented at a USENIX conference several hundred years ago). I was pretty proud of the fact that the entire name was composed of what was then *two* AT&T trademarks.

So this used a Votrax speech synth, heavily modified UNIX text-to-speech code, and a touch-tone modem feeding into the system via a DEC PDP-11/70 serial port (running a driver I had also heavily modified). I created an early (probably the first) system for mapping touch-tone presses to full ASCII. Control-G was played as the word “BING!”
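
The post doesn’t spell out the actual key-to-character mapping, but to give the flavor, here’s a purely hypothetical sketch of the sort of two-keypress scheme that can map a 12-key pad onto letters (reaching full ASCII would additionally require shift/escape prefixes, say on * and #, which this sketch omits):

    # Purely hypothetical: NOT the actual Touch-Tone UNIX mapping, which
    # isn't documented here. Two keypresses per character: the first
    # selects a phone-key letter group, the second a position within it.

    KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
              "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

    def decode(presses):
        """Decode digit pairs: '42' -> 2nd letter on key 4 -> 'h'."""
        chars = []
        for group, pos in zip(presses[::2], presses[1::2]):
            chars.append(KEYPAD[group][int(pos) - 1])
        return "".join(chars)

    print(decode("4243"))  # -> "hi"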

The original purpose of all this was as an adjunct to my work on newswire scanning systems (I had an “underground” feed of the teletype-speed AP wire coming to me from a “friendly” source over the ARPANET). I collected all this (even at TTY speed the data builds up over time) and processed it in various ways for searching and retrieval, including via voice announcements and automatic phone call notifications (that’s where the voice synth came in, of course). The newswire scanning code eventually attracted the attention of an intelligence agency subcontractor, but nothing ultimately came of that. This is a *different* story than the day I was sitting in the UCLA ARPANET machine room and a colleague suddenly came in and said, “Hey Lauren, get out here, two guys from NSA are looking for you.” Yeah, that’s a different saga.

Anyway, Touch-Tone UNIX was a quite general-purpose platform in its way; you could run arbitrary UNIX commands over the phone and it would try to speak the results in a reasonable way. You can guess what happened. One of the most popular uses turned out to be playing ADVENTURE over the system. And so I did indeed have graduate students waiting for tables in Westwood eateries, playing ADVENTURE from payphones — and confusing patrons by suddenly exclaiming loudly, “I released the bird!”

Now what about “UCNET” and satellites? This is a conflation of two other projects of mine. One was my UULINK software, which was the *first* non-UNIX UUCP implementation (Wikipedia of course doesn’t even mention it, and I’m not enough of a masochist to try to fix stuff on Wikipedia). It included a UUCP/ARPANET mail gateway, RFC-compliant mail handling, and such. At the time it was considered highly specialized but was quite widely used in a variety of commercial, government, and other applications, with some of its code being adopted for specialized “high speed” modem communications modes for UUCP. The original code was written for and ran under DOS, migrated from my earlier experiments in this area on CP/M. This is from the period when my incoming UUCP phone line to my own UULINK system typically got a call every three minutes or so, 24/7, from educational sites, DEC, and Bell Labs sites around the country. My published email address around that time looked like:

ARPA: vortex!lauren@LBL-CSAM
UUCP: {decvax, ihnp4, harpo, ucbvax!lbl-csam, randvax}!vortex!lauren
(Yeah, my “vortex” goes way, way back, well before it became among the first 40 dot-com domains issued 30 years ago.)

The satellite angle was my STARGATE project (this was also described and presented in a USENIX paper). STARGATE was an experiment in sending Netnews articles over the vertical blanking interval of SuperStation WTBS (based in Atlanta, but available all over the country by cable). It was a very early effort at data over cable: you’d use a special (too expensive) box that would connect to your TV cable line, tune it to WTBS, and get a continuous Netnews data feed. I installed the data equipment at the WTBS uplink myself. Remarkably, someone who worked at the facility back then very recently (out of the blue!) sent me an old video of the data shack where this was all installed at the base of the big uplink dishes. Obviously this was a one-way system — you’d submit articles via UUCP for example (so my UULINK system was integrated with this, along with other systems) — but since most people read far more than they write, this actually worked pretty well. The cost factors made it impractical in the long run though — those decoder boxes were pricey and at the time cable penetration wasn’t all that great where it needed to be (e.g., inside schools, businesses, etc.). But it was quite interesting and a lot of fun.

That’s the thumbnail of all this anyway. Maybe it’ll help to avoid confusion in the future. Probably not. That’s history for ya’.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.