Greetings. In a recent New York Times op-ed, former FCC Chairman Kennard characterized the network neutrality debate as simply a battle between the extremely wealthy and the merely rich, and suggested that it was distracting us from what he considers to be the truly important telecom-related issues. No matter where one stands on network neutrality questions, Kennard is missing the point -- the outcome of this controversy will affect every consumer who ever comes into contact with the Internet in any manner.

The interests of ordinary consumers and small businesses are all too often left in the lurch when important issues like this one are decided. That's what's happening now in the continuing arguments over network neutrality on the Internet, where anti-neutrality forces -- primarily the large telephone companies and other increasingly conglomerated telecom giants -- are attempting to manipulate the debate to their own advantage, and to the detriment of nearly everyone else.

Starting from their Defense Department research days, the Internet and its ancestors have thrived on providing essentially neutral channels of communications, with the networks themselves not imposing skewed restraints on the actual applications using their facilities, be they e-mail, file transfers, Web browsing, or newer innovations such as audio/video streaming. As the Net has become ever more integral to our daily lives, we've come to depend on straightforward access to these services via the many firms of all sizes that currently provide them.

But to the telcos and their ilk, neutral transmission isn't a big enough profit center. They want a cut of everybody's action, as exemplified when AT&T's CEO Edward Whitacre made his infamous swipe at Google and other major Internet services, claiming that they were using "his pipes" for free. This is an utter fallacy and the anti-neutrality folks know it. We're all already paying for our Internet access. Google pays for their connections -- undoubtedly not small change either. Every small business, every family with a DSL or cable Internet hookup -- all of us are already feeding money into the telecom company coffers. Even if we choose to use VoIP phone services, we're still paying the phone or cable company for the underlying Internet circuits.

The technical term for most of the anti-neutrality argument is simply greed. The telecom providers have watched business models shift around them, and now, true to their roots, are looking for ways to strangle the competition, no matter how skillfully their PR machinery attempts to obscure this fundamental truth. And if that sounds too strong, let's keep in mind that the telecom landscape is littered with the broken promises and unfair tactics of the dominant telephone companies in particular -- promised broadband rollouts never delivered, "cherry-picking" of advanced services only for the most lucrative neighborhoods, rates gone wild as soon as regulatory scrutiny is lifted, and so on. These guys are virtually the textbook definition of predatory practices. No wonder it's so difficult to believe them now, and no wonder so many observers feel that laws mandating neutrality -- today, before neutrality slips away -- are the only practical approach to maintaining Internet fairness.

Anti-network neutrality forces have suggested that since a large and powerful firm like Google has taken a strong pro-neutrality stance, this somehow invalidates pro-neutrality arguments.
To be sure, Google has a financial interest in the outcome, but so do the rest of us. In the sort of non-neutral Internet world of which the telecom providers dream, it's questionable whether Google, Vonage, eBay, or many other household Internet names could even have afforded to get started in the first place. A non-neutral Net would likely be a death knell for a whole future of competitive Internet entrepreneurs who might otherwise have brought us a vast range of useful new services. It's start-ups and other small to medium-sized businesses that are most at risk if a non-neutral Internet regime takes hold.

Ironically, neutrality is one of those aspects of the Net that is so taken for granted that it seems invisible and intrinsic. But the everyone-pays-through-the-nose environment that would be the logical, ultimate outcome of an anti-neutrality win would make very clear how critical such invisible attributes have been to the Internet's success to date.

It's unfortunate that the network neutrality controversy has escalated to an emotional level, which indeed can sometimes obscure the underlying facts. But the reality is that this is truly important stuff, and most Internet users don't realize how drastically and negatively they could be affected if anti-neutrality arguments hold sway. Getting true network neutrality back after it's been lost is likely to be effectively impossible. Except for the anti-neutrality cadre themselves, we'd all be worse off with a non-neutral Internet.

--Lauren--
Greetings. As noted today in a Washington Post article, the growing problem of "click fraud" increasingly threatens the confidence powering the economic engine at Google and other firms that depend on consumer click-through activities for significant portions of their revenue streams.

Google says that less than 10% of ad clicks through their system are illegitimate. Taking that at face value, it can still add up to an awful lot of fraud given the total amount of clicking going on out on the Web these days. And of course, when someone says that fraud is at such and such a level, what they really mean is detected fraud. Undetected fraud, like other proverbial "perfect crimes," doesn't show up in your stats, since you don't know about it in the first place.

Google and other ad-based services are deploying ever more sophisticated computational mechanisms in their efforts to detect click fraud whenever possible, primarily by analyzing click patterns, click sources, and related metrics. I suspect that in the long run this approach will prove to be insufficient for the problem at hand. The reason is that click fraud represents what I call a "Turing-Plus" problem. That is, when it comes to detecting click fraud, it isn't enough to know that you're dealing with a human rather than an automated clicking system -- since click fraud is increasingly performed by paid human agents. Rather, what we really want is to telepathically "look inside" the clickers' heads to determine whether they have legitimate interest in the product or service that they're clicking on.

Given the extremely limited availability of telepathic Web services these days, less esoteric techniques are still theoretically available. For example, we could implement mass surveillance of individual users' Web viewing activities on a relatively gigantic scale, with broad data integration and correlation to yield higher-level patterns of user behavior. It is possible to postulate ways in which this sort of data could indeed be used to minimize click fraud by maximizing our knowledge of each user's detailed behavior on the Web -- but the obvious negative privacy implications would be enormous.

Given these realities, what could be done right now to reduce click fraud without introducing serious collateral damage to privacy and other consumer concerns? One possibility would be to move toward a "two-step" process for reaching clicked ads. Rather than being taken immediately to a selected ad, the user would need to perform some action that would not only help to affirm the probability of their being human, but that would also slow the click-through process. Slow this sequence down sufficiently, and you may significantly reduce the economic viability of those entities who are increasingly major "warpers" of the click-through model (by paying their agents specifically to click on particular ads).

There are indeed some problems with this two-step approach. We don't know by how much human click agents would need to be slowed to significantly reduce the impact of their operations, or to what extent such organizations could expand their agent base in an attempt to compensate. The exact mechanisms provided for the two-step process could be critical, both to effectiveness and longevity. Variations on current "CAPTCHA" technology (the sometimes badly implemented "Turing Test" systems that display alphanumeric image sequences for the user to type back as "confirmation") may have some merit, though they also have significant limitations.
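To make the idea concrete, here is a minimal sketch of one way a two-step click-through might work: the ad service issues a signed interstitial token along with a confirmation task, and only completes the redirect if the token comes back authentic and after a minimum delay has elapsed. This is purely my own illustration -- the function names, parameters, and timing values are invented for the example, not drawn from any existing ad platform:

    import hashlib
    import hmac
    import secrets
    import time

    SECRET_KEY = secrets.token_bytes(32)  # hypothetical per-service signing key

    def issue_challenge(ad_id):
        """Step one: instead of redirecting immediately, hand the browser
        a signed token along with a CAPTCHA-style confirmation task."""
        nonce = secrets.token_hex(8)
        issued = int(time.time())
        payload = f"{ad_id}:{nonce}:{issued}".encode()
        sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return {"ad_id": ad_id, "nonce": nonce, "issued": issued, "sig": sig}

    def redeem_challenge(token, min_delay_secs=5, max_age_secs=300):
        """Step two: complete the redirect only if the token is authentic,
        not stale, and enough time has passed for a human to have
        plausibly performed the confirmation step."""
        payload = f"{token['ad_id']}:{token['nonce']}:{token['issued']}".encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, token["sig"]):
            return False  # forged or tampered token
        age = int(time.time()) - token["issued"]
        return min_delay_secs <= age <= max_age_secs  # too fast looks automated

A real deployment would also need to record each nonce as redeemable exactly once (to prevent replay), and the minimum delay would have to be tuned empirically -- which is exactly the open question raised above.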
Other approaches would also need to be investigated and developed. An even more obvious problem with any multi-stage click approach is that many advertisers may object to any methodology that appears to insert temporal or other barriers between consumers and the ease with which they can reach an ad. But would all advertisers feel this way, especially when the alternative is likely to be increasing levels of undetected click fraud that they ultimately pay for? At least as an option, I suspect that a well-designed two-step click-through process could become increasingly attractive.

Ultimately though, it's hard to see how the current single-stage click-through environment can be successfully leveraged indefinitely for the benefit of advertisers and service providers, in the face of the globally dispersed resources available to perpetrators of click fraud. Sooner or later, something's got to give.

--Lauren--
Greetings. In the wake of Google's seemingly pricey acquisition of YouTube, some observers have suggested that Google is on its way toward effectively monopolistic control over many Internet services. The question of who is or isn't a "monopoly" is interesting, but trying to use standard definitions may lead us somewhat astray from the key issues.

First, Google isn't currently a monopoly by any classic definition and is unlikely to become one in a general sense. Google clearly isn't a utility in the vein of residential water, electricity, or natural gas delivery, which despite attempts to introduce "competition" are still obvious "natural" monopolies. Google isn't even "oligopolistic" in any broad sense, unlike conventional phone services or "last mile" ISPs, where only limited practical consumer choice exists. And Google obviously has never had the leverage that Microsoft has with consumers who buy PCs, since for the vast majority of PC users there is no simple turnkey alternative to the Windows systems that come bundled on their computers. Many consumers don't even realize that there is a difference between the hardware and the OS of their PCs.

Google is different. It can be argued -- with considerable validity -- that Google can't become a "monopoly" for most Internet applications, since consumers can easily switch away from Google to other services with little or no monetary cost associated with such a choice -- just enter a different URL. At least, this is the case so long as Net Neutrality (which Google strongly and admirably supports) remains the basis of the Internet.

But this is all only part of the equation. The ability to choose alternate services depends upon the availability of the desired information on those services. When we're only talking about conventional Web search engines, this is not a big issue -- there are lots of such search services, and I've never yet seen a conventional Web site that limits search spidering only to Google. The fact that Google has become the dominant search engine is a tribute to the quality of their product, not to nefarious manipulations.
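As an aside, nothing technical prevents such a restriction. A site that actually wanted to admit only Google's spider could publish a standard robots.txt file along these lines (a hypothetical example; the point is that in practice sites don't do this):

    User-agent: Googlebot
    Disallow:

    User-agent: *
    Disallow: /

An empty Disallow field permits everything, so this file welcomes Googlebot while asking all other crawlers to stay away -- and of course compliance with robots.txt is voluntary on the crawler's part in the first place.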
Even Google's YouTube acquisition doesn't automatically prevent other entities from competing in various ways in the user-contributed video space. We're likely to still end up with more competition in that area than, say, in national radio networks and other aspects of conventional media.

The bigger issues come into focus in two areas -- "exclusivity" deals and data retention. To the extent that Google or other entities engage in contracts (such as Google's Book Search scanning deal with the University of California) that restrict the ways in which those scans can then be used by others, most consumers will inevitably be steered toward specific firms if they wish to have access to that data. Now, it takes two to tango, and in this case it would seem obvious that Google would reasonably wish to retain leverage over these materials. It's far less clear that it is appropriate for UC, as a public institution, to accept such terms. However you might wish to parse the motives, the end result is that consumers' choices can be significantly limited in these kinds of situations, and of course as the data corpus grows, the tendency for further concentration of these materials becomes ever greater.

This leads us inevitably toward the data retention issue. To the extent that consumer choice becomes limited by the structural and/or financial tapestry of these systems, the responsibility of the dominant players (e.g., Google) when it comes to data collected from their users becomes increasingly paramount. This is the case regardless of whether or not the term "monopoly" or even "oligopoly" can reasonably be applied to such firms. I won't get into specific data retention arguments here (please reference my "Open Letter to Google" and related essays for more details on this topic).

The executive summary is pretty simple. Issues and problems related to these kinds of Internet services are unlike those in any other field. The associated risks (whether the result of benevolent intent with unintended consequences, impartial structural evolution, or other factors) are not obviously delineated by classical competitive categorizations. Or to use the vernacular -- just because you're a genuinely good guy doesn't mean that what you've built can't be utilized by others in negative ways that you never intended. With size and power comes the responsibility for helping to ensure that such outside forces can't do evil with your magic.

--Lauren--
Greetings. Back in June when I first revealed the "phone home" behavior of Microsoft's Windows XP "Genuine Advantage" (WGA) anti-piracy system, a number of people asked me a very provocative question. To wit: did I believe that Microsoft would use such facilities to actually disable XP systems that they felt might be pirated? My answer at the time was that given the significant potential for massive disruption of important applications, the certain intense backlash, and the lawyers' litigation bonanza that would likely result from such actions, such a course would be an extremely unwise move.

Well, that was XP. Now comes Vista, the next generation of Microsoft Windows. And here's the executive summary, based on my best information to date: With Windows Vista, MS is dramatically escalating their anti-piracy methodology, attaching users even more firmly at the hip to MS' ongoing supervision. The days of simply buying Windows software go by the wayside -- from now on, Microsoft is your permanent "overseer" when it comes to computing policies.

In particular, Vista will include technologies that can be used by MS to drastically reduce the functionality of systems that they believe to be pirated. This restricted environment will give users a choice between running Windows in its very limited "Safe Mode" (apparently with networking disabled), or alternatively running a Web browser that will exit automatically after each hour or so of usage. If you want your full system functionality back after being tagged as running a possibly pirated system, you will need to prove yourself to Microsoft. The implications are enormous.

This situation doesn't come as a complete surprise to me. In the months since my public discussions regarding WGA, I've been in cordial communications with a key executive within Microsoft's anti-piracy unit. During these conversations, I did my best -- unsuccessfully -- to dissuade them from taking this kind of course. Unfortunately, my leverage with Microsoft is something akin to sticking a small, blunt toothpick into Godzilla's big toe in an attempt to get his attention.

MS' view -- as I understand it -- is that so long as people get sufficient warning before their systems are hobbled (the plans include a 30-day cycle of escalating alerts before the actual drastic capability downgrade is initiated), people will come to accept this "new world order" for a new operating system. That's an interesting theory. But the real world isn't such a tidy place -- the WGA experience showed that dramatically. Even if only a small percentage of systems are flagged as possibly pirated, on an absolute basis we're still talking about very large numbers of systems and users. We already know that an array of issues (details in my WGA thread) can cause false positives marking someone as a possible pirate.

Microsoft asserts that they make it relatively easy to demonstrate your lack of culpability if you happen to end up with a non-authentic or pirated copy of Windows. Even if we accept this for the sake of argument, the hassles and potential disruptions in these situations are extremely substantial. Even worse, the specter of hackers possibly gaining control over system hobbling mechanisms is straight out of the darkest scenes of science fiction. Low probability? Couldn't happen? Do you really want to bet your computerized life on that? What a challenge for the hacking community!

As I've often noted, software piracy is indeed a major problem.
MS is within their rights to implement reasonable means to control it. But by any definition, their plan for active system hobbling is unacceptable and beyond the pale. A month's worth of warnings -- or even six months or a year -- won't ameliorate the negatives of this scheme. Too many people ignore such announcements. They're bombarded with warnings already from all sorts of software, and it's easy to assume that even repeated ones are in error and don't really apply to them. It's amazing what people will just ignore and put up with when they believe that they're in the right, or that it's really somebody else's problem in the corporate hierarchy.

Then we have the issue of "embedded" systems. Innumerable applications, some of them in extremely important venues (government operations, medical offices and hospitals, plus a practically endless list of others), use Windows systems that are buried in boxes and closets, configured in ways such that no human being would normally ever see the desktop warnings that would precede an MS-ordered hobbling of an underlying Vista operating system. In many such cases, there'd be no indication of a problem until the application just stopped working -- boom -- and wouldn't start up again normally. Microsoft does suggest that most or all applications running as Windows system services would probably not be hobbled from normal operations (at least until a user chose to log in via the desktop "safe mode" option, at which point there may be networking issues -- this isn't entirely clear), since the primary target of this "downgrading" mechanism appears to be the desktop and more typical user applications. But this does mean that an immense range and number of applications would indeed be affected.

MS notes that there are two basic ways that a system might be flagged as a possible pirate (with the exact policies to be tuned on an ongoing basis). First, a system that has had major hardware changes might trigger the authentication alert and start the 30-day warning countdown to the hobbled state (it should be mentioned that several highly promoted but largely cosmetic new features in Vista are immediately disabled once the warning sequence begins). The second way to find yourself marked as a possible pirate relates to connecting with various Microsoft Web sites, either on an automatically scheduled or a manual (e.g., Web browser) basis. It is through this vector that Microsoft would send the "begin hobbling warning sequence" command to systems whose license keys had found their way onto MS' suspected piracy database. The set of Microsoft sites enabled for this capability will likely start relatively small (e.g., particular download and system validation sites) and expand over time to a broader range of Microsoft download and update sites. This second scenario suggests that embedded systems with Internet connections that were permitted to contact Microsoft could be at risk of hobbling even if their hardware configurations were left completely untouched for long periods.

In any case, the entire concept of significantly reducing the functionality of running systems is saturated with risks. Microsoft notes that their products aren't supposed to be used for "critical" types of applications. That's a fine sentiment, but Microsoft has succeeded all too well in getting developers to use their operating systems in all manner of exceptionally important applications. That's the reality.
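To summarize the flow being described, here is a rough sketch of the reported flagging-and-countdown logic. To be absolutely clear, this is my own illustrative reconstruction based only on the information above; the names and structure are invented for the example and bear no relation to Microsoft's actual code:

    import time

    GRACE_PERIOD_DAYS = 30  # reported escalating-warning window before hobbling

    class VistaLicenseState:
        """Illustrative model of the reported Vista anti-piracy flow."""

        def __init__(self):
            self.flagged_at = None  # epoch time when the system was marked suspect

        def major_hardware_change(self):
            # Reported trigger #1: substantial hardware reconfiguration.
            self._flag()

        def validation_site_contact(self, key_on_piracy_database):
            # Reported trigger #2: contact with a validation-enabled MS site
            # whose response marks this license key as suspect.
            if key_on_piracy_database:
                self._flag()

        def _flag(self):
            if self.flagged_at is None:
                self.flagged_at = time.time()
                # Per the report, several cosmetic Vista features are
                # disabled immediately once the warning sequence begins.

        def functionality(self):
            """Return the current (reported) functionality level."""
            if self.flagged_at is None:
                return "full functionality"
            days_left = GRACE_PERIOD_DAYS - (time.time() - self.flagged_at) / 86400
            if days_left > 0:
                return f"escalating warnings ({int(days_left)} days remaining)"
            return "hobbled: Safe Mode only, or a browser that exits hourly"

Note how the second trigger depends only on network contact with Microsoft, which is why even an embedded system whose hardware is never touched could, in principle, end up in the countdown state.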
Of course, just because Microsoft says that they can use such drastic anti-piracy measures in any particular situation doesn't mean that they necessarily will -- but can we really afford to take that chance? Even if you are the most 100% squeaky-clean human on planet Earth, and would never even dream of running pirated software, you may want to think twice (or more than twice) before jumping into bed with Microsoft on this one.

And to Microsoft, I again urge you to reconsider this path. By pushing the anti-piracy envelope so far into dangerous territory, you are ultimately undermining your own valid concerns about piracy, and potentially damaging not only computer users but yourselves as well. That doesn't help you, it doesn't help your customers, and it does a grave disservice to the entire global community that works directly or indirectly with your products. Piracy is real, but you can do so much better.

--Lauren--