Greetings. In a blog posting earlier today (Are You Being Cheated by Digital Cable?) I noted how cable companies can (and do, for various reasons) send customers analog television channels masquerading as digital service tier channels, and how difficult it can be to definitively prove this, to get anyone at these firms to admit what's going on, or to correct the situation.
You probably already know where I'm headed with this. Most of these same cable firms are also major ISPs these days, on which millions of people depend for their Internet service.
The anti-network-neutrality camp, ranging from cable firms to telephone companies, DOJ, FTC, and others, repeatedly recite the tired assertion that there's no evidence of a problem, so there's no need for neutrality regulations.
But setting aside for the moment arguments about the advisability of "preemptive strikes" to avoid problems before they occur, let's ask a much more fundamental question.
Do we currently have the tools and algorithms to detect and demonstrate serious network neutrality violations by ISPs, in a manner that would lead to such violations being actionable and correctable? Or could neutrality violations be slipped into the network just like analog cable channels that pretend to be digital?
ISPs simply blocking sites would be quickly detected and would likely generate high-level complaints in short order, but what of more subtle approaches like "traffic shaping" -- which, if used aggressively and inappropriately, could certainly be characterized as anti-neutrality mechanisms? Would the average subscriber of cable Internet or telco DSL Internet service have any obvious way to detect whether the slow throughput they were getting with particular sites was just an ordinary result of heavy Internet traffic, vs. their ISP purposely relegating competitors' packets to the back of the bus, so to speak?
And if a customer did suspect that something nefarious was taking place in this regard, how could they prove it? Calling tier one customer support with suspicions alone is unlikely to do more than waste your time and send your blood pressure sky high.
Without an appropriately broad infrastructure to collect and process metrics associated with network neutrality, it is difficult to understand how anyone can reasonably assert that we would know if and when violations were taking place -- even today.
Solving this dilemma is not a minor undertaking. It may be possible to approach this problem on a highly distributed basis, e.g. through processes running on large numbers of cooperating user PCs to help detect conditions that could be indicative of neutrality violations. Another approach would be for a very large network entity (on the scale of, say, Google) to devote some resources to such metrics and analysis. Most likely some combination of both approaches would be the optimum path.
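To make the distributed idea concrete, here is a minimal sketch (my own illustration, not a proposal from any existing project) of the kind of analysis cooperating user PCs might perform: each participant repeatedly measures throughput to a set of test sites, and sites whose median throughput falls far below the overall baseline get flagged for cross-checking against other users' results. The site names and the 50% threshold are hypothetical placeholders.

```python
import statistics

def flag_suspect_sites(samples, threshold=0.5):
    """Given throughput samples (Mbps) per site measured from one
    subscriber's line, flag sites whose median throughput falls below
    `threshold` times the overall median across sites -- a crude
    signal, meaningful only when corroborated by many other users."""
    medians = {site: statistics.median(vals) for site, vals in samples.items()}
    baseline = statistics.median(medians.values())
    return sorted(site for site, m in medians.items()
                  if m < threshold * baseline)

# Hypothetical measurements: three sites sampled three times each.
samples = {
    "video-competitor.example": [1.1, 0.9, 1.0],   # consistently slow
    "isp-partner.example":      [8.2, 7.9, 8.4],
    "neutral-site.example":     [7.5, 8.1, 7.8],
}
print(flag_suspect_sites(samples))  # -> ['video-competitor.example']
```

By itself, of course, one user's slow path to one site proves nothing -- congestion, peering disputes, and server load all produce the same symptom. The value comes from aggregating such flags across many subscribers on many ISPs, which is exactly why a large-scale collection infrastructure matters.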
Without mechanisms and procedures of some sort along these lines, it is nonsensical for anyone to assert that we would necessarily know when neutrality violations were taking place, much less have an effective chain of authority for reporting such violations and obtaining timely amelioration.
More resources need to be put into this research area now, along with creating carefully defined structures to use the fruits of that research in a practical way to help keep the ISPs on the straight and narrow.
Not even the anti-neutrality folks should be able to logically argue against what might be termed a "trust, but verify" approach.