January 06, 2008

Would You Know if Your ISP Tampered With Your Web Pages?

Greetings. Would you even know if an ISP spied on or tampered with your Web communications?

While encryption is the obvious and most reliable means available to avoid unwanted surveillance or intrusions into the data streams between Web services and their users, it's also clear that pervasive encryption will not be achieved overnight.

In the meantime, ISPs appear to be moving at full speed toward various data inspection and content modification regimes, while laws that would protect Web services and their users from inappropriate or unacceptable ISP actions are being fought tooth and nail by those same ISPs and their corporate parents.

Some announced concepts, like AT&T's alarming plans to "monitor" Internet communications to find "pirated" content, appear most akin to wiretapping in the telephone realm. Would people accept the monitoring of all phone calls in search of any illegal activity? Even given the current telco/NSA controversies, I doubt that such monitoring would be widely applauded.

Others, like Comcast's unacceptable disruption of P2P traffic, appear to be part extremely aggressive "traffic management" and part outright packet forgery deployed to interfere with communications.

And of course, we still have the ongoing Rogers saga, where direct modification of data streams to insert ISP-generated messages -- or, as suggested by a related hardware vendor, advertising -- is the order of the day.

Encryption is the only sure way to deal with the potential for ISP (or other) surveillance of Internet connections, and even encryption still permits a significant degree of traffic analysis in the absence of anonymizing proxy architectures.

But in the case of ISP tampering with data streams, is there anything we can do for now -- short of the goal of full-page encryption -- to inform users that their Web communications are being adulterated? Can a Web service be sure that its users are seeing the actual Web pages being transmitted -- unmodified by ISPs? And can this be accomplished with the highly desirable attribute of not requiring major server-side modifications to the Web pages themselves?

There are a number of non-trivial issues to consider. First, as we all know, a Web page is frequently composed of many disparate elements, often hosted by a variety of completely different servers under the control of multiple entities. How can we define "a Web page" in a way that takes all of these elements and data sources into account, especially when each user may see not only differing primary text and images, but totally different ads?

Would the amount of real-time data coordination necessary to create and communicate such a single-user page "validation snapshot" be practical, or even worthwhile relative to the amount of work required?
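To make the "validation snapshot" idea a bit more concrete, here is a minimal sketch (in Python, purely for illustration -- the URLs, function names, and data layout are all hypothetical) of how a serving side might summarize one user's particular page view as a digest of the exact bytes sent for each element, personalized ads included. The genuinely hard part -- enumerating every element actually served, across multiple cooperating entities -- is simply assumed away here.

```python
import hashlib
import json

def build_snapshot(served_elements):
    """served_elements maps each element's URL to the exact bytes sent to this user."""
    digests = {
        url: hashlib.sha256(body).hexdigest()
        for url, body in served_elements.items()
    }
    # Canonical JSON ordering so both ends can compute byte-identical snapshots.
    return json.dumps(digests, sort_keys=True)

# Example: a page view composed of the primary HTML plus a user-specific ad.
snapshot = build_snapshot({
    "http://www.example.com/index.html": b"<html>...page exactly as served...</html>",
    "http://ads.example.net/banner?user=123": b"...ad bytes exactly as served...",
})
print(snapshot)
```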

Assuming that we can create such a snapshot, a secure mechanism to immediately transmit this validation data to the user's Web browser would then be necessary, bringing back into the mix the probable need for some encrypted data, albeit a very small amount compared with fully encrypted Web pages.
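One way to picture protecting that small piece of validation data -- again just a sketch under big assumptions, not a worked-out design -- is to authenticate the snapshot with a key the ISP doesn't hold, so that the data stream and the validation data describing it can't both be quietly rewritten. How such a key would actually be established (an existing small encrypted exchange, a plugin update channel, or something else entirely) is left wide open; the key below is a placeholder.

```python
import hashlib
import hmac

# Placeholder per-session key; its provisioning is the open question noted above.
SHARED_KEY = b"hypothetical-per-session-key"

def sign_snapshot(snapshot_json):
    # Authenticate the snapshot so an ISP cannot rewrite both the page
    # and the validation data that describes it.
    return hmac.new(SHARED_KEY, snapshot_json.encode("utf-8"), hashlib.sha256).hexdigest()

def snapshot_is_authentic(snapshot_json, signature):
    # Constant-time comparison of the received signature against a fresh one.
    return hmac.compare_digest(sign_snapshot(snapshot_json), signature)
```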

The last step in the validation process would be for the user's Web browser (or a suitable plugin) to alert the viewer to suspected data tampering, along with providing the details needed to log and/or report the incident.
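The check itself could be quite simple in principle: recompute digests over what was actually received and flag anything that doesn't match the snapshot. Sketched below in Python for clarity, though a real implementation would presumably live inside the browser or a plugin, and the element contents are again hypothetical.

```python
import hashlib
import json

def detect_tampering(snapshot_json, received_elements):
    """Compare what the browser actually received against the snapshot."""
    expected = json.loads(snapshot_json)
    problems = []
    for url, body in received_elements.items():
        if url not in expected:
            problems.append("unexpected element injected: " + url)
        elif hashlib.sha256(body).hexdigest() != expected[url]:
            problems.append("element modified in transit: " + url)
    for url in expected:
        if url not in received_elements:
            problems.append("element missing or blocked: " + url)
    return problems  # a non-empty list would trigger a user alert and a log entry

# Example: the primary HTML arrives with an ISP-inserted notice spliced in.
example_snapshot = json.dumps({
    "http://www.example.com/index.html":
        hashlib.sha256(b"<html>...page exactly as served...</html>").hexdigest(),
}, sort_keys=True)

print(detect_tampering(example_snapshot, {
    "http://www.example.com/index.html": b"<html>ISP NOTICE...page exactly as served...</html>",
}))
# -> ['element modified in transit: http://www.example.com/index.html']
```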

I won't dive into all of the nitty-gritty technical aspects of this concept here. I have some ideas on implementation techniques, though I'd much rather see a rapid move toward full encryption.

However, I would certainly be interested in your thoughts regarding this concept of Web page validation and whether or not it might have a useful role to play, particularly to help gather evidence that might be useful in the ongoing network neutrality debates.

Thanks as always.

--Lauren--
