Greetings. Wire services are reporting that AT&T has offered a set of new "concessions" to the FCC in hopes of gaining quick approval for AT&T's merger with BellSouth, as the march continues toward resurrecting the "glory" days of the telecom oligarchy.
Many observers appear to feel that these concessions, relating to a number of different areas of contention, will prompt quick FCC approval, perhaps within days.
Unfortunately, from any sort of realistic perspective, the concessions being discussed are laughable in terms of their actual impact. They would have little if any lasting effect, and if history is any guide, will be used as excuses to justify future anticompetitive behaviors.
In particular, AT&T's offer to abide by some Internet "network neutrality principles" for a reported 24 months is utterly worthless. The sorts of abuses that genuine network neutrality legislation is meant to prevent are not a matter of the next two years, they are serious risks that would likely have negative repercussions for decades.
It is crucial that we do not allow these sorts of sucker-bait offers to distract us from the critical need for effective Internet network neutrality legislation at the federal level.
More on this next year. Have a great New Year's Day!
Greetings. As 2006 draws to a close, I wanted to review three issues from this year that are likely to be of considerable note in 2007. One is a bizarre blast from left field (or more precisely "right field"), the next is a pressure cooker data problem that we must resolve soon, and the last demonstrates how anti-piracy efforts can cross the line from reasonable to arrogant and potentially dangerous.
The latter two of these topics may cry out for legislative attention if voluntary approaches continue to be impotent -- and with the new Congress coming into power we may have our best shot of accomplishing something positive on the federal level if legislation indeed becomes necessary.
I realize that many people shudder at the prospect of legislation, fearing that it may make matters worse, that lobbyists will warp beneficial efforts into twisted mutations of intent, and similar concerns. These are indeed real risks, but we're also seeing the increasing risks of allowing important technology issues that affect society at large to be determined solely by corporate entities who -- quite naturally and understandably -- have their own agendas and priorities. Again, I'd prefer to see things done on a voluntary basis, but we may have to bite the bullet and give legislation the old college try.
But onward to the issues ...
OK, what the blazes is Newt's Muzzle? A couple of weeks ago, former Speaker of the House Newt Gingrich started spouting off (first in a speech and just a few days ago on NBC's "Meet the Press") about how useful it would be to censor the Internet. The example he's using (for now) is "jihadist" Web sites, and he'd like a panel of federal judges to decide which sites would be "closed down."
Outside of showing his true colors when it comes to freedom of speech issues, Newt is also displaying a woeful lack of understanding of the Internet and how essentially impossible (and counterproductive) attempts at censorship really are in this environment.
The UK Guardian asked me for an op-ed on this topic, and it went up on their Web site a few days ago as "Can Newt Nix the Net". Rather than my taking much more space discussing the matter here, if you're interested in Newt's thinking (and my views on the Internet censorship topic in this context), please visit that link.
Even though Internet censorship (despite the help of U.S. technology companies that provide systems to foster its deployment) is ultimately ineffective, it remains a tremendous waste of time, resources, and human creativity, and it distorts communications in ways that are both unnecessary and liable to trigger dangerous backlashes. This is an issue that will only become more important in 2007 and beyond.
The data retention controversy -- the battle to determine how much data is reasonable for search engines and other entities to maintain on their users -- is becoming ever more a red flag issue. In 2006 alone we saw the specter of the feds going after Google data in DOJ vs. Google, AOL releasing privacy-invasive search keyword lists, and issues of Chinese use of U.S. company Internet records to track dissidents, among other similarly distressing activities.
The concerns in this area go way beyond Google, but as the most powerful player in the Internet search industry, Google has a special responsibility to be a leader, not only by fulfilling their "don't be evil" slogan (and I do believe Google's motives are benign) but also by not creating infrastructures that allow others to do evil. It is in this latter respect that it appears Google "talks the talk" when it comes to concern about how their data could be abused by outsiders, but hasn't "walked the walk" by taking sufficient definitive steps to make such abuse impossible.
Again, I'd prefer that this entire area (industry-wide, not just Google) be dealt with on a voluntary basis. But as I've discussed in detail over at the California Initiative For Internet Privacy and links referenced there, if voluntary approaches don't work we may have to take the next step, either at the California initiative level or -- given the upcoming changes in Congress -- perhaps at the federal legislative level (an option that did not reasonably appear to be on the horizon when I wrote the existing CIFIP essay). While some of my reservations about the California state legislature might apply to Congress as well, it is undeniable that a federal approach to these issues could be far more effective -- if, and only if, we need to choose the legislative course.
This is a complex area, with the competing goals of mandated data destruction to protect users' privacy, and the desires of governments to mandate data retention, continuously at odds. We have a tremendous amount of work to do to reach a reasonable outcome.
There's been a lot of discussion about the anti-piracy features in Microsoft's new "Vista" Windows operating system (see this blog entry for example). I've had a number of very friendly conversations with MS executives regarding the issues surrounding their anti-piracy implementations, and in particular their new ability to functionally "hobble" Vista systems that they believe are pirated.
The more I've considered this, the more unreasonable and hazardous this functionality appears to be. It turns the presumption of innocence on its head -- you have to take affirmative steps to prove to Microsoft that you're not a pirate if your system appears on their suspect hit list. As we know from Windows XP, there are all sorts of ways that honest consumers can end up with systems running cloned copies of the OS (often installed by repair depots to replace trashed copies of the original system after disk failures, for example).
Many consumers don't even realize the difference between the hardware and operating system of their computers. Many will ignore the warning messages that MS will send before triggering a system hobble, assuming that the messages don't apply in their cases, or that they're phishing or virus come-ons. The mere existence of the mechanisms to initiate the hobbling may represent an attractive attack vector for destructive hackers, who might well get their jollies by shutting down a few thousand (million?) PCs at a time.
Vast numbers of these computers will be in highly important applications in business, health care, government, and the military. Yes, Microsoft says you're not supposed to use them for critical applications. But we know what the real world looks like, and even the definition of "critical" can be nebulous.
Even more to the point (and this also relates to the data retention issues above) it is extremely problematic to assume that it is even reasonable for individual corporate entities to have total ad hoc, carte blanche authority to make these decisions on their own, decisions that technologically have an enormous and ever increasing impact on individuals and society at large.
I might add that while the new Microsoft anti-piracy systems are of particular concern, there are other anti-piracy technologies being deployed that carry similar risks, including but not limited to a range of upcoming Digital Rights Management (DRM) systems.
I keep saying "voluntary is best" and I mean it. In all of these topic areas I've discussed, voluntary approaches are always to be preferred. But in our society, a key role of legislation is to help provide mechanisms for "power-sharing" in situations like these, if voluntary and cooperative approaches prove to be failures.
We are all part of this. We can sit on our hands and watch as mute spectators -- or we can get our hands dirty by reaching directly into the innards of the machines -- figuratively speaking -- and helping to make sure that these systems serve not only their immediate masters, but society's requirements as well.
None of this will be trivial, of course. But to quote the great animated philosopher "Super Chicken" -- "You knew the job was dangerous when you took it."
Have a great holiday season, and all the best for 2007.
Take care, all.
Greetings. A New York Times article today explores the problem of Web-based "pop-up" ads being used to artificially inflate Web traffic.
I'd like to point out a potentially much more serious problem related to pop-ups that can access arbitrary Web sites -- they could be used in ways that might land innocent Web users in major legal trouble.
The issue of sites triggering unsolicited access to other sites is not new. In a message over a year ago ("Google's new feature creates another user privacy problem"), I discussed how Google's triggering of top item "prefetch" in returned search results could result in Firefox browsers visiting the referenced site -- and collecting any associated cookies -- without users' knowledge (I also suggested ways to prevent this behavior).
The essential problem is that Web logs that record users' access to sites would record such visits as if they had been voluntarily initiated by those users. If those destinations happen to be sites hosting various forms of "illicit" materials -- the kinds of sites that government or other investigations might target by digging through associated access logs -- well, you can imagine the possible complications.
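To illustrate why the logs can't settle the question of intent, here is a minimal sketch (the log-formatting function, IP address, and URLs are hypothetical illustrations, not drawn from any real server). Firefox marks its prefetch requests with an "X-Moz: prefetch" request header, but the standard Apache "combined" log format never records request headers beyond the referrer and user agent -- so a prefetch and a deliberate click produce identical log lines:

```python
from datetime import datetime, timezone

# Hypothetical sketch: format two requests -- one a deliberate click, one a
# browser-initiated prefetch -- as Apache "combined" log entries. The only
# distinguishing signal (Firefox's "X-Moz: prefetch" request header) is not
# part of the combined log format, so the two entries come out identical.

def combined_log_line(ip, when, path, referrer, user_agent):
    ts = when.strftime("%d/%b/%Y:%H:%M:%S %z")
    return (f'{ip} - - [{ts}] "GET {path} HTTP/1.1" 200 5120 '
            f'"{referrer}" "{user_agent}"')

when = datetime(2006, 12, 4, 12, 0, 0, tzinfo=timezone.utc)
ua = "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20061010 Firefox/2.0"

click = {"ip": "203.0.113.7", "when": when, "path": "/some/page",
         "referrer": "http://www.google.com/search?q=example",
         "user_agent": ua}
prefetch = dict(click)  # same logged fields; the X-Moz header never reaches the log

print(combined_log_line(**click) == combined_log_line(**prefetch))  # → True
```

From the log's perspective, then, there is simply no way to tell the two apart after the fact.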
Google's prefetch behavior is an example of a well-intended feature with unfortunate negative side effects.
On the other hand, the sorts of nefarious pop-ups described in the NYT piece have much greater potential for intentional, serious damage, since they can be far more flexible and directed than simple Web prefetches, and so could put innocent consumers at even greater risk. They might not only access pages that could get people arrested (perhaps c-porn?), but also download files that could trigger RIAA and/or MPAA "automatic" lawsuits, or any number of other nightmare scenarios.
It's fair to ask why anyone might want to set loose such technical monsters on innocent victims. The simple answer is that there are quite a few people out there who just want to score a point -- to prove that they can do it -- plus of course the sick minds who enjoy watching other people suffer.
If nothing else, this specter is yet another reason to routinely block all pop-ups and to disable browser prefetch as appropriate. Most of all, it is a reminder to authorities that the mere presence of particular entries in subpoenaed Web logs does not necessarily mean that those entries accurately represent user intent. In many cases you may actually be looking at victims, not perpetrators.
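For Firefox users specifically, the link-prefetch behavior can be switched off via the "network.prefetch-next" preference, either through about:config or in a user.js file (preference name current as of Firefox 2 -- check your version's documentation):

```
// user.js -- disable link prefetching in Firefox
user_pref("network.prefetch-next", false);
```

Pop-up blocking is enabled by default in current Firefox and IE releases, but it's worth verifying that it hasn't been turned off.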
Greetings. A story is making the rounds right now regarding FBI use of cell phones as remote bugs. I originally wrote about this concept in my PRIVACY Forum in 1999 ("Cell Phones Become Instant Bugs!") so the issue is real, but we still need to bring the current saga back down to earth.
This discussion doesn't only relate to "legal" bugs but also to the use of such techniques by illegal clandestine operations, and applies to physically unmodified cell phone hardware (not phones that might have had separate, specialized bugs physically installed within them by third parties).
There is no magic in cell phones. From a transmitting standpoint, they are either on or off. It is true that many phones have an alarm feature that permits them to "wake up" from a seemingly "off" state. However, this is not a universal functionality, even in advanced phones such as PDA cell phones, which now often have a "totally off" mode available as well.
It is also true that some phones can be remotely programmed by the carrier to mask or otherwise change their display and other behaviors in ways that could be used to fool the unwary user. However, this level of remote programmability is another feature that is not universal, though most modern cell phones can be easily programmed with the correct tools if you have physical access to the phones, even briefly.
But remember -- no magic! When cell phones are transmitting -- even as bugs -- certain things happen every time, and the alert phone user can often notice them.
First, when the phone is operating as a bug, in almost all cases regular calls can't be taking place. A well designed bug program could try to minimize the obviousness of this by quickly dropping the bug call if the phone owner tried to make an outgoing call, or by dropping the bug connection if an incoming call tried to ring through. But if the bug is up and running, that's the only transmission path available on the phone at that time, for the vast majority of currently deployed cell phones.
New "3G" phones have the capability of running very high speed data -- in which additional voice channels could be simultaneously transmitted at full speed along with the primary call (conventional GSM data channels -- GPRS/EDGE -- typically block calls while actively transmitting or receiving user data). But this is pretty bleeding-edge stuff for now, and not an issue for the vast majority of current phones, especially in the U.S. at this time.
Of course, if a cell phone is being used as a remote bug, the odds are that the routine conversations through that phone are also being monitored, right? So this "one call at a time" aspect isn't as much of a limitation to bugging as might otherwise be expected.
Want to make sure that your phone is really off? Taking out the battery is a really good bet. Don't worry about the stories of hidden batteries that supposedly can be activated remotely or with special codes. The concept makes no sense in general, and there just isn't room in modern cell phones for additional batteries that could supply more than a tiny bit of added power, if any (e.g., enough juice to keep phones' internal clocks going during main battery replacements, but nowhere near enough to be practical for transmitting purposes).
But if your battery seems to be running out of juice far too early (despite what the phone's battery status display might claim), that might be an indication that your phone is being used to transmit behind your back (though a worn out battery or inaccurate battery status display could also be the culprits).
Another clue that a phone may have been transmitting without your permission is if it seems unexpectedly warm. You've probably noticed how most cell phones heat up, especially on longer calls. This is normal, but if you haven't been on any calls for a while and your cell phone is warm as if long calls were in progress, you have another red flag indication of something odd perhaps going on.
Finally, if you use a GSM phone (like the vast majority of phones around the world, including Cingular and T-Mobile in the U.S.) you have a virtually foolproof way to know if your phone is secretly transmitting in voice mode. You've probably noticed the "buzzing" interference that these phones tend to make in nearby speakers when calls or data transmissions are in progress. A certain amount of periodic routine communications between cell phones and the networks will occur while the phones are powered on -- even when calls are not in progress -- so short bursts of buzzing between calls (and when turning the phones on or off) are normal.
But if you're not on a call, and you hear a continuing rapid buzz-buzz-buzz in nearby speakers that lasts more than a few seconds and gets louder as you approach with your phone, well, the odds are that your phone is busily transmitting, and bugging is a definite possibility. Note that this particular test is much less reliable with non-GSM phones that use CDMA (e.g. Sprint/Verizon phones), since CDMA's technology is less prone to producing easily audible local interference. This strongly suggests that CDMA phones may be preferred for such bugging operations. A variant form of CDMA (called "WCDMA") can be used for the high speed data channel and voice calls on the new 3G GSM backwards-compatible phones. Since additional voice channels could theoretically be encoded onto that data stream as I mentioned above -- which would be harder to detect via interference than an ordinary GSM voice channel -- this is a technology that will bear watching.
Most of this discussion applies to bugging in real time. If "delayed" bugging is acceptable, there is another approach available that would be more difficult to detect -- record ambient audio from the phone mic and store it in the phone's memory in compressed form, then upload it en masse later. Modern phones have plenty of available memory, especially ones with cameras, mp3 capabilities, and the like. The processing requirements of a delayed bug would probably be beyond the capabilities of some low-end phones, but even most entry-level phones are relatively powerful these days.
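To get a feel for the storage a delayed bug would actually need, here's a rough back-of-envelope calculation. The codec rate is an illustrative assumption (around 13 kbps, in the neighborhood of GSM's full-rate speech codec), not a measurement of any particular bugging implementation:

```python
# Rough sketch: how much phone memory would a covert, store-and-forward
# audio bug consume? Assumes a ~13 kbps speech codec (roughly GSM full
# rate); real codecs and compression ratios will vary.

CODEC_KBPS = 13          # kilobits per second of encoded speech (assumed)

def megabytes_per_hour(kbps=CODEC_KBPS):
    bits = kbps * 1000 * 3600        # bits recorded in one hour
    return bits / 8 / (1024 * 1024)  # convert to megabytes

hours = 8  # e.g., a full workday of continuous ambient audio
mb = megabytes_per_hour() * hours
print(f"{hours} hours of audio at {CODEC_KBPS} kbps ~= {mb:.0f} MB")
# → 8 hours of audio at 13 kbps ~= 45 MB
```

On the order of 45 MB for a full day of audio -- well within the capacity of a typical camera phone's memory or removable card, which is exactly why this approach is plausible.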
When the recorded audio was uploaded, all of the transmission factors mentioned above would come into play, but since the transmission time would be shorter, this would be harder to detect. Probably the biggest giveaway to this type of bugging would be battery drain, which would typically be quite considerable even in a voice-activated recording (VOX) mode. So my comments above about unusually poor battery performance would be especially applicable in this case.
The odds of most people being targeted for bugging are quite small. But it's always better to know the technical realities. Don't be paranoid, but be careful.