Greetings. Several groups have just proposed to the FTC that a "Do Not Track" Internet list be created, somewhat in the vein of the "Do Not Call" phone solicitation "block" list.
While it's far too early to make definitive statements about the details of such a concept, you might be surprised to hear that my initial reaction to the idea is running significantly toward the negative.
Given my long-standing concerns regarding what I call "data creep" and diffusion of collected user transactional data into increasingly disparate "domains" of use, one might expect me to be enthusiastic about such a list. I'm not.
I'll have lots more to say about this if the plan seems to gather any traction, but my gut feeling is that the concept is ripe both for the "be careful what you wish for" and the "law of unintended consequences" booby prizes.
I touched on some of this back in September in Blocking Web Ads -- And Paying the Piper, making the point that if we're really ready to fundamentally restrict the advertising basis of most Web services today, we'd better be ready to pony up the bucks for the Internet-based services that we now get for free. I'm not convinced that most people really want to go in that direction. I know that I don't.
Behavioral tracking can definitely be abused. But frankly, the concept of a massive government-mandated opt-out list applied to that space gives me cold shivers. I can offhand think of a bunch of ways that such a plan -- for a variety of technical reasons -- could essentially blow up and make matters far worse rather than better, and that's not even taking into account the potentially derailed business models, resulting realignments, and out-of-pocket payments that would newly become the burden of consumers.
Perhaps the push for "Do Not Track" will serve as a wake-up call for Internet firms, reminding them that they need to be more proactive in terms of self-limiting their collection and use of such data -- to applications that will not be perceived as abusive -- or else risk the government moving in and throwing a massive monkey-wrench into their operations.
Overall, I'd much rather see the industry seriously self-regulate this area, because all else being equal -- to horribly mangle a classic movie line -- "We don't need more stinkin' lists!"
And that's the truth.
Greetings. I was watching the fine old Vincent Price / Roger Corman / Richard Matheson 1961 film collaboration Pit and the Pendulum last night, and somehow my thoughts kept coming back to the current controversies over the Bush administration and torture.
While the movie strayed far from Poe's original work, it still reminds one that it doesn't take an advanced degree in fingernail pulling to recognize torture when you see it.
Yet we are today faced with the sorry spectacle of Bush's nominee for Attorney General, Michael B. Mukasey, refusing to show enough guts to admit that waterboarding -- simulated drowning -- constitutes torture. He says it's repugnant... but won't go further than that, and won't assert that it's illegal under U.S. law.
Let's be very clear about this. The technique dates back at least to the Spanish Inquisition (speaking of pits and pendulums, indeed) and has long been prosecuted as torture in U.S. military courts -- since the Spanish-American War, in fact.
The kind of hairsplitting, seemingly amoral waffling being demonstrated by the AG nominee suggests that he would fit in well with the Bush administration, but that he's unsuitable to serve the people of this nation as Attorney General.
You really need only ask one question to put these kinds of torture issues into perspective. What would the U.S. reaction be if we learned that captured U.S. prisoners were being subjected to similar "enhanced interrogation techniques" at the hands of our adversaries? Would we say "all's fair in love and war?" -- or would we be screaming "bloody right that's torture, you animals!"
You know the answer as well as I do. Case closed.
Greetings. Man, those amusing guys at the USPTO are still busily approving inane technology patents. It may be tougher these days to patent perpetual motion machines (you have to provide a working model now, apparently) but when it comes to the Internet the sky's the limit!
Amazon has just been granted a patent for placing a search string after the domain part of a URL. Say what?
It seems that the supposedly "unique" part of this "invention" is the concept of having the search string follow the specified host domain without any prefix characters. So, instead of searching for foo with:
or something similar, the Amazon patent appears to cover the case of:
where foo in this example, presumably not matching any direct path on the site, is taken as a search string since it does not start with a predefined character prefix sequence, which if present would indicate a non-search string.
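A hypothetical sketch of the dispatch logic the patent appears to describe may make this clearer. The path names and prefix characters below are invented for illustration; they are not Amazon's actual resources or conventions:

```python
from urllib.parse import urlparse

# Invented examples for illustration only -- not Amazon's actual paths
# or prefix conventions.
KNOWN_PATHS = {"/help", "/cart"}   # assumed real resources on the site
PREFIX_CHARS = ("-", "~")          # assumed "this is not a search" markers

def classify(url):
    """Treat a bare, unmatched path after the domain as a search string."""
    path = urlparse(url).path
    if path in ("", "/"):
        return ("page", "home")
    if path in KNOWN_PATHS:
        return ("page", path)            # a real resource: serve it
    if path.lstrip("/").startswith(PREFIX_CHARS):
        return ("page", path)            # prefixed: explicitly not a search
    return ("search", path.lstrip("/"))  # example.com/foo -> search for "foo"

print(classify("http://example.com/foo"))   # ('search', 'foo')
print(classify("http://example.com/help"))  # ('page', '/help')
```

That's the entire "invention," as far as I can tell: a fall-through branch in a URL handler.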
Wow! Do the patent examiners really read those applications any more, or are patents being granted based on the Captain Peter Peachfuzz "dartboard" weather forecasting technique?
Greetings. A few days ago, I reported on apparent tampering with Internet traffic by Comcast.
Caught with their pants down by Associated Press traffic tests, Comcast has now admitted to what they call "delaying" traffic -- in this case a euphemism for what network engineers would call the spoofing or forging of traffic to disrupt the expected normal communications protocol sequences.
For those of us who have long been concerned about assuring Network Neutrality, this case represents but the tip of the iceberg in terms of the sorts of abuses that we can likely expect from unfettered ISPs.
If ISPs have problems with the traffic generated by a particular class of applications, the correct approach would be to work with the network community toward solutions, not to act like an incompetent hacker who couldn't care less about disrupting network protocols designed with considerable effort over decades.
This case isn't just a Network Neutrality smoking gun, it's a veritable bazooka -- and it's aimed right at ISP customers -- that's you and me.
Good luck getting out of the line of fire.
Update: After the publication yesterday of the story described below by the New Times, all charges in the case were dropped this afternoon, and presumably the subpoenas involved are now nullified. However, the enormous temptation that detailed, non-anonymized Web logs and other server-based user data represent for unscrupulous outside demands and subsequent abuse remains ever present, and is generally becoming even more acute as server-based applications proliferate.
Greetings. I and many others have long warned how Web logs could be abused by overzealous law enforcement. Now comes a prime example that is sweeping in its scope.
The Phoenix New Times has broken its silence regarding August grand jury subpoenas it received, which demanded essentially total access to all information in its Web site logs, including explicitly all information about every single visitor to the site, beginning in January 2004.
How this will all turn out is unclear as of now. The link above discusses the particulars of the case in detail, and I won't attempt to summarize it all here. But this is definitely a case worth following, especially if you still are of the belief that such outside abuse of logged Web data on a large scale is merely a theoretical concept.
If you run Web sites, please remember this and please remember it well: Any information that you collect regarding visitors to your sites can be demanded by a court or other officials for any reason that they can push through the legal system (in the light of day or in secret) -- your published privacy policies be damned.
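One practical corollary: data you never retain in identifiable form can't be usefully demanded. A minimal sketch of one common minimization step -- truncating the final octet of IPv4 addresses before log lines are written -- assuming Apache-style log lines (the sample line is invented):

```python
import re

def anonymize_line(line):
    """Zero the last octet of a leading IPv4 address in a log line."""
    return re.sub(r"^(\d{1,3}\.\d{1,3}\.\d{1,3})\.\d{1,3}",
                  r"\1.0", line)

line = '203.0.113.77 - - [25/Oct/2007:10:00:00 -0700] "GET / HTTP/1.1" 200 512'
print(anonymize_line(line))  # leading address becomes 203.0.113.0
```

This is a sketch, not a policy; real minimization also means thinking hard about retention periods and about whether you need per-visitor logs at all.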
All it may require is one serious breach of privacy trust with your users to shake their confidence in your operations forever, and even for the largest and most powerful of Web services, users can desert you at the flick of a URL.
Trust is at the heart of users' relationships with the Web services that they patronize. If such services put themselves in a position where they and their users may be victimized by unreasonable outside demands for log and other related data, the risks to the Web and Internet at large are immense indeed.
Greetings. According to an Associated Press item replete with an unusual amount of technical detail, AP testing of file sharing, using a controversial media file known as "The Bible" (yes, that Bible), has strongly suggested that Comcast is using blocking technologies to impede the sharing of even small, completely copyright-free materials. You can read the details yourself.
What's particularly noteworthy about this story is that Comcast is so far refusing to explain what's going on -- further fueling suspicions that something far more insidious than unintentional technical glitches is involved. In fact, the sporadic nature of the reported results suggests that the particular (in this case, religious!) content wasn't specifically the target of "traffic shaping" or blocking, but rather that certain classes of traffic in general are being distorted in various ways by this ISP, probably by technologies that Comcast insists they "rarely reveal" for various reasons. Uh huh.
It also demonstrates how end users can usefully contribute toward a better understanding of the state of the network and operational network neutrality (please see: Proposal for Breaking the Internet Network Neutrality Deadlock for more on this aspect).
Additional information and clarifications regarding the Comcast situation in particular would of course be welcome.
Blog Update (October 23, 2007): Comcast Admits Interfering with Internet Traffic
Greetings. As I mentioned yesterday, the YouTube/Google Digital Rights Management (DRM) video "fingerprinting" system (called "YouTube Video Identification") is now operational.
Given the vast scale of YouTube, this is taking video fingerprinting -- and DRM in general -- into an entirely new deployment realm, and it should be fascinating to see how well it works in practice on YouTube.
In furtherance of a better public understanding of this process, I've set up a form for the reporting of perceived false positives from the YouTube fingerprinting system.
I'd appreciate it greatly if persons who feel that their submitted videos have been inappropriately or incorrectly flagged as "forbidden" by the system would use the form to let me know as many particulars as they feel comfortable providing.
Details are at the PFIR YouTube "False Positive" Reporting Form itself.
Thanks very much.
Greetings. Just to update, Democrats in Congress continue their disgraceful march toward selling the American people down the river, by rolling over and playing dead for the Bush administration on the wiretapping bill.
I expected such attitudes and games from the GOP of course, but the Democrats' performance on this matter is abysmal, even given that their majority in the Senate is very thin.
Not only are the civil liberties and basic rights of Americans being put at risk by these Congressional actions, in a manner more fitting of the old Soviet Union than the United States, but Congress seems ready to grant retroactive immunity to the telephone giants for their blatantly illegal handing over of customer records and related clearly illicit activities.
Crafting a wiretapping bill that would be effective against foreign targets without decimating American rights is certainly possible. But Congress doesn't have the fortitude or ethical compass to protect our rights any more. Just say the word terrorist and everything else goes out the window in a blast of political one-upmanship.
I do hope that you'll remember Congress' actions in this regard come the next election.
Greetings. Here we go again. FCC Chairman Kevin Martin wants to move in the next couple of months to loosen restrictions on media ownership. When the previous chairman, Michael K. Powell, tried this several years ago, it bounced back in the Commission's face from the courts.
I've gotta admit it, I just can't get worked up with sympathy for the giant media companies' claimed desperate needs to get even bigger than they are now. For some stubborn reason, I keep thinking that we should be moving in exactly the opposite direction!
I guess I just can't see the big picture. Or perhaps I just don't really understand the true dimensions of world class greed in action.
Greetings. Google? Telecom? Big Brother Government? No, not in this posting. It's time for a Puppy Break! Resistance is futile.
A friend of mine found an unidentified stray dog on the street last Friday afternoon, desperately hungry and very pregnant. When I saw it Sunday afternoon, I declared (using my total absence of formal veterinary training to its utmost) that puppies could be arriving at any time.
The next morning it began. 1 ... 5 ... 9 ... 12 ... 15! All alive and kicking except one who was apparently stillborn.
And below you can see the beaming mother and children.
Of course, these turned out to be the proverbial lucky dogs -- if timing had been a bit different on Friday, she would have probably had her pups in a cold, damp alley. And if people would take responsibility for their pets in the first place, we wouldn't be faced with so many desperate animals wandering the streets.
But if you live around L.A. and are interested in a beautiful puppy or two a couple of months from now, you know how to reach me!
Greetings. About a month ago, I briefly discussed the issues surrounding the "fingerprinting" of media content by sites in an attempt to find and remove copyrighted works posted without the rights holders' permissions.
As I implied then, this is a complex issue, which if handled incorrectly could trigger major backlashes.
We're about to see an example of how this plays out in the real world, as word comes that YouTube (Google) has activated their content fingerprinting control system (called "YouTube Video Identification").
Google appears to be well within its rights in deploying this technology. After all, it's Google's responsibility and mandate to determine which materials submitted by users will be hosted on Google-owned systems, and the associated notification/dispute regime appears to be well thought out, at least in theory.
However, it will be fascinating to see what the content matching parameters really are, and how easily they can be thwarted. We can be sure that folks are already uploading various carefully designed videos to probe these very facets.
I've argued that in the long run, DRM is doomed. But for now, there will be plenty of casualties and collateral damage on all sides of this issue.
In the current case, there is likely to be an ongoing battle of wits and skill (regarding content fingerprinting) between YouTube and certain of its users for quite some time to come. This will be but one aspect of the much broader and unfortunately inevitable DRM wars.
Interesting times, indeed.
Greetings. Now it becomes crystal clear why the phone companies have been begging to be indemnified for past participation in illegal wiretapping and subscriber transactional data disclosures -- it's obviously been going on massively for years -- as many of us have long suspected.
Today's Washington Post explores Verizon's admission that they've been handing over customer calling data without court orders for ages. Perhaps even more interesting is the news that the feds had also wanted the numbers being called by the people called by the targets of interest. That is, if person A was the target, and he called entity B (which might be a person or a business, of course), investigators also wanted the lists of everyone being called by entity B. When you work out the math, this is an utterly astounding way to drag vast numbers of innocent persons into such investigations. Verizon claimed not to have the necessary data to provide this secondary "community of interest" data, but the very fact that the government requested it speaks volumes. And who knows what was going on with AT&T?
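The scale of that math is easy to sketch. Assuming, purely for illustration, that a target calls 100 distinct numbers and each of those entities in turn calls 100 more:

```python
# Illustrative fan-out arithmetic only -- the per-person figures are
# assumptions for the sake of the example, not numbers from any
# disclosed program.
direct_contacts = 100        # numbers called by target A
contacts_per_contact = 100   # numbers called by each entity B
community_of_interest = direct_contacts + (direct_contacts * contacts_per_contact)
print(community_of_interest)  # prints 10100
```

Over ten thousand people and businesses potentially swept into an investigation from a single target, the overwhelming majority of whom have no connection to anything at all.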
Actually, we do seem to know a bit more about Qwest and AT&T now -- though both are currently refusing to answer Congressional queries about their participation in the various illegal programs. Their defense? "The government prohibits us from telling you the truth." Gotta love those guys.
Now it turns out that according to the former CEO of Qwest and other sources, the feds were busily laying the groundwork for these illicit operations months before 9/11. Fascinating.
In light of all this, I can't help but wonder if my thoughts last year about How to Tap Every Phone Call in the Country might be far less speculative than I thought at the time. We have more evidence than ever that the sensibilities behind interactions between government and telcos might have encouraged such an approach.
Do I really think that wiretapping on such a scale is going on? No, I don't. But what's disturbing is that I believe that the federal government -- our federal government -- would not be unwilling to explore such an approach.
That's pretty scary, in and of itself.
Greetings. The Transportation Security Administration is now testing its latest "security" toy at various airports, another full body scanner (a millimeter wave device this time) that takes detailed naked photos right through passengers' clothes. And naturally, we're hearing the usual promises that there's no privacy invasion. The images aren't saved, says TSA. The personnel viewing the images aren't at the security checkpoints, and faces are obscured, we're assured.
Sorry gang. TSA can promise whatever they want, but these sorts of privacy claims are utterly worthless. We have no way to know what the machines are really doing in terms of storing or transmitting even unaltered images. We've heard the same kinds of promises before regarding law enforcement use of CCTV systems, and they've been turned into voyeuristic tools nonetheless.
Having the viewer of the images at a remote location doesn't prevent abuse. All it takes is a cell phone camera, and it will be no time at all before copies of the more "entertaining" body images (even if theoretically "anonymous") are pasted up in people's offices and homes, or circulating on the Net.
The abuse is real even if the naked images aren't labeled by name. And odds are that enterprising folks will find ways to associate names and images somehow as well -- the market for celebrity-related goodies is so intense that this is bound to happen. These body image scanners, like others that TSA has been experimenting with, are just another privacy abomination masquerading as security. The only good thing about this new one is that unlike TSA's other pet peeping tom body scanner, it's not using x-rays. Whoopee.
Greetings. Ready to turn over the keys of your vehicle to the cops, or that clever hacker in the next lane? How about that creepy guy following you on a lonely country road?
GM apparently plans to make this all possible. It's been announced that nearly two million of their 2009 model vehicles (those with OnStar installed) will be equipped with the capability to be remotely shut down to idle via OnStar commands at the request of law enforcement.
This new capability will also create an irresistible challenge to the hacker community -- and perhaps criminal organizations -- to try to find ways into the OnStar system for triggering this fun, one way or another. It's impossible to hack OnStar? Would you bet your life on that?
Unfortunately, this is yet another laudable idea that's being "driven" into the marketplace before all of the negative ramifications have been thought through or fully understood. And how long will it be before such systems are mandated, one might wonder?
OnStar has long been the subject of various privacy concerns. This new capability appears to be the most serious privacy-related issue for OnStar to date.
Greetings. It looks like Microsoft may already have some significant quality problems with their heavily hyped HealthVault.
I received an e-mail last night from a reader who was disgusted to find that some completely valid queries to the HealthVault search engine -- mentioning bodily parts or bodily functions -- returned extremely high percentages (sometimes almost 100%) of porn keyword "sucker" pages (porn pages that have been "seeded" with all manner of likely keywords). I won't offer example search strings here in the interests of good taste, but I've confirmed this situation myself.
In fact, this person noted getting masses of porn results starting with their very first HealthVault search. They were stunned that Microsoft's quality control and presumed filtering of results for health relevance were so defective on a highly touted health-specific search engine deployed for the general public. I agree.
For comparison purposes, a test of the same searches on Google also yielded a lot of porn hits, but overall more relevant hits were returned, and Google isn't promoting their main search engine as having a health focus.
There is a potential bright side to this situation. I'm all in favor of using encryption whenever possible on the Net, and HealthVault uses SSL crypto for searches in both directions. So finally there's a way to search for porn on the Net with better privacy!
All Microsoft needs to do now is simply rebrand their service as "PornVault" -- now that's a winner.
Greetings. According to the New York Times, Democrats in Congress -- still terrified of looking soft if they don't give the Bush administration essentially everything it wants to spy on communications, including possibly indemnifying the telcos for their illegal wiretapping already performed -- are apparently ready to roll over and sell some of our most precious standards down the river. Gutless and spineless are the most polite descriptions I can come up with right now.
Greetings. VHS is dead. Its ghost lingers in our homes and in cobweb-filled corners of electronics retailers, but make no mistake, VHS recording is rapidly going the way of the dodo. And this passing is being used as an excuse for one of the biggest consumer ripoffs in technology history -- with our friendly neighborhood cable television services (in their various incarnations) chuckling mightily at the situation.
When we first started hearing about Digital Rights Management (DRM) systems planned for digital television, there was a great deal of concern, even though the planned focus appeared to be on "premium" programming (HBO, Showtime, Pay Per View - PPV, and so on). Much of this seemed rather academic anyway, since consumer devices that would be affected by such systems were still largely vaporware.
But that situation has changed rapidly, and now cable firms (and their fiber, satellite, IPTV, and other variations -- I'm calling them all cable) have got their subscribers by the you know what, and unless the FCC (fat chance) or Congress (perhaps a better chance) gets moving, consumers will see their hard-won rights to record and save television programming fade into history. It's happening right now.
The Supreme Court "Betamax" decision decades ago established the fair use rights of consumers to make copies of television programs, and save them on videocassettes. But with the demise of VHS, the newly ascendant technology is Digital Video Recorders (DVRs), such as the TiVo and its various cruder generic cousins (the latter typically cable company supplied).
DVRs allow saving of programs on their internal hard drives, but there's a problem. Video takes a lot of bits, and hard drive space is limited. So the trend now is to find ways for consumers to save programs to external media and devices (such as DVDs or PCs), much as they could with VHS tapes. Direct DVD recorders are appearing, as are newer generation TiVos that will shortly have the capability enabled to move programs to PCs and then write them to DVDs.
But many cable firms are trying to thwart these capabilities via DRM, trying to turn back the clock to pre-Betamax days. Their magic wand for this purpose is the Copy Control Information (CCI) byte, transmitted as part of digital cable channels, which impacts any modern device that interfaces directly to a cable system (e.g., through CableCARDs, as with the newest TiVo HD -- and many more such devices are now appearing).
Set CCI=0x00, and the consumer can dump programs off of their DVRs. Set it to 0x02, and the programs are locked down. The device manufacturers must abide by this rule or suffer the wrath of CableLabs -- the cable industry's own version of Dr. Evil's R&D operation.
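The gatekeeping involved is trivial to model. A minimal sketch covering only the two settings just mentioned (the CCI byte defines additional copy-control states, omitted here):

```python
# Only the two CCI settings discussed above are modeled; the byte
# defines other states not shown in this sketch.
COPY_FREELY = 0x00      # consumer may dump recordings to DVD/PC
COPY_RESTRICTED = 0x02  # recordings locked to the DVR

def may_export(cci_byte):
    """Per the rule above: export off-box is allowed only when CCI is 0x00."""
    return cci_byte == COPY_FREELY

print(may_export(0x00), may_export(0x02))  # True False
```

A single byte, set unilaterally by the cable operator, and your expensive recorder's export functions live or die by it.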
Given the power that CCI holds over consumers, one would think that there would be clear standards for how it would be applied to programming. Buzzz! Wrong! In fact, the significant regulations that apply to CCI simply require that digital broadcast channels (that is, over-the-air signals retransmitted as digital cable channels) set CCI=0x00. Beyond that, the regs are essentially silent.
Now, logically one wouldn't be surprised to find cable companies setting CCI=0x02 -- blocking program saving to DVDs, etc. -- for special event programming, PPV, and perhaps even the HBO/Showtime class of premium channels.
What you might not expect to frequently find is cable companies, entirely of their own volition, applying ad hoc CCI blocking to essentially all basic digital channels (other than over-the-air), creating unfair recording capability variations around the country.
For example, Time Warner Cable is generally setting CCI=0x02, blocking the dumping of programs from DVRs, in this expansive manner. There's no evidence that all of these program suppliers have demanded such action. Many of these channels run Cable in the Classroom programming that is specifically licensed to be recorded, saved, and distributed in schools under various terms; CCI=0x02 can directly block such licensed use. Similarly, it seems unlikely that the various C-SPAN channels would demand blocking of program dump functions, yet Time Warner is routinely setting CCI=0x02 for some or all of these channels as well (which may often appear only on digital tiers) on many TW systems.
Since nothing requires TW to be taking this broad control freak approach to DRM on their digital channels, the most likely explanation would seem to be a CYA mentality run amok, subscribers' rights be damned.
Comcast, on the other hand, has reportedly been trending in the opposite direction, with their systems moving toward CCI=0x00 settings for most digital channels, allowing consumer program dumping and external saving.
Question: What possible valid reason can there be for cable subscribers of one company's systems to have vastly fewer recording rights for the same channels, compared with subscribers of another company's cable systems?
Answer: There's no valid explanation for this disparity. It's wacky, wrong, and just plain unacceptable. And as more consumer devices affected by this craziness rapidly deploy in the marketplace, subscribers are going to go ballistic when they discover that the pricey boxes they've bought have key functionality cut off at the knees by cable company edict in many locations.
If the cable industry was smart, they'd collectively start reversing draconian CCI settings right now, and start universally treating their subscribers as individuals to be appreciated, not chattel to be abused. But absent such an enlightened approach from the industry as a whole, it's likely that we're going to have to make it clear to Congress that when it comes to this sort of DRM abuse (to paraphrase Howard Beale in the 1976 film Network): "We're as mad as hell and we're not going to take this anymore!" -- assuming that our cable companies don't try to block this too, of course.
Greetings. A California appeals court has reinstated an age discrimination lawsuit by Brian Reid against Google.
While I am not in a position to comment on the merits of the suit, Brian is a contemporary of mine going back to early Internet days, and the story itself is interesting reading, as are the comments associated with the story.
Greetings. By now you're probably familiar with the story of the woman who died in a Phoenix airport holding cell, chained to a bench, after having an emotional outburst related to missing a flight.
While there's much still to learn about this case, there are some things we know already that are disturbing.
She was on her way to a substance abuse treatment program, and was unaccompanied (if she hadn't been alone, this whole sad event would probably not have taken place). She may, or may not, have been drinking before the incident. Her husband (whom she apparently called before being arrested) desperately tried to inform airport officials that his wife needed assistance and might have become suicidal.
And we know that the police on scene chained her up in a small, unmonitored, locked room, left her unattended, and returned some minutes later to find her unconscious, and then dead, with her shackles implicated as a possible cause or contributing cause of death.
Amazingly, when asked why the room didn't at least have closed circuit camera monitoring so that emotionally disturbed persons wouldn't be left in a vulnerable position -- where they could easily hurt themselves (even fighting with shackles can cause permanent nerve and other injuries) -- the response was that they don't monitor those rooms for privacy reasons.
I find this response laughable. We're constantly told by law enforcement how wonderful it is to have surveillance cameras in every nook and cranny outside our own homes, yet cameras in police stations to monitor situations like the one in Phoenix (or to record interrogations, for that matter) are fought tooth and nail by those same authorities. I wonder why? Perhaps it would be inconvenient to have a record of what goes on in those rooms -- and during those interrogations?
What's more, while in the Phoenix case no Taser was used, we've recently seen an increasing number of cases involving Taser abuse (the Taser is the most popular torture device worldwide, since it can be used without leaving obvious marks). Police agencies have been using the Taser as a quickie control technique, shocking people multiple times even after they're down and handcuffed, and the list of people dying after being Tasered is ever-growing and increasingly alarming.
There are lots of good cops out there. Unfortunately, it doesn't take a lot of bad eggs to give a black eye to entire departments.
More and more, especially post-9/11, we've seen some officers seemingly using terrorism fears as an excuse for their own personal brand of nonconsensual S&M, with Taser abuse and a fetish for overuse of shiny chains and shackles -- even in cases of persons who obviously were in emotional distress and who could have been handled through less extreme means without harm. And as we've seen in the Phoenix case, the unnecessary results can be fatal.
Law enforcement is often called a thankless task, and the many dedicated and well-trained officers deserve our appreciation and respect. But that's not an excuse for abusive or even simply inept or cavalier tactics. Such displays are dangerous to the public, and damaging to the reputation of law enforcement itself. It's a lose-lose situation that's in all of our best interests to avoid.
Greetings. In response to my discussion of The Online Medical Records Trap, I've been asked what would happen if a central medical records system were encrypted in the manner I suggested, where the service provider couldn't access the records even in the face of an outside demand (like a court order) without the user's permission, in the case of the person being incapacitated or unconscious.
There are several rather simple answers to this. The most basic is that depending on a centralized system as the only location where medical records are stored would be incredibly foolhardy. If doctors or hospitals needed access to that data, and their local computers or Internet connections were down, or if the central servers had been hacked or were having other problems (including possible connectivity issues), then patients would be S.O.L. (that is, up the creek without a paddle).
It should be required that doctors and hospitals maintain local copies of patient records, ideally not only on their local computers (the same level of encryption and access control that I propose for central medical records systems would be neither necessary nor desirable on these local systems), but also in hardcopy form.
Yes, I said hardcopy. A hassle that devalues the computerized systems? Yep, but I want my medical records kept locally in a form that doesn't depend on computers or even electricity. I like those manila folders on the shelves, especially living in an area where earthquakes and other natural disasters (with their resulting power outages) are always a possibility. Most other areas also have their own risks of disasters or problems that could make computer-based access to patient records impossible just when they're needed most, especially if those records are centralized and communications are down.
As far as access to a central system is concerned, nothing says that a user couldn't provide friends, next-of-kin, etc. with their access key, or even have it noted on whatever emergency contact information that they hopefully carry routinely. I have a slip of paper in my wallet with a few contact names and numbers for emergency use, mainly in case some idiot wipes me out making a left turn in front of me when I'm riding, but the point is that while carrying around your passwords isn't a great idea in the general case, this is one specific situation where it could make sense.
I should add that it's also wise to include on your contact sheet full information about any allergies or other serious medical conditions that exist so that responders will know about them in emergencies. To depend on access to a centralized medical system for such info in these situations could be disastrous, even if none of the central data were encrypted or otherwise access controlled -- there's no guarantee that the central system would be reachable when you might need it most.
So what does this all boil down to? A centralized medical records system should never be depended upon for anything other than secondary access to medical data, if that. Doctors and hospitals must be required to maintain local copies of patient data since there is no guarantee that central systems will be accessible at any given time, particularly in disaster or other emergency situations.
To help prevent misuse of central medical records systems, all personal medical data on those central systems should only be accessible with the permission of the user or their designated contacts, and should be encrypted in a manner that makes other access impossible. Period. Anything short of this opens up enormous abuse potential.
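To make that encryption requirement concrete, here's a minimal sketch in Python (standard library only) of the kind of scheme I have in mind: the key is derived entirely from a passphrase the patient holds, so the central service stores only ciphertext that it cannot decrypt on anyone's demand -- court order or otherwise. The SHA-256-based stream cipher below is purely illustrative (a real system would use a vetted cipher such as AES-GCM), and all names are my own, but the architectural point stands: no passphrase, no plaintext.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # The key comes solely from the patient's passphrase; the service
    # provider never sees the passphrase and so cannot rebuild the key.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher -- illustration only.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_record(passphrase: str, plaintext: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, "sha256").digest()
    # This dict is everything the central service would ever store;
    # without the passphrase, none of it yields the plaintext.
    return {"salt": salt, "nonce": nonce, "ct": ct, "tag": tag}

def decrypt_record(passphrase: str, blob: dict) -> bytes:
    key = derive_key(passphrase, blob["salt"])
    expected = hmac.new(key, blob["nonce"] + blob["ct"], "sha256").digest()
    if not hmac.compare_digest(blob["tag"], expected):
        raise ValueError("wrong passphrase or tampered record")
    return bytes(a ^ b for a, b in zip(blob["ct"], keystream(key, blob["nonce"], len(blob["ct"]))))
```

Note that this also illustrates the emergency-access point above: a patient who wants next-of-kin to have access simply gives them the passphrase, with no provider involvement required.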
Greetings. Microsoft is rolling out their centralized medical records project -- with the somewhat misleading name HealthVault -- and it's time for consumers to start paying attention to what's going on in this sector -- Google is working along similar lines as well. (Why do I call the HealthVault moniker misleading? Keep reading.)
There is a vast market assumed for centralized recording of every aspect of your medical life, initially through free accounts where you would input the data yourself, but as quickly as possible the intention is to move toward having doctors, hospitals, pharmacies, and everyone else involved in your medical treatment entering the data directly. The federal government is also a big booster of the centralized medical data idea -- a fact that might be enough to give one pause in and of itself.
The selling points for such projects seem obvious enough. Instant access to your medical data for emergencies or other purposes, ease of seeing test results and (in theory) correcting errors, and so on. All good stuff.
But what's not obvious from the sales pitches are the downsides, and they could be serious indeed.
The term HealthVault is misleading because we know by definition that such services will be anything but a vault when it comes to privacy. You can almost hear the conversations at Microsoft where they tried to come up with a name that gave an impression of security and Fort Knox-style impenetrability. And of course, Microsoft is making all the usual claims about encryption and safety -- the same promises we always hear about centralized data systems.
But the big risk in centralized medical data -- arguably the most personal data about any of us -- isn't about whether the servers can be hacked or the communications eavesdropped (though these are real issues, to be sure).
The most serious problem is that once medical data is in a centralized environment, there are essentially no limits to who can come along with a court order (or in the case of the government, as we know, secret orders or illegal demands that can't usually be resisted) for access to that data. Service providers typically have no choice but to comply. The only way to prevent this is for the data to be encrypted in such a way that even the service provider cannot access it without your permission, even with a court order staring them in the face. As far as I know, none of the systems currently in development or deployment take that approach to encryption -- but I'd love to have someone inform me that such techniques would be used. That would change the equation considerably.
Who might want access to your medical data? Insurance companies obviously, and one might expect them to lobby hard for such access, in the name of "reducing fraud and insurance costs" of course. Many employers would also love to get access, to help weed out medically expensive employees and applicants.
Perhaps more ominously, broad "fishing expeditions" by the government -- for research, investigative, and other purposes -- become far easier when medical records are centralized. It's very difficult to abusively search or gather such data in a broad manner when it consists mainly of manila folders in cabinets at your doctors' offices.
But once this data goes online centrally, it's one of those "bingo!" moments for those who would just love to pry into the medical histories of consumers and citizens.
Frankly, if people want to use such centralized systems voluntarily I have no serious objection. However, my gut feeling is that most people signing up won't have a clue about the negative ramifications of these services -- certainly the services themselves won't be trumpeting such shortcomings and risks.
And worse, over time it seems likely that the service providers -- possibly in conjunction with government agencies at various levels -- will move to make such use a default condition (that is, it applies unless you opt out), and ultimately pressure everyone toward a mandatory approach.
There could be a useful role for such centralized medical records services, but only in an environment of laws and related broad privacy protections that simply don't exist now, and don't appear to be forthcoming anytime soon. In their absence, using centralized medical records services at this time, except in very special and limited circumstances, would appear to be unwise and is not recommended.
Blog Update (October 5, 2007): More Regarding the Online Medical Records Trap
Greetings. It's hard to suppress a knowing sort of smile when reading how a configuration error at the Department of Homeland Security created a flood of over two million messages, with private e-mail going to an entire DHS-related mailing list.
This sort of problem is all too familiar to many mail system administrators, and usually it's just an annoyance. But the scale of this screwup, and the fact that it was associated with DHS, makes it difficult to really chuckle.
What's especially noteworthy about this event is how long the flood apparently continued, and that many of those affected -- reportedly mostly security-related recipients and their like -- might have needed to receive important e-mail during that period.
But just like with so much of the Internet these days, tracking down a responsible party to fix an immediate issue when things go wrong, even in the case of very serious problems, can be extremely difficult, even when the organization that is the source of the trouble is known.
Such situations are all the more problematic if the "event" is being generated from a site with a private WHOIS domain listing that hides its identity and/or direct contact information (such private listings are now aggressively marketed by various registrars, including Network Solutions).
There were no serious repercussions from the DHS mail flood today, but the next time we might not be so lucky.
In an Internet that increasingly operates as a massive number of individual fiefdoms, where even basic information about participants may be hidden or considered to be proprietary, how can we assure that technical problems of significant scope will be addressed rapidly and correctly, before major damage can occur?
Without a significant change in the Net's current evolutionary path, it's hard to see how to avoid having these sorts of problems worsen significantly, and perhaps dangerously, over time.
Greetings. Let's take a brief diversion from policy and technical content to note the passing of a truly imaginative writer, Charles B. Griffith, at age 77.
Griffith was a key screenwriter for one of my favorite filmmakers, low-budget master Roger Corman. Griffith wrote the screenplays for some of Corman's most memorable productions, including Creature From the Haunted Sea, Bucket of Blood, The Wild Angels (some great bikes in that one), It Conquered the World, and many more -- perhaps most memorably and best known, the original Little Shop of Horrors (in which Griffith also had some cameo roles and provided the voice of the starring man-eating plant -- "FEED ME!").
Corman usually didn't have much of a budget, but what he lacked in dollars he made up for by hiring talented writers and actors who helped create films that not only were (and remain) entertaining in their own right, but helped to launch many major careers behind and in front of the camera. I have a photo taken in my studio with another of Corman's legendary writers -- I really have to post it sometime ...
If you're not familiar with Griffith's films (or Corman's work in general), do yourself a favor and rent some, or buy the inexpensive DVD compilations that are available. Probably for less than what a modern film production can spend on catering, the team of Griffith and Corman brought us film memories that are still part of so many lives today.
Greetings. In Proposal for Breaking the Internet Network Neutrality Deadlock, I recently suggested a project for the gathering and analysis of worldwide Internet traffic data and characteristics, for Network Neutrality-related and other purposes, based on a distributed architecture of processes running mainly on end-user computers.
I've now dubbed this project the "Global Internet Measurement Analysis Array" (GIMAA).
I'd like to now touch very briefly on a few of the many practical considerations that such a project would entail, including deployment, security, and privacy issues.
To be useful, the measurement collection environment requires a very large number of participating end-user sites. While standalone versions of the GIMAA programs will of course be needed for a variety of hardware platforms, deployment could be significantly hastened by including the associated code in other already widely used end-user packages, e.g. popular browser/OS toolbars and/or free utility application bundles. It may even prove possible to primarily use the existing application/toolbar data traffic as the foundational operational corpus for the measurement system itself, supplemented as necessary by purpose-generated measurement-related data.
To the extent that the vendors of such toolbar and application packages are interested in the potential ongoing output of a GIMAA environment, such "packaging" would seem an attractive possible route for dissemination of the system, with the goal of reaching a practical deployment level as quickly as possible.
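As a rough illustration of the piggy-backing idea -- riding along on traffic an application is already generating rather than emitting extra probes -- here's a toy per-node collector in Python. All names here are hypothetical, and a real GIMAA node would obviously need far more machinery (scheduling, secure reporting, anonymization), but it shows the basic shape: time operations the host application performs anyway, grouped by destination, and keep only coarse aggregates for eventual reporting.

```python
import time
from collections import defaultdict
from statistics import median

class MeasurementCollector:
    """Toy per-node collector: wraps an application's own network
    operations, recording latency per destination rather than
    generating additional probe traffic."""

    def __init__(self):
        self.samples = defaultdict(list)

    def timed(self, destination, func, *args, **kwargs):
        # Run the application's own operation, timing it as a side effect.
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            self.samples[destination].append(time.monotonic() - start)

    def summary(self):
        # Only coarse aggregates -- not raw per-request data -- would
        # ever leave the node.
        return {dest: {"count": len(s), "median_s": round(median(s), 4)}
                for dest, s in self.samples.items()}
```

A hosting toolbar or utility could, for example, route its existing fetch calls through `collector.timed(host, fetch, url)` and periodically ship `collector.summary()` upstream.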
A range of security and privacy issues accompany a project like GIMAA, some of which will likely be leveraged by various entities into objections against the entire project.
Clearly the GIMAA code modules, measurement payload data, and any associated aggregated data will need to be secure and as protected against manipulation and tampering as current technology will allow. User data on participating systems must be protected as a first priority concern.
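One small piece of that tamper-protection can be sketched simply: each node signs its outgoing measurement reports with a keyed MAC, so that collection servers can reject payloads modified in transit. This is only a fragment of what would actually be needed -- key provisioning, replay protection, and node authentication are all glossed over, and the shared-secret name below is hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical per-node secret; a real deployment would provision
# unique keys per node through a secure channel.
NODE_KEY = b"per-node-shared-secret"

def sign_report(report: dict, key: bytes = NODE_KEY) -> dict:
    # Canonical JSON so signer and verifier see identical bytes.
    payload = json.dumps(report, sort_keys=True)
    mac = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify_report(envelope: dict, key: bytes = NODE_KEY) -> dict:
    expected = hmac.new(key, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["mac"]):
        raise ValueError("report rejected: MAC mismatch (possible tampering)")
    return json.loads(envelope["payload"])
```

The same envelope discipline would apply to code-module updates distributed to nodes, where the stakes of tampering are even higher.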
An issue more specific to the GIMAA methodology is that the techniques envisioned, if they prove out and are very widely deployed, could be extremely powerful. As such, concerns are sure to be raised that GIMAA may publicly reveal network traffic, topological, vulnerability, and other data that some network participants, and others, might prefer to keep hidden for business, security, or other reasons.
It can be anticipated, for example, that some firms (including ISPs) would become concerned that GIMAA node activity could reveal what they consider to be proprietary aspects of their network topologies, and that attempts to block GIMAA measurement traffic, and/or the writing of prohibitions against such measurement techniques into Terms of Service agreements, would be forthcoming.
Of course, one of the key purposes proposed for GIMAA is to detect vulnerabilities and abuses so that they can be corrected (through technical or policy means, as appropriate), and it would be expected that some of those entities responsible for such conditions would not be enthusiastic about their being so exposed.
I also consider it likely that GIMAA will be criticized from some quarters on national security grounds, with the argument being that the Internet infrastructural data that could be exposed would make attacks on the Internet and its attached systems more effective.
All of these concerns are real, and considerable effort will be needed to balance the benefits and risks associated with a project like GIMAA.
But aside from the more obvious cost/benefit analysis that can be applied to this project, there's another basic reality that renders some of these concerns relatively moot in important respects.
The categories of measurement methodologies proposed for GIMAA could be deployed on a clandestine basis by technologically skilled adversaries, perhaps as part of widely disseminated computer viruses and the like. If GIMAA does not move forward, that doesn't guarantee that "bad guys" won't get access to such data via their own GIMAA-like technologies that could infect systems around the world. Blocking GIMAA would only assure that honest players wouldn't have access to the same sorts of important information.
In my book, it's nonsensical and dangerous to block open and honest use of even potentially sensitive data, while the unscrupulous can likely gain access to similar data via their own means and for their own purposes. Sometimes sunlight really is the best disinfectant, and in the case of the Internet the old paradigm of "security through obscurity" has been widely discredited.
GIMAA, while not without real risks, will hopefully shed some needed light on aspects of the operational Internet that have been in the shadows for far too long, causing a lack of trust that only more open availability of data in these respects can likely ameliorate.
Thanks as always for your consideration.