Greetings. In a recent blog entry, I discussed my concerns about Google's new "Click-to-Call" service, especially key issues regarding Google's handling of caller-ID in that service. Now I'd like to propose a specific solution.

I completely understand why Google likes its caller-ID feature. It's a cute hack (hack in the positive sense), and in the context of non-abusive use it adds some value. But I really believe that this is one of those cases where somebody needed to get beyond the "gee-whiz, isn't this nifty" factor and consider more carefully how the feature will be abused, particularly on the large free-access scale that Google provides. Even if the vast majority of the calls are legit, the absolute number of abuses is bound to be high, and it seems certain that innocents will be hurt in significant numbers -- there are a lot of jerks in the world who will take advantage of this service to get their jollies, take revenge on businesses they have a gripe with, and so on.

However, there is indeed a simple solution in this case. If the caller-ID delivered to both sides of the bridged calls is set to indicate the true source of the calls (i.e., Google), the problem goes away. In fact, caller-ID could be used to further enhance the service by providing a true, full point of contact.

What I would do is set the caller-ID to display a Google phone number (ideally toll-free) that played a recorded announcement explaining that the call originated from Google Click-to-Call, and noting how to proceed (via a Web page, e-mail address, and/or specific phone number) if you felt that you were being targeted for abuse by a user of that system and wanted to file an associated report.

This would be a win-win all around. Google would more rapidly get a handle on abusive users, and the service would be even more consumer-friendly. Sometimes there can be a happy ending!

--Lauren--
Greetings. Google has made available a new "Click-to-Call" service that will automatically connect users to business phone listings found via Google search results. In order for this feature to function, the user must provide their telephone number so that Google can bridge the free call between the business and the user (including long distance calls).

An obvious issue with such a service is that there is no reasonable way to validate the user-provided phone number. Google says that it has mechanisms in place to try to avoid repeated prank calls, but the potential for abuse is obvious.

Of even greater concern, Google says that it will manipulate the caller-ID on the calls made to the user-provided number to match that of the business being called. This is extremely problematic, since it could be used to convince a prank target that they were being called directly by the business in question, causing that target to direct their anger at the innocent business. In the case of targets who are on do-not-call lists, it is possible to imagine legal action being taken by called parties upset that the business in question called them "illegally," though in fact the call had been made by the Google system.

Google's explanation for this caller-ID manipulation is that it would be handy to have the called business's number in your caller-ID for future calls. That may be true, but the abuse potential is far too high. Caller-ID should never be falsified.

I've written many times about how caller-ID can be manipulated to display false or misleading information, why this should be prevented, and how the telcos have shown little interest in fixing caller-ID or informing their customers about the problem (caller-ID is a cash cow for the telcos whether it is accurate or not). Up to now, the typical available avenue for manipulating caller-ID has been pay services, which tended to limit the potential for large-scale abuse since users are charged for access.
Google, by providing a free service that will place calls and manipulate caller-ID, vastly increases the scope of the problem. Scale matters. Google has not vetted this caller-ID feature sufficiently, and I urge its immediate reconsideration.

--Lauren--

Blog Update (November 19, 2006): Proposed Solution For Google's "Click-to-Call" Caller-ID Problem
Greetings. As reported in the Washington Post, the Bush Administration is now apparently asserting that the interrogation techniques used against suspects who were or are held in the "secret" CIA prisons are classified TS/SCI (effectively the highest conventional level of security classification in the DOD hierarchy). The administration is arguing that these suspects must not be allowed to meet with lawyers, since interrogation techniques might be disclosed, and that (by implication) these suspects must never be allowed to tell anyone how they were interrogated.

Some suspects who have already been released have made claims of torture at the hands of the U.S. The U.S. says that it does not torture, but refuses to disclose the interrogation techniques that are in use. Vice President Cheney recently and famously agreed that a "dunk in the water" was a no-brainer technique, but the White House has subsequently claimed that this remark did not specifically admit that the U.S. used the torture technique of waterboarding -- while declining to further explain what the remark actually meant.

The logical implication of the administration's assertion of TS/SCI-level classification for interrogation techniques would seem to be that they indeed plan to hold the remaining suspects indefinitely -- for how else could the confidentiality of the classified techniques be ensured? An innocent suspect who was released could not be prevented from talking (after all, they don't hold TS/SCI clearances that could be pulled!), and if the administration is unsatisfied with the current procedure of refusing to address the claims of already-released suspects, then the only other option would seem to be to make sure that there aren't any more released suspects.
In computer science and crypto work, there is a "technique" called "security through obscurity" -- a largely discredited philosophy of trying to ensure security by keeping design elements secret, rather than by ensuring secure design principles themselves. For example, a well-designed crypto system should not need to include any secret tables; the full design can be publicly released and subjected to broad scrutiny, allowing evaluation of the system's actual strength.

A major problem with security through obscurity is that obscurity tends to dissipate with time. For example, in the news right now are reports of concerns over a U.S. government Web site that apparently disclosed some level of nuclear weapons design information. That data has now reportedly been removed from the site, but is presumably already mirrored and archived at other locations around the world. Even more to the point, the real limit (such as it is) on the proliferation of basic nuclear weapons at this stage is not so much design techniques, but rather the availability of fissile materials. Security based on secrets rather than sound design is always vulnerable to those secrets being disclosed or reverse-engineered.

In the case of information that relates directly to potential human rights violations such as the use of torture (or "alternative interrogation techniques" -- seemingly six of one or half a dozen of the other), asserting that U.S. security would be compromised by the revelation of those techniques suggests that methods generally classified as torture or other human rights violations may indeed be in use against potentially innocent suspects.

If our national security is now truly based on a foundation of keeping interrogation techniques secret -- and everything therefore implied about holding suspects indefinitely and incommunicado -- then we not only have horrendously flawed and vulnerable security, but we have lost our way as a nation as well.

--Lauren--
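As an aside, the security-through-obscurity contrast described above can be sketched in a few lines of Python. This is purely illustrative, not real cryptography: the XOR "pad" scheme and the key value are hypothetical examples I've made up for this sketch. The point is that a design whose only secret is its internal mechanism is broken forever once that mechanism leaks, while a public, well-scrutinized algorithm (here, HMAC-SHA256 from the standard library) keeps only a key secret -- and a leaked key can simply be rotated.

```python
import hmac
import hashlib

def obscure_encrypt(plaintext: bytes) -> bytes:
    # "Security through obscurity": the entire scheme is this hidden constant.
    # If the design ever leaks, every message ever protected by it is exposed.
    HIDDEN_PAD = 0x5A  # the hypothetical "secret design element"
    return bytes(b ^ HIDDEN_PAD for b in plaintext)

def keyed_mac(key: bytes, message: bytes) -> str:
    # Kerckhoffs-style design: the algorithm (HMAC-SHA256) is fully public
    # and has survived broad scrutiny; only the key must stay secret.
    return hmac.new(key, message, hashlib.sha256).hexdigest()

msg = b"attack at dawn"

# An attacker who reverse-engineers obscure_encrypt trivially reverses it:
leaked = obscure_encrypt(msg)
recovered = bytes(b ^ 0x5A for b in leaked)
assert recovered == msg  # obscurity gone, security gone

# Knowing the HMAC algorithm alone does not let an attacker forge tags,
# and a compromised key can be replaced without redesigning anything.
tag = keyed_mac(b"rotatable-secret-key", msg)
assert len(tag) == 64  # hex digest of SHA-256
```

The first assertion is the whole story: once the obscure mechanism is known, recovery of the plaintext is immediate, which is exactly why obscurity that "dissipates with time" is such a fragile foundation.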
Greetings. While I do not yet have enough information to characterize this as a widespread problem, I wanted to issue a heads-up that I'm seeing 100% failure rates on installations of the new Microsoft Internet Explorer 7 on local machines here. MS is pushing IE7 down as a "high priority" optional update in the current Windows Update cycle.

On the XP machines where I've attempted this install (varying hardware platforms, all validated and fully updated SP2 systems), the installation process appears to go smoothly, but any attempt to run IE7 afterwards fails. There are two different failure modes. In some cases, attempts to start IE7 simply result in an "hourglass" cursor and then an exit. In other cases, the systems appear to hang at the hourglass and require a hardware reset to reboot. I do not yet know the underlying factors causing this situation on these multiple systems.

The process to uninstall IE7 (via Add/Remove Programs) has worked quickly and apparently properly in all cases so far, restoring the systems to IE6 along with all previous settings.

This is pretty much all that I have so far, and I would welcome reports from anyone else seeing similar problems with IE7. I thought it was worth putting out this warning now, since many people depend on proper operation of IE (my browser of choice is Mozilla Firefox, but unfortunately some applications will only work properly under Internet Explorer). I'll provide more info as available and appropriate. Thanks.

--Lauren--