August 13, 2014

In the UK, Experimenting on Heart Attack Victims Without Consent

Direct from the UK comes word of one of the more dubious medical experiments I've heard of in some time -- one that should raise ethical red flags around the world.

If you live in the Welsh, West Midlands, North East, South Central, or London Ambulance Service areas, and you take no action to opt out of a planned new University of Warwick study -- and you're unfortunate enough to have a heart attack -- you may randomly find yourself treated with a placebo rather than the conventional treatment of adrenaline. If you die from your heart attack, researchers will not actively seek out your relatives to inform them of how you were treated.

Persons who happen to see advertisements about the study in those areas, and so learn of its existence, can in theory opt out -- otherwise, you're a lab rat whether you want to be or not.

Researchers have a legitimate question -- does adrenaline therapy in these situations do more harm than good? Unfortunately, in their attempt to avoid study bias, they have violated a basic informed consent principle of ethical experimentation.

I suspect that this study stands a good chance of collapsing in the light of publicity, and the litigation potential appears enormous, even for the UK. If nothing else, I would expect to see campaigns urging UK residents in the affected areas to opt out en masse.

I would opt out if I lived there.

Sometimes ostensibly "good science" is unacceptably bad ethics.

--Lauren--
I am a consultant to Google -- I speak only for myself, not for them.

Posted by Lauren at 11:19 AM | Permalink


July 29, 2014

When Web Experiments Violate User Trust, We're All Victims

If you ever wonder why politicians around the world seem to have decided that their political futures are best served by imposing all manner of free speech restrictions, censorship, and content controls on Web services, consider the extent to which Internet users feel they've been mistreated and lied to by some services -- how their trust in those services has been undermined by abusive experiments that would not likely be tolerated in other aspects of our lives.

To be sure, not all experiments are created equal. Most Web service providers run experiments of one sort or another, and the vast majority are both justifiable and harmless. Showing some customers a different version of a user interface, for example, does not risk real harm to users, and the same can be said for most experiments aimed at improving site performance and results.
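To make the "harmless" category concrete: a typical UI experiment amounts to little more than deterministically assigning each user to a variant and then measuring aggregate behavior. Here's a minimal Python sketch of that idea -- the experiment name, arm labels, and 10% rollout figure are purely illustrative assumptions, not any particular site's practice:

    import hashlib

    def assign_variant(user_id: str, experiment: str, rollout_percent: int = 10) -> str:
        """Deterministically bucket a user into an experiment arm.

        Hashing (experiment name + user_id) yields a stable, roughly uniform
        value in [0, 100); users below the rollout threshold see the new UI,
        everyone else sees the control. No content is distorted -- only the
        presentation differs between arms.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "new_ui" if bucket < rollout_percent else "control"

    # The same user always lands in the same arm on every visit.
    print(assign_variant("user-12345", "checkout-button-layout"))

The point is that nothing in such an experiment involves telling the user something false -- which is exactly where the cases below diverge.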

But when sites outright lie to you about things you care about, and that you have expected those sites to provide to you honestly, that's a wholly different story, indeed -- and that applies whether or not you're paying fees for the services involved, and whether or not users are ever informed later about these shenanigans. Nor do "research use of data" clauses buried in voluminous Terms of Service text constitute informed consent or some sort of ethical exception.

You'll likely recall the recent furor over revelations about Facebook experiments -- conducted in conjunction with outside experimenters -- that artificially distorted the feed streams of selected users in an effort to impact their emotions, e.g., showing them more negative items than normal to see if they'd become depressed.

When belated news of this experiment became known, there was widespread and much deserved criticism. Facebook and the experimenters issued some half-hearted "sort of" apologies, mostly suggesting that anyone who was concerned just "didn't understand" the point of the experiment. You know the philosophy: "Users are just stupid losers!" ...

Now comes word that online dating site OkCupid has been engaging in its own campaign of lying to users in the guise of experiments.

In OkCupid's case, this revelation comes not in the form of an apology at all, but rather in a snarky, fetid posting by one of their principals, which also includes a pitch urging readers to purchase the author's book.

OkCupid apparently performed a range of experiments on users -- some of the harmless variety. But one in particular fell squarely into the Big Lie septic tank, involving lying to selected users by claiming that very low compatibility scores were actually extremely high scores. Then OkCupid sat back and gleefully watched the fun like teenagers peering through a keyhole into a bedroom.

Now of course, OkCupid had their "data based" excuse for this. By their claimed reckoning, their algorithm was basically so inept in the first place that the only way they could calibrate it was by providing some users enormously inflated results to see how they'd behave, then comparing that data against control groups who got honest results from the algorithm.

Sorry, boy wonders, but that story would get you kicked out of Ethics 101 with a tattoo on your forehead that reads "Never let me near a computer again, please!"

Really, this is pretty simple stuff. It doesn't take a course in comparative ethics to figure out when an experiment is harmless and when it's abusive.

Many apologists for these abusive antics are well practiced in the art of conflation -- that is, trying to confuse the issue by making invalid comparisons.

So, you'll get the "everybody does experiments" line -- which is true enough, but as noted above, the vast majority of experiments are harmless and do not involve lying to your users.

Or we'll hear "this is the same thing advertisers try to do -- they're always playing with our emotions." Certainly advertisers do their utmost to influence us, but there's a big difference between that and the cases under discussion here. We don't usually have a pre-existing trust relationship with those advertisers of the sort we have with Web services that we use every day -- services we expect to provide us with honest results, honest answers, and honest data to the best of their ability.

And naturally there's also the refrain that "these are very small differences that are often hard to even measure, and aren't important anyway, so what's the big deal?"

But from an ethical standpoint the magnitude of effects is essentially irrelevant. The issue is your willingness to lie to your users and purposely distort data in the first place -- when your users expect you to provide the most accurate data that you can.

The saddest part though is how this all poisons the well of trust generally, and causes users to wonder when they're next being lied to or manipulated by purposely skewed or altered data.

Loss of trust in this way can have lethal consequences. We've already seen how a relatively small number of research ethics lapses in the medical community have triggered knee-jerk legislative efforts to restrict legitimate research access to genetic and disease data -- laws that could cost many lives as critical research is stalled or otherwise stymied. And underlying this (much as in the case of the anti-Internet legislation noted earlier) is politicians' willingness to play up to people's fears and confusion -- and their loss of trust -- in ways that ultimately may be very damaging to society at large.

Trust is a fundamental aspect of our lives, both on the Net and off. Once lost, it can be impossible to ever restore to former levels. The damage is often permanent, and can ultimately be many orders of magnitude more devastating than the events that may initially trigger a user trust crisis itself.

Perhaps that's something to remember the next time you're considering lying to your users in the name of experimentation.

Trust me on this one.

--Lauren--
I am a consultant to Google -- I speak only for myself, not for them.

Posted by Lauren at 01:04 PM | Permalink


May 30, 2014

EU's "Right to Have The Streisand Effect" Goes Live

Since I've at various times over the years expressed both my concerns about and disgust for the "right to be forgotten" concept (e.g., "The 'Right to Be Forgotten': A Threat We Dare Not Forget"), I'm not going to rehash that discussion here and now. But a look at the ironic situation the EU censorship bureaucrats have created for themselves today, via the recent EU court ruling on this matter, is both amusing and instructive.

Google now has an "application" form up for EU residents who want to apply for search results removal. Using this form definitely does not guarantee that results will be removed, particularly if there is any public interest in those results.

But here's the best part. Results will only be removed for the EU country localized versions of Google. They will *not* (naturally, since thankfully the EU doesn't rule the world!) be removed from the main google.com site itself.

Additionally, when results are removed from EU versions, the associated results pages will reportedly contain a notice to EU users that results were deleted (similar to the way copyright takedowns are handled now), and "Chilling Effects"-type reports will also reportedly be made.

The implications of this gladden my "right to be forgotten"-hating heart. If you're an EU user searching for Joe Blow, and the EU has forced removal of a search result related to him on, say, google.fr, the notice informing you that results have been removed for that search gives you an immediate cue to head over to google.com and see what the EU censorship bureaucrats deemed unfit for your eyes. In essence, it's a built-in Streisand Effect, courtesy of the EU itself! Before this, you might not even have noticed the result in question among the other results for that search.

Not only that, but other search queries -- ones not on that person's name -- that happen to return the blocked pages will apparently still show them, even in the EU.

And of course, curious EU searchers who want to escape the local EU censorship regimes have various ways to reach the main google.com site, as do users in other censoring countries around the world: google.com homepage access links, google.com/ncr (No Country Redirect), or -- in more extreme cases -- proxies and VPNs.

Censorship in the Internet age is a hopeless endeavor, as the EU is about to discover.

Get your popcorn ready.

Be seeing you.

--Lauren--
(I'm a consultant to Google. I'm speaking for myself, not for them.)

Posted by Lauren at 10:12 AM | Permalink


February 22, 2014

No, I Don't Trust You! -- One of the Most Alarming Internet Proposals I've Ever Seen

If you care about Internet security, especially what we call "end-to-end" security free from easy snooping by ISPs, carriers, or other intermediaries, heads up! You'll want to pay attention to this.

You'd think that with so many concerns these days about whether the likes of AT&T, Verizon, and other telecom companies can be trusted not to turn our data over to third parties whom we haven't authorized, a plan to formalize a mechanism for ISP and other "man-in-the-middle" snooping would be laughed off the Net.

But apparently the authors of IETF (Internet Engineering Task Force) Internet-Draft "Explicit Trusted Proxy in HTTP/2.0" (14 Feb 2014) haven't gotten the message.

What they propose for the new HTTP/2.0 protocol is nothing short of officially sanctioned snooping.

Of course, they don't phrase it exactly that way.

You see, one of the "problems" with SSL/TLS connections (e.g., https:) -- from the standpoint of the dominant carriers, anyway -- is that the connections are, well, fairly secure from snooping in transit (assuming your implementation is correct ... right?).

But some carriers would really like to be able to see that data in the clear -- unencrypted. This would allow them to do fancy caching (essentially, saving copies of data at intermediate points) and introduce other "efficiencies" that they can't do when your data is encrypted from your client to the desired servers (or from servers to client).

When data is unencrypted, "proxy servers" are a routine mechanism for caching and passing on such data. But conventional proxy servers won't work with data that has been encrypted end-to-end, say with SSL.
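To make that distinction concrete, here's a heavily simplified Python sketch of a forward proxy -- the port number, the in-memory cache, and the single-read handling are illustrative assumptions only. Plain HTTP flows through it in the clear, so it can read and cache responses; HTTPS shows up only as an opaque CONNECT tunnel that it can relay but never inspect:

    import socket
    import threading

    # Naive in-memory cache: request line -> raw response bytes. Real caches
    # key on the URL and honor Cache-Control headers; this is illustration only.
    CACHE = {}

    def relay(src, dst):
        """Blindly copy bytes one way until either side closes (HTTPS tunneling)."""
        try:
            while (chunk := src.recv(4096)):
                dst.sendall(chunk)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def handle(client):
        head = client.recv(65536)  # simplification: assume headers arrive in one read
        request_line = head.split(b"\r\n", 1)[0]

        if request_line.startswith(b"CONNECT"):
            # HTTPS: the proxy sees only "CONNECT example.com:443". Everything
            # after this is encrypted end-to-end, so all it can do is shovel
            # opaque bytes back and forth -- no caching, no inspection.
            host, _, port = request_line.split()[1].decode().partition(":")
            upstream = socket.create_connection((host, int(port or 443)))
            client.sendall(b"HTTP/1.1 200 Connection Established\r\n\r\n")
            threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
            threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
            return

        # Plain HTTP: the proxy sees request and response in the clear, so it
        # can store the response and serve it again without asking the origin.
        if request_line in CACHE:
            client.sendall(CACHE[request_line])
            client.close()
            return

        host_header = next(h for h in head.split(b"\r\n") if h.lower().startswith(b"host:"))
        host = host_header.split(b":", 1)[1].strip().decode().split(":")[0]
        upstream = socket.create_connection((host, 80))
        upstream.sendall(head)
        response = b""
        while (chunk := upstream.recv(4096)):  # assumes the origin closes when done
            response += chunk
        CACHE[request_line] = response
        client.sendall(response)
        client.close()
        upstream.close()

    def main(port=8080):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen(16)
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()

That inability to see inside the tunnel is precisely the "problem" the carriers would like to solve.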

So this dandy proposal offers a dandy solution: "Trusted proxies" -- or, to be more straightforward in the terminology, "man-in-the-middle attack" proxies. Oh what fun.

The technical details get very complicated very quickly, but what it all amounts to is simple enough. The proposal expects Internet users to provide "informed consent" that they "trust" intermediate sites (e.g., Verizon or AT&T) to decode their encrypted data, process it in some manner for "presumably" innocent purposes, re-encrypt it, and then pass the re-encrypted data along to its original destination.
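Stripped of the euphemisms, what such a "trusted proxy" does can be sketched roughly as follows -- a conceptual Python illustration under assumed conditions (the certificate file names are placeholders, and this is not the draft's actual protocol machinery): terminate the client's TLS session with the proxy's own certificate, open a separate TLS session to the real server, and handle the plaintext in between.

    import socket
    import ssl

    # The proxy's own certificate -- which the client must have been induced
    # to "trust" -- and its key. These file names are placeholders.
    PROXY_CERT, PROXY_KEY = "proxy-cert.pem", "proxy-key.pem"

    def handle(client_sock, origin_host, origin_port=443):
        # 1. Terminate the client's TLS session at the proxy, presenting the
        #    proxy's own certificate rather than the origin server's.
        server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        server_ctx.load_cert_chain(PROXY_CERT, PROXY_KEY)
        client_tls = server_ctx.wrap_socket(client_sock, server_side=True)

        # 2. Open a *separate* TLS session from the proxy to the real server.
        origin_ctx = ssl.create_default_context()
        origin_tls = origin_ctx.wrap_socket(
            socket.create_connection((origin_host, origin_port)),
            server_hostname=origin_host)

        # 3. Everything the user believed was end-to-end encrypted is now
        #    plaintext inside the proxy: it can read it, cache it, log it, or
        #    modify it before re-encrypting and passing it along.
        request = client_tls.recv(65536)
        # ... "caching", "optimization", or anything else happens here ...
        origin_tls.sendall(request)
        response = origin_tls.recv(65536)
        client_tls.sendall(response)

        client_tls.close()
        origin_tls.close()

Two separate encrypted hops, with a third party sitting in the clear between them -- that's the whole trick.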

Chomping at the bit to sign up for this baby? No? Good for you!

Ironically, in the early days of cell phone data, when full-capability mobile browsers weren't yet available, it was common practice to "proxy" so-called "secure" connections in this manner. A great deal of effort went into closing this security hole by enabling true end-to-end mobile crypto.

Now it appears to be full steam ahead back to the bad old days -- or worse!

Of course, the authors of this proposal are not oblivious to the fact that there might be a bit of resistance to this "Trust us" concept. So, for example, the proposal assumes mechanisms for users to opt in or opt out of these "trusted proxy" schemes.

But it's easy to be extremely dubious about what this would mean in the real world. Can we really be assured that a carrier going through all the trouble of setting up these proxies would always be willing to serve users who refuse to agree to the proxies being used, and allow those users to completely bypass the proxies? Count me as skeptical.

And the assumption that users can even be expected to make truly informed decisions about this seems highly problematic from the get-go. We might be forgiven for suspecting that the carriers are banking on the vast majority of users simply accepting the "Trust us -- we're your friendly man-in-the-middle" default, and not even thinking about the reality that their data is being decrypted in transit by third parties.

In fact, the fallacies deeply entrenched in this proposal are encapsulated within a paragraph tucked in near the draft's end:

"Users should be made aware that, different than end-to-end HTTPS, the achievable security level is now also dependent on the security features/capabilities of the proxy as to what cipher suites it supports, which root CA certificates it trusts, how it checks certificate revocation status, etc. Users should also be made aware that the proxy has visibility to the actual content they exchange with Web servers, including personal and sensitive information."

Who are they kidding? It's been a long enough slog just to get to the point where significant numbers of users check for basic SSL status before conducting sensitive transactions. Now they're supposed to become security/certificate experts as well?

Insanity.
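For a sense of what "being made aware" would actually demand of users, here's roughly the sort of inspection someone would need to perform -- and correctly interpret -- to judge what their connection's security really depends on. A minimal Python sketch; www.example.com is just a stand-in host:

    import socket
    import ssl

    def inspect_tls(hostname: str, port: int = 443) -> None:
        """Print the details a user would supposedly evaluate: negotiated
        protocol, cipher suite, and who actually issued the certificate
        presented to us."""
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((hostname, port)),
                             server_hostname=hostname) as conn:
            cert = conn.getpeercert()
            print("TLS version:  ", conn.version())
            print("Cipher suite: ", conn.cipher()[0])
            print("Subject CN:   ", dict(x[0] for x in cert["subject"]).get("commonName"))
            print("Issued by:    ", dict(x[0] for x in cert["issuer"]).get("commonName"))
            # An unexpected issuer here (say, your ISP's own CA rather than a
            # public CA) would be the tell-tale sign of an interception proxy
            # -- assuming the user knows what to expect in the first place.

    inspect_tls("www.example.com")

Expecting ordinary users to run that kind of analysis in their heads every time they bank or shop online is, to put it mildly, optimistic.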

I'm sorry, gang -- no matter how much lipstick you smear on this particular pig, it's still a pig.

The concept of "trusted proxies" as proposed is inherently untrustworthy, especially in this post-Snowden era.

And that's a fact that you really can trust.

--Lauren--
I'm a consultant to Google. My postings are speaking only for myself, not for them.

- - -

Addendum (24 February 2014): Since the posting of the text above, I've seen some commentary (in at least one case seemingly "angry" commentary!) suggesting that I was claiming that ISPs could "crack" the security of existing SSL connections for the "Trusted Proxies" under discussion. That was not my assertion.

I didn't try to get into technical details, but obviously we're assuming that your typical ISP doesn't have the will or ability to interfere in such a manner with properly implemented traditional SSL. That's still a significant task even for the powerful intelligence agencies around the world (we believe at the moment, anyway).

But what the proposal does push is the concept of a kind of half-baked "fake" security that would be to the benefit of dominant ISPs and carriers but not to most users -- and there's nothing more dangerous in this context than thinking you're end-to-end secure when you're really not.

In essence it's a kind of sucker bait. Average users could easily believe they were "kinda sorta" doing traditional SSL but they really wouldn't be, 'cause the ISP would have access to their unencrypted data in the clear. And as the proposal itself suggests, it would take significant knowledge for users to understand the ramifications of this -- and most users won't have that knowledge.

It's a confusing and confounding concept -- and an unwise proposal -- that would be nothing but trouble for the Internet community and should be rejected.

- - -

Posted by Lauren at 08:24 PM | Permalink


