The Controversial CLOUD Act: Privacy Plus or Minus?

Over the last few days you may have seen a bunch of articles about the “CLOUD Act” — recently introduced bipartisan U.S. legislation that would overhaul key aspects of how requests by foreign governments for the data of foreign persons held on the servers of U.S. companies are handled.

I’m frequently being asked for my position on this, and frankly the analysis is not a simple one.

Opponents, including the EFF, the ACLU, and a variety of other privacy and civil rights groups, argue that the legislation eases access to such data by foreign governments and represents a dangerous erosion of privacy rights.

Proponents, including Apple, Facebook, Google, Microsoft, and Oath (Yahoo/Verizon) argue that the CLOUD Act provides much needed clarity to the technically and legally confused mess regarding transborder data requests, and introduces new privacy and transparency protections of its own.

One thing is for sure — the current situation IS a mess and is completely unsustainable going forward, with ever-escalating, complicated legal entanglements (e.g., the ongoing Microsoft Ireland case, where a pending Supreme Court decision appears likely to go against Microsoft’s attempts at protecting transborder privacy) and ever more related headaches in the future.

Cutting to the chase, I view the CLOUD Act as flawed and imperfect, but still on balance a useful effort at this time to move the ball forward in an exceedingly volatile global environment.

This is particularly true given my concerns about foreign governments’ increasing demands for “data localization” — where their citizens’ data would be stored under conditions that would frequently be subject to far fewer privacy protections than would be available under either current U.S. law or the clarified provisions of the CLOUD Act. In the absence of the CLOUD Act, such demands are certain to rapidly accelerate.

One of the more salient discussions of the CLOUD Act that I’ve seen lately is “Why the CLOUD Act is Good for Privacy and Human Rights.” Regardless of how you feel about these issues, the article is well worth reading.

Let’s face it — nothing about the Net is simple.


Why YouTube’s New Plan to Debunk Conspiracy Videos Won’t Work

YouTube continues to try to figure out ways to battle false conspiracy videos that rank highly on YouTube — sometimes even reaching the top trending lists — and that can spread to ever more viewers via YouTube’s own “recommended videos” system. I’ve offered a number of suggestions for dealing with these issues, most recently in “Solving YouTube’s Abusive Content Problems — via Crowdsourcing.”

YouTube has now announced a new initiative that they’re calling “information cues” — which they hope will address some of these problems.

Unfortunately, this particular effort (at least as reported today) is likely doomed to be almost entirely ineffective.

The idea of “information cues” is to provide false conspiracy YouTube videos with links to Wikipedia pages that “debunk” those conspiracies. So, for example, a video claiming that the Florida student shooting victims were actually “crisis actors” would presumably show a link to a Wikipedia page that explains why this wasn’t actually the case.

You probably already see the problems with this approach.

We’ll start with the obvious elephant in the room. The kind of viewers who are going to believe these kinds of false conspiracy videos are almost certainly going to say that the associated Wikipedia articles are wrong, that they’re planted lies. FAKE NEWS!

Do we really believe that anyone who would consider giving such videos even an inch of credibility is going to be convinced otherwise by Wikipedia pages? C’mon! If anything, such Wikipedia pages may actually serve to reinforce these viewers’ beliefs in the original false conspiracy videos!

Not helping matters at all is that Wikipedia’s reputation for accuracy — never all that good — has been plunging in recent years, sometimes resulting in embarrassing Knowledge Panel errors for Google in search results.

Any Wikipedia page that is not “protected” — that is, where the ordinary change process has been locked out — is subject to endlessly mutating content editing wars — and you can bet that any editable Wikipedia pages linked by YouTube from false conspiracy videos would become immediate high visibility targets for such attacks.

If there’s one thing that research into this area has already shown quite conclusively, it’s that the people who believe these kinds of garbage conspiracy theories are almost entirely unconvinced by any factual information that conflicts with their inherent points of view.

The key to avoiding the contamination caused by these vile, lying, false conspiracy videos is to minimize their visibility in the YouTube/Google ecosystem in the first place.

Not only should they be prevented from ever getting into the trending lists, they should be deranked, demonetized, and excised from the YouTube recommended video system. They should be immediately removed from YouTube entirely if they contain specific attacks against individuals or other violations of the YouTube Terms of Service and/or Community Guidelines. These actions must be taken as rapidly as possible with appropriate due diligence, before these videos are able to do even more damage to innocent parties.

Nothing less can keep such disgusting poison from spreading.


Solving YouTube’s Abusive Content Problems — via Crowdsourcing

We all know that various governments have their long knives out for YouTube content. We know that Google is significantly increasing the number of workers who will review YT abuse reports.

But we also know that the volume of videos in the uploading firehose is going to continue leaving very large numbers of abusive videos online that may quickly achieve high view counts, even if YT employs techniques that I’ve previously urged, such as human review of videos that are about to go onto the trending lists before they actually do so.

The scale of uploaded videos is enormous — yet the number of viewing users is also very large.

Is there some way to leverage the latter to help deal with abusive content in the former, as a proactive effort to help keep government censorship of YT at bay?

YT already has a “Trusted Flaggers” program that gives abuse review priority to videos that these users have flagged. But (as far as I know) this only applies to videos that these users have happened to find and see of their own volition. 

I don’t have the hard data to prove this, but I have a strong suspicion that vast numbers of users would be willing to participate as organized volunteer proactive “screeners” of a sort for YT, especially if there were even some minor financial incentive for their participation (think in terms of a small amount of Play Store credit, for example).

What if public videos that were suddenly attracting significant numbers of views (“significant” yet to be defined) were pushed to some odd number (to avoid ties) of such volunteer viewers who have undergone appropriate online training regarding YT’s Terms of Use? We’d require that they actually view reasonable amounts of these videos (yes, there would be ways to attempt gaming this, but remember that we’re talking about very large numbers of volunteers, so much of that risk should wash out if care is used in tracking analysis).

They vote/rate the videos acceptable or not. If the majority vote a video as unacceptable, it gets pushed to the formal G abuse screeners for a decision. If any given volunteer is found over time to be providing bad decisions, they’re dropped from the program.
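To make the mechanics of this proposal a bit more concrete, here is a minimal sketch of how such a volunteer-screening pipeline might be wired together. Everything in it (the view threshold, panel size, reliability cutoff, and function names) is a hypothetical illustration of my suggestion, not anything YouTube actually implements.

    # Hypothetical sketch of the volunteer screening pipeline described above.
    # Thresholds, names, and scoring are illustrative assumptions only.

    import random
    from dataclasses import dataclass

    VIEW_SPIKE_THRESHOLD = 10_000   # "significant" view count; yet to be defined
    PANEL_SIZE = 5                  # odd number of volunteers, to avoid ties
    MIN_RELIABILITY = 0.6           # volunteers below this agreement rate are dropped

    @dataclass
    class Volunteer:
        user_id: str
        votes_cast: int = 0
        votes_matching_staff: int = 0

        @property
        def reliability(self) -> float:
            # No track record yet? Give the benefit of the doubt.
            if self.votes_cast == 0:
                return 1.0
            return self.votes_matching_staff / self.votes_cast

    def select_panel(volunteers, size=PANEL_SIZE):
        """Randomly assign reviewers so no volunteer chooses which videos they screen."""
        eligible = [v for v in volunteers if v.reliability >= MIN_RELIABILITY]
        return random.sample(eligible, k=min(size, len(eligible)))

    def screen_video(video_id, view_count, volunteers, get_vote, staff_review):
        """Route one video through the proposed pipeline.

        get_vote(volunteer, video_id) -> bool   True means a vote of "unacceptable"
        staff_review(video_id) -> bool          True means a regular screener confirms abuse
        """
        if view_count < VIEW_SPIKE_THRESHOLD:
            return "ignored"        # most videos never enter the system at all

        panel = select_panel(volunteers)
        votes = {v.user_id: get_vote(v, video_id) for v in panel}

        if sum(votes.values()) <= len(panel) // 2:
            return "cleared"        # no majority for "unacceptable"; no action taken

        # Majority voted "unacceptable": escalate to the formal abuse screeners.
        # No action is ever taken on the volunteer votes alone.
        abusive = staff_review(video_id)

        # Track each volunteer's agreement with the staff decision over time.
        for v in panel:
            v.votes_cast += 1
            if votes[v.user_id] == abusive:
                v.votes_matching_staff += 1

        return "actioned" if abusive else "cleared"

The key design choice here is that volunteers only escalate; any actual enforcement still flows through the regular YT review pipeline, as discussed below.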

Most videos would have small enough numbers of views to never even enter this system. But it would provide a middle ground to help deal with videos that are suddenly getting more visibility *before* they can cause big problems, and this technique doesn’t rely on random viewers taking the initiative to flag abusive videos (and for that matter figuring out how to flag them, since flagging is not typically a top level YT user interface element these days, as I’ve previously noted).

Since participants in this program would not have any control over which specific videos they’d be pushed for a vote, and since again we’d be talking about quite large numbers of participants (and we’d be monitoring their performance over time), the ability to purposely claim that nonabusive videos were abusive (or the reverse) would be minimized.

No video would have action taken against it unless it had also been declared abusive by a regular YT screener in the pipeline after the volunteer screeners down-voted it — providing even more protection.

How to define abusive videos is of course a separate discussion relating directly to the YT Terms of Service, but this could include the kinds of content violations that we all know about in relation to YT (hate speech, dangerous pranks and dares, threats, etc.), and even areas such as obvious obnoxious Content ID evasions (e.g., program/movie video inset boxes against random backgrounds, artificial program run time variations, and so on).

I do realize that this is a fairly radical concept and that there are all manner of details that aren’t considered in this brief summary. But I am increasingly convinced that it’s going to take some sort of new approach to help deal with these problems proactively, and to help forestall governments from moving in and wrecking the wonderful YouTube ecosystem with escalating politically motivated demands and threats.


The Ethics of Google and the Pentagon Drones

Many years ago, I was the systems guy who ran the early UNIX minicomputers in the basement of Santa Monica’s RAND Corporation. While RAND at the time derived the vast majority of its income from Department of Defense contracts, I was there despite my lifelong refusal to work directly on military-related projects (to the significant detriment of my own income, I might add). RAND spoke truth to power. DoD could contract with RAND for a report on some given topic, but RAND wouldn’t skew a report to reach results that the contractor had hoped for. I admired that.

One midday I was eating lunch in an open patio between the offices there, chatting with a couple of the military research guys. At the time, one focus of DoD interest was use of mainframe and minicomputer systems to analyze battlefield data, such as it was back then. My lunchmates assured me that their work was all defensive in nature.

I asked how they could be sure that the same analytical systems they intended for defense couldn’t also be used by the military for actually killing people. “We have to trust them,” came the reply. “The technology is inherently dual use.”

It seemed to me that battlefield data analysis was fundamentally different from the DoD-funded projects I also worked on — with ARPANET being the obvious example. Foundational communications research is not in the same category as calculating how to more efficiently kill your enemy. At least that’s how I felt at the time, and I still feel that way. There’s nothing inherently evil in accepting money from DoD — the ethical issues revolve around the specifics of the projects involved.

Fast forward to the controversy that has arisen today, about which I’ve been flooded with queries — word that Google has been engaged in “Project Maven” for DoD, using Google AI/Machine Learning tech to analyze footage from military drones. Apparently this wasn’t widely known even internally at Google, until the topic recently found its way to internal discussion groups and then leaked to the public. There has reportedly been considerable internal controversy about this, to say the least.

“How do you feel about this, Lauren?” I’m being asked.

Since I frequently play armchair ethicist, I’ve been giving this question a lot of thought today.

The parallels with that lunch discussion at RAND so long ago seem striking.  The military wanted to analyze battlefield data back then, and they want to analyze military drone data now.

There are no simple answers.

But we can perhaps begin with the problem of innocent civilian deaths resulting from U.S. drone strikes. We know that the designated terrorist targets are frequently purposely embedded in civilian areas, and often travel with civilians who have little or no choice in the matter — such as children and other family members.

While the Pentagon (as they did during the Vietnam war) makes a grand show about body counts, it’s not clear that most of these drone strikes have much long-term anti-terrorism impact. The targets are frequently fungible — kill one leader and another moves right in. Liquidate one bomb maker and the position is quickly filled by another.

So, ethical question #1: Are these drone strikes justifiable at all? To answer this question honestly, we must of course consider the rate of collateral civilian deaths and injuries, which are sure to inspire further anti-U.S. rhetoric and attacks.

My personal belief is that in most cases — at least to the extent that we in the public are aware — the answer to this question is generally no.

Which brings us to ethical question #2 (or rather, a set of questions): Does supplying advanced image processing and analysis systems for military drone data fall into an ethically acceptable category, provided that such analysis is not specifically oriented toward targeting for lethal operations? Can it be reasonably argued that more precise targeting could also help to prevent civilian casualties, even when those civilians are in immediate proximity to the intended targets? Or is providing such facilities also ethical even if direct lethal operations are known in advance to be the likely result, toward the advancement of currently stated U.S. interests?

And after all, much of our technology today can be easily repurposed in ways that we technologists had not intended — for example, for oppressive governments to surveil and censor their own citizens.

Yet the immense potential power of rapidly advancing AI and Machine Learning systems does cast these kinds of issues in a new and qualitatively different light. And that’s even if we leave aside a business-based analysis that some firms might make, noting that if they don’t provide the services, some other company will do so anyway, and get the contracts as well as the income.

I know absolutely nothing about Google’s participation in Project Maven other than what I’ve seen in public sources today.

But to try to address the gist of my own questions from just above, based on what I know right now, I believe that Google has a significant ethical quandary on their hands in this regard.

I personally doubt that this kind of powerful tech can be constrained through contractual relationships to purely defensive use. I also feel that the decision regarding whether or not any given firm is willing to accept that its technology may be used for lethal purposes is one that should be made “eyes wide open” — and deserves nothing less than a significant level of company-wide consensus before proceeding.

It has been ages since I even thought about that long-ago lunch conversation at RAND. It’s indeed disquieting to be thinking about it again today.

Be seeing you.


Why I Finally Dumped Netflix (and Love FilmStruck/Criterion)

Yesterday was my last day subscribing to Netflix. Miss them, I will not. I had been meaning to kill the subscription for quite some time, finally pulled the trigger a couple of weeks ago, and the final days ran out at the end of February.

It’s been painful to watch Netflix’s escalating deterioration and hubris. After arguably putting movie rental stores out of business almost single-handedly, Netflix decided that they no longer really cared about classic films.

Netflix CEO Reed Hastings wants to play Hollywood movie mogul for himself. So Netflix has been decimating its online catalog of classic, quality films, and replacing them with a cavalcade of mediocre productions. Their corpus of classic television has been going in the same direction for ages now.

What’s more, Netflix is spending billions of dollars — reportedly $8 billion just this year — to produce its own stream of mostly unwatchable films and series — which they continuously promote through app screensavers and in every other way possible.

It’s gotten to the point that whenever you hear the characteristic loud “thum thum!” that precedes a Netflix production, you know it’s time to move on.

That’s not to say that Netflix doesn’t occasionally produce a quality film or show — but the ratio is awful, and seems to be mostly of the “stopped clock is correct twice a day” variety.

Their “You might like this, Lauren!” recommendations stink. You can dig through their online listings for ages and find nothing even remotely worth your time.

Bye bye Netflix.

Luckily for those of us who care about classic films and quality films in general, there’s a superb online alternative — FilmStruck/Criterion.

FilmStruck is a service of Turner Broadcasting, who also produce the always excellent Turner Classic Movies (TCM) channel, of which I’ve been a fan since its inception many years ago. 

I subscribed to FilmStruck (and their wonderful Criterion Collection add-on) some weeks ago, around the same time that I issued my Netflix cancellation (Netflix and FilmStruck/Criterion pricing are very similar, by the way).

One of the best entertainment-related decisions I’ve ever made.

It would be fair to call F/C something of a TCM on super-steroids (and in fact, F/C has just now begun to integrate some new F/C intros from TCM hosts, and classic materials from the TCM archives — super!)

Are there downsides? Well, in all honesty F/C’s website is pretty slow and clunky. Their device apps need significant work. While you can run three simultaneous video streams, there’s no mechanism for separate users per se. 

I don’t care. All of that logistical stuff will certainly improve with time. 

Once the video streams are running they look great. Films are in HD whenever possible and are in reasonable aspect ratios. There are no “ID bugs” on the screen during films (and here I’ll also note that TCM has always had a policy of keeping their ID bugs to an absolute minimum — just a few seconds at a time occasionally during films, which is also very much appreciated).

The depth and breadth of F/C’s superb online catalog of classic and independent films are breathtaking.

But there’s a lot more there than the individual movies. There are curated collections of films. Often there are all manner of “extras” — not only the kinds of additional materials familiar from DVDs like commentary tracks, discussions, and other original features, but F/C-produced materials as well.

It really is a classic film lover’s paradise.

What’s more, a few days ago it was announced that Warner Bros. was shutting down their own standalone streaming service, and transferring their vast library of hundreds of classic films to F/C — some of those have already become available and they’re great. I started into them yesterday with “Forbidden Planet” and “Casablanca” — and that’s just barely scratching the surface, of course.

Anyway, you get the idea. If you’re happy with the kind of putrid porridge that has become Netflix’s stock-in-trade these days, more power to you — enjoy.

But if you care about great films, about classic films — I urge you to give FilmStruck/Criterion a try (there’s a 14-day free trial, and you can view via a range of mobile and streaming devices, including Chromecast, Roku, etc.).

Sorry Netflix. That’s show biz!