EU to Domain Owners in the UK: Drop Dead!


If there were ever any remaining questions about the cruel pettiness of European Union bureaucrats and politicians — as if their use of extortionist tactics against firms like Google, and the implementation of horrific global censorship regimes like “Right To Be Forgotten” weren’t enough — the latest chapter in EU infamy should eliminate any lingering doubts.

The European Commission has now issued an edict that the more than 300,000 UK-based businesses and other UK owners of dot-EU (.eu) domain names will be kicked off of their domains — and in many cases have their websites and businesses wrecked as a result — due to Brexit.

One might readily acknowledge that the UK’s pursuit of Brexit was a historically daft and self-destructive idea, but it took the EU to treat UK businesses caught in the middle as if they were victims from one of the torture-porn “SAW” movies. The more blood and pain the merrier, right, gents?

The EU pronouncement is loaded with legalistic mumbo-jumbo, but is being widely interpreted as saying not only that UK entities can’t register or even renew existing dot-EU domains after about a year from now, but that perhaps even existing registrations might be terminated as of that date as well — apparently with no right of appeal.

There’s talk that there might be a small chance of negotiations to avert some of this. But the mere fact that the EC would issue such a statement — completely at odds with the way that domain transition issues have been routinely handled on the Internet for decades — gives us vast insight into the cosmic train wreck represented by increased European Union influence over Internet policies and operations.

Just when you begin to think that the EU can’t come up with an even worse way of wrecking the Net, they fool us once again with ever more awful new lows.

Congratulations!

–Lauren–

Why Big Tech Needs Big Ethics — Right Now!

The Cambridge Analytica user trust debacle currently enveloping Facebook has once again brought into sharp focus a foundational issue that permeates Big Tech — the complex interrelationships between engineering, marketing, and ethics.

I’ve spent many years pounding on this problem, often to be told by my technologist colleagues that “Our job is just to build the stuff — let the politicians figure out the ethics!”

That attitude has always chilled me to the bone — let the *politicians* handle the ethics relating to complicated technologies? (Or anything else for that matter?) Excuse me, are we living on the same planet? On the same timeline? Hello???

So I almost choked on my coffee when I saw articles saying that Facebook was now suggesting the need for government regulation of their operations, aka “Stop us before we screw our users yet again!”

The last thing we need is the politicians involved. They by and large don’t understand what we’re doing, and they generally operate on the basis of image and political expediency. Politicians touching tech is typically poison.

But the status quo of Big Tech is untenable also. Google is a wonderful firm with great ideals, but with continuing user support and accessibility problems. Facebook strikes me, frankly, as having a basically evil business model. Apple is handing user data and crypto keys over to the censoring Chinese dictatorship. Microsoft, and the rest — who the hell knows from day to day?

One aspect that they’ve all shared is the “move fast and break things” mantra of Silicon Valley, and a tendency to operate on the basis that “you never want to ask permission, just apologize later if things go wrong.”

These attitudes just aren’t going to work going forward. These firms (and their users!) are now in the crosshairs of the politicians, who see rigorous regulation of these firms as key to their political futures, and they intend to accomplish this by making Big Tech “the fall guy” for a range of perceived evils — smoothing the way for various forms of micromanaged, government-imposed information control and censorship.

As we’ve already seen in Russia, China, and even increasingly in Europe, this is indeed the path to tyranny. Assuming that the USA is invulnerable to these forces would be stupidity to the max.

For too long, user support and ethical questions have had second-class status at most tech firms. It’s not that these concerns don’t exist at all, it’s that they’re often very low in the product priority hierarchies.

This must change.

Ethics, user trust, and user support issues must proactively rise to the top of these hierarchies, lest opportunistic politicians leverage the existing situation for the imposition of knee-jerk “solutions” that will not only seriously damage these firms, but will ultimately be devastating to their users and broader communities as well.

Various “traditional” industries — which long ago learned how to avoid being easily steamrolled by the politicians — have long maintained corporate roles to help avoid these dilemmas.

Full-time ethicists and ombudsmen, for example, can play crucial roles in these respects, by helping firms to understand the cross-product, cross-team implications of their projects in relation to internal needs, user requirements, and overall effects on the world at large.

Many Internet-related firms have resisted the idea of accepting these roles within their corporate ranks, believing that their other management and public relations employees can fulfill those functions.

But in reality — and the continuing Facebook privacy disasters are but one set of examples — it takes a specific kind of longitudinal, cross-team approach to seriously, adequately, and successfully address these escalating issues.

Another argument heard against ombudsman and ethicist roles concerns their supposedly having “veto” power over product decisions. This is a fallacious argument. These roles need not imply any sort of launch or other veto abilities, and can be purely advisory in terms of internal policy decisions. But having the input of persons with these skill sets in the ongoing decision-making process is still crucial — and lacking at many of these major firms.

The time is short for firms to grasp the nettle in these regards. Politicians around the world — not just in traditional tyrannies — are taking advantage of the publicly perceived ethical and user support problems at these firms.

All through human history, governments have naturally gravitated toward controlling the information available to citizens — sometimes with laudable motives, always with horrific results.

Internet technologies provide governments with a veritable and irresistible “candy store” of possibilities for government-imposed censorship and other information control.

A key step that these firms must take to help stave off such dark outcomes is to move immediately to make Big Ethics a key part of their corporate DNA.

To do otherwise, or even to hesitate toward making such changes, could easily be tantamount to total surrender.

–Lauren–

Seriously, It’s Time to Ditch Facebook and Give Google+ a Try

One might think that with the deluge of news about how Facebook has been manipulating you and violating your privacy — and neglecting to tell you about it — Google would be taking this opportunity to point out that their own Google+ social system is very much the UnFacebook.

But sometimes Google is reluctant to toot their own horn. So what the hell, when it comes to Google+, I’m going to toot it for them.

Frankly, I’ve never trusted Facebook, and current events seem to validate those concerns yet again. Facebook is fundamentally designed to exploit users in particularly devious and disturbing ways (please see: “Fixing Facebook May Be Impossible” – https://lauren.vortex.com/2018/03/18/fixing-facebook-may-be-impossible).

Yet I’ve been quite happily communicating virtually every day with all manner of fascinating people about a vast range of topics over on Google+ (https://plus.google.com/+LaurenWeinstein), since the first day of beta availability back in 2011.

The differences between Facebook and Google+ are numerous and significant. There are no ads on Google+. Nobody can buy their way into your feed or pay Google for priority. Google doesn’t micromanage what you see. Google doesn’t sell your personal information to any third parties.

There’s overall a very different kind of sensibility on G+. There’s much less of people blabbing about the minutiae of their own lives all day long (well, perhaps except when it comes to cats — I plead guilty!), and much more discussion of issues and topics that really matter to more people. There’s much less of an emphasis on hanging around with those high school nitwits whom you despised anyway, and much more a focus on meeting new persons from around the world for intelligent discussions.

Are there any wackos or trolls on G+? Yep, they’re out there, but they never represent more than a small fraction of total interactions, and the tools are available to banish them in short order. 

There is much more of a sense of community among G+ users, without the “I hate it but I use it anyway” feeling so often expressed by Facebook users. Facebook posts all too often seem to be about “me” — G+ posts more typically are about “us” — and tend to be far more interesting as a result.

At this juncture, the Google-haters will probably start to chime in with their usual bizarre conspiracy theories. Other than suggesting that they remove their tinfoil hats so that their scalps can breathe, I can’t do much for them.

Does Google screw up from time to time? Yes. But so does Facebook, and in far, far more egregious ways. Google messes up occasionally and works to correct what went wrong. Unfortunately, not only does Facebook make mistakes, but the entire philosophy of Facebook is dead wrong — a massive, manipulative violation of users’ personal information and communications on a gargantuan scale. There simply is no comparison.

And I’ll note here what should be obvious — I wouldn’t use G+ (or other Google services) if I weren’t satisfied with the ways that they handle my data. Having consulted to Google, I have a pretty decent understanding of how this works, and I know many members of their world-class privacy team personally. If only most firms gave their customers the kinds of control over their data that Google does (“The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about).

But whether or not you decide to try Google+, please don’t keep playing along with Facebook’s sick ecosystem. Facebook has been treating its users like suckers since day one, and there’s damned little to suggest that they’re moving along anything other than an increasingly awful trajectory.

And that’s the truth.

–Lauren–

Fixing Facebook May Be Impossible

In the realm of really long odds, let’s imagine that Facebook CEO Mark Zuckerberg contacted me with this request: “Lauren, we’re in big trouble over here. I’ll do anything that you suggest to get Facebook back on the road of righteousness! Just name it and it’ll be done!”

Beyond the fact that this scenario is even less likely than Donald Trump voluntarily releasing his tax returns (though perhaps not by much!), I’m unsure that I’d have any practical ideas to help out Zuck.

The foundational problem is that any solutions with any significant chance of success would mean fundamentally changing the Facebook ecosystem in ways that would probably make it almost unrecognizable compared with their existing status quo.

Facebook is founded and structured almost entirely on the concept of straitjacketing users into narrow “walled gardens” of information, tailoring on an individual basis what they see in the most manipulative ways possible.

Perhaps even worse, Facebook permits posts to be “promoted” — that is, made visible in users’ feeds where they would not otherwise have appeared — if you pay Facebook enough money.

Contrasting these fundamentals with Google’s social media operations is instructive.

For example, while you can buy ads to appear in conjunction with search results on Google (but never mixed in with the organic results themselves), there are no ads on Google+, nor is there any way to pay Google to promote Google+ posts.

Google’s major focus — their 20th birthday is this year — has always been on making the most information possible available in an organized way — the explicit goal of Google’s founding duo.

On the other hand, Facebook’s focus has always centered on tightly supervising and controlling the information that their victims — oops, sorry — users see. Given that Zuck originally founded Facebook as a means to avoid dating what he considered to be “ugly” women, we shouldn’t be at all surprised.

I’ve never had an active Facebook account (I do have a “stealth” account that I use so that I can survey Facebook pages, user interfaces, and similar aspects of the service that are only available to logged-in users — but I never post anything there).

Yet I’ve never felt in any way deprived by not being an active Facebook user.

I frequently hear from people who tell me that they really hate Facebook, but that they keep using it because their friends or relatives don’t want to bother communicating with them any other way. That’s just … sad. 

But it’s not a valid excuse in the long run.

Perhaps even more to the point today, Facebook’s operating model makes it enormously vulnerable to ongoing manipulation by Russia and its affiliated entities (such as Donald Trump, his campaign, and his minions) toward undermining western democracies. 

Crucially though, this vulnerability is not the result of an accidental flaw in Facebook’s design. Rather, Facebook’s entire ecosystem is predicated on encouraging the manipulation of its users by third parties who possess the skills and financial resources to leverage Facebook’s model.

These are not aberrations at Facebook — they are exactly how Facebook was designed to operate. As the saying goes: “Working as intended!”

Yes, I could probably make some useful suggestions to Zuck. Ways to vastly improve their abysmal privacy practices. Reminding them that lying to regulators is always a bad idea. And an array of other positive propositions. 

But the reality is that for Facebook to actually, seriously implement these would entail a wholesale restructuring of what Facebook does and what they currently represent as a firm — and it’s almost impossible to see that voluntarily happening.

So I really just don’t have any good news for Zuck along these lines.

And that’s the truth.

–Lauren–

The Controversial CLOUD Act: Privacy Plus or Minus?


Over the last few days you may have seen a bunch of articles about the “CLOUD Act” — recently introduced U.S. bipartisan legislation that would overhaul key aspects of how foreign government requests for the data of foreign persons held on the servers of U.S. companies would be handled.

I’m being frequently asked for my position on this, and frankly the analysis has not been a simple one.

Opponents, including EFF, the ACLU, and a variety of other privacy and civil rights groups, argue that the legislation eases access to such data by foreign governments and represents a dangerous erosion of privacy rights.

Proponents, including Apple, Facebook, Google, Microsoft, and Oath (Yahoo/Verizon) argue that the CLOUD Act provides much needed clarity to the technically and legally confused mess regarding transborder data requests, and introduces new privacy and transparency protections of its own.

One thing is for sure — the current situation IS a mess and completely unsustainable going forward, with ever-escalating, complicated legal entanglements (e.g., the ongoing Microsoft Ireland case, with a pending Supreme Court decision likely to go against Microsoft’s attempts at promoting transborder privacy) and ever more related headaches in the future.

Cutting to the chase, I view the CLOUD Act as flawed and imperfect, but still on balance a useful effort at this time to move the ball forward in an exceedingly volatile global environment.

This is particularly true given my concerns about foreign governments’ increasing demands for “data localization” — where their citizens’ data would be stored under conditions that would frequently be subject to far fewer privacy protections than would be available under either current U.S. law or the clarified provisions of the CLOUD Act. In the absence of the CLOUD Act, such demands are certain to rapidly accelerate.

One of the more salient discussions of the CLOUD Act that I’ve seen lately is: “Why the CLOUD Act is Good for Privacy and Human Rights” (https://www.lawfareblog.com/why-cloud-act-good-privacy-and-human-rights). Regardless of how you feel about these issues, the article is well worth reading.

Let’s face it — nothing about the Net is simple.

–Lauren–

Why YouTube’s New Plan to Debunk Conspiracy Videos Won’t Work


YouTube continues to try to figure out ways to battle false conspiracy videos that rank highly on YouTube — sometimes even into the top trending lists — and that can spread to ever more viewers via YouTube’s own “recommended videos” system. I’ve offered a number of suggestions for dealing with these issues, most recently in “Solving YouTube’s Abusive Content Problems — via Crowdsourcing” (https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing).

YouTube has now announced a new initiative that they’re calling “information cues” — which they hope will address some of these problems.

Unfortunately, this particular effort (at least as being reported today) is likely doomed to be almost entirely ineffective.

The idea of “information cues” is to provide false conspiracy YouTube videos with links to Wikipedia pages that “debunk” those conspiracies. So, for example, a video claiming that the Florida student shooting victims were actually “crisis actors” would presumably show a link to a Wikipedia page that explains why this wasn’t actually the case.

You probably already see the problems with this approach.

We’ll start with the obvious elephant in the room. The kind of viewers who are going to believe these kinds of false conspiracy videos are almost certainly going to say that the associated Wikipedia articles are wrong, that they’re planted lies. FAKE NEWS!

Do we really believe that anyone who would consider giving such videos even an inch of credibility is going to be convinced otherwise by Wikipedia pages? C’mon! If anything, such Wikipedia pages may actually serve to reinforce these viewers’ beliefs in the original false conspiracy videos!

Not helping matters at all is that Wikipedia’s reputation for accuracy — never all that good — has been plunging in recent years, sometimes resulting in embarrassing Knowledge Panel errors for Google in search results.

Any Wikipedia page that is not “protected” — that is, where the ordinary change process has been locked out — is subject to endlessly mutating content editing wars — and you can bet that any editable Wikipedia pages linked by YouTube from false conspiracy videos would become immediate high-visibility targets for such attacks.

If there’s one thing that research into this area has already shown quite conclusively, it’s that the people who believe these kinds of garbage conspiracy theories are almost entirely unconvinced by any factual information that conflicts with their inherent points of view.

The key to avoiding the contamination caused by these vile, lying, false conspiracy videos is to minimize their visibility in the YouTube/Google ecosystem in the first place.

Not only should they be prevented from ever getting into the trending lists, they should be deranked, demonetized, and excised from the YouTube recommended video system. They should be immediately removed from YouTube entirely if they contain specific attacks against individuals or other violations of the YouTube Terms of Service and/or Community Guidelines. These actions must be taken as rapidly as possible with appropriate due diligence, before these videos are able to do even more damage to innocent parties.
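For clarity, here's a minimal Python sketch of the graduated enforcement ladder just described; all of the category names and action strings here are my own hypothetical illustrations, not any actual YouTube mechanism:

```python
from enum import Enum, auto

class Violation(Enum):
    """Hypothetical classifications for a flagged video."""
    NONE = auto()
    FALSE_CONSPIRACY = auto()   # lying content, but no direct ToS violation
    TOS_VIOLATION = auto()      # specific attacks on individuals, etc.

def enforcement_actions(violation):
    """Map a classification to the graduated actions proposed above."""
    if violation is Violation.NONE:
        return []
    actions = [
        "block from trending lists",
        "derank",
        "demonetize",
        "excise from recommended-videos system",
    ]
    if violation is Violation.TOS_VIOLATION:
        # Terms of Service / Community Guidelines violations warrant
        # immediate, outright removal from YouTube entirely.
        actions.append("remove from YouTube")
    return actions

print(enforcement_actions(Violation.FALSE_CONSPIRACY))
```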

Nothing less can keep such disgusting poison from spreading.

–Lauren–

Solving YouTube’s Abusive Content Problems — via Crowdsourcing


We all know that various governments have their long knives out regarding YouTube content. We know that Google is significantly increasing the number of workers who will review YT abuse reports.

But we also know that the volume of videos in the uploading firehose is going to continue leaving very large numbers of abusive videos online that may quickly achieve high numbers of views, even if YT employed techniques that I’ve previously urged, such as human review of videos that are about to go onto the trending lists before they actually do so.

The scale of videos is enormous — yet the scale of viewing users is also very large.

Is there some way to leverage the latter to help deal with abusive content in the former, as a proactive effort to help keep government censorship of YT at bay?

YT already has a “Trusted Flaggers” program that gives abuse review priority to videos that these users have flagged. But (as far as I know) this only applies to videos that these users have happened to find and see of their own volition. 

I don’t have the hard data to prove this, but I have a strong suspicion that vast numbers of users would be willing to participate as organized volunteer proactive “screeners” of a sort for YT, especially if there were even some minor financial incentive for their participation (think in terms of a small amount of Play Store credit, for example).

What if public videos that were suddenly attracting significant numbers of views (“significant” yet to be defined) were pushed to some odd number (to avoid ties) of such volunteer viewers who have undergone appropriate online training regarding YT’s Terms of Use? We’d require that they actually view reasonable amounts of these videos (yes, there would be ways to attempt gaming this, but remember we’re talking about very large numbers of volunteers, so much of that risk should wash out if care is used in tracking analysis).

They vote/rate the videos acceptable or not. If the majority vote a video as unacceptable, it gets pushed to the formal G abuse screeners for a decision. If any given volunteer is found over time to be providing bad decisions, they’re dropped from the program.

Most videos would have small enough numbers of views to never even enter this system. But it would provide a middle ground to help deal with videos that are suddenly getting more visibility *before* they can cause big problems, and this technique doesn’t rely on random viewers taking the initiative to flag abusive videos (and for that matter figuring out how to flag them, since flagging is not typically a top level YT user interface element these days, as I’ve previously noted).

Since participants in this program would not have any control over which specific videos they’d be pushed for a vote, and since again we’d be talking about quite large numbers of participants (and we’d be monitoring their performance over time), the ability to purposely claim that nonabusive videos were abusive (or the reverse) would be minimized.

No video would have action taken against it unless it had also been declared abusive by a regular YT screener in the pipeline after the volunteer screeners down-voted it — providing even more protection.
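A minimal sketch of this volunteer screening pipeline might look something like the following; every name and threshold is a hypothetical stand-in (again, “significant” is yet to be defined), and the volunteer's actual judgment is stubbed out:

```python
import random
from dataclasses import dataclass

# Hypothetical parameters -- "significant" views is yet to be defined.
VIEW_SPIKE_THRESHOLD = 10_000   # recent views that trigger volunteer review
PANEL_SIZE = 7                  # odd, so majority votes can't tie
MIN_RELIABILITY = 0.75          # volunteers scoring below this are dropped

@dataclass
class Video:
    video_id: str
    recent_views: int

class Volunteer:
    """A trained volunteer screener whose long-term accuracy is tracked."""
    def __init__(self, volunteer_id):
        self.volunteer_id = volunteer_id
        self.decisions = 0
        self.agreements = 0   # votes later confirmed by a formal screener

    def reliability(self):
        return self.agreements / self.decisions if self.decisions else 1.0

    def judge(self, video):
        # Stub: in reality the volunteer watches the video and rates it
        # against the Terms of Service. True means "unacceptable".
        return random.random() < 0.1

def review(video, volunteers, formal_screener):
    """One pass through the pipeline: threshold gate, panel vote, escalation."""
    if video.recent_views < VIEW_SPIKE_THRESHOLD:
        return "not reviewed"   # most videos never enter the system at all

    # Panels are assigned at random, so volunteers can't pick their targets.
    eligible = [v for v in volunteers if v.reliability() >= MIN_RELIABILITY]
    panel = random.sample(eligible, PANEL_SIZE)
    votes = [v.judge(video) for v in panel]

    if sum(votes) <= PANEL_SIZE // 2:
        return "acceptable"     # no majority against it; nothing happens

    # Majority voted "unacceptable": escalate. Action is taken only if a
    # formal YT screener independently agrees -- the extra protection above.
    verdict = formal_screener(video)
    for volunteer, vote in zip(panel, votes):
        volunteer.decisions += 1
        if vote == (verdict == "abusive"):
            volunteer.agreements += 1
    return verdict

# Example run with stand-in data:
volunteers = [Volunteer(f"v{i}") for i in range(100)]
result = review(Video("abc123", recent_views=50_000), volunteers,
                formal_screener=lambda v: "abusive")
print(result)
```

Even in this simplified form, the key properties of the proposal survive: random panel assignment, an odd panel size, reliability tracking against the formal screeners' verdicts, and no direct action from volunteer votes alone.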

How to define abusive videos is of course a separate discussion relating directly to the YT Terms of Service, but this could include the kinds of content violations that we all know about in relation to YT (hate speech, dangerous pranks and dares, threats, etc.), and even areas such as obvious obnoxious Content ID evasions (e.g., program/movie video inset boxes against random backgrounds, artificial program run time variations, and so on).

I do realize that this is a fairly radical concept and that there are all manner of details that aren’t considered in this brief summary. But I am increasingly convinced that it’s going to take some sort of new approach to help deal with these problems proactively, and to help forestall governments from moving in and wrecking the wonderful YouTube ecosystem with escalating politically motivated demands and threats.

–Lauren–

The Ethics of Google and the Pentagon Drones

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks Google for doing the right thing for Google, Googlers, and the community at large.

UPDATE (May 31, 2018): Google and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

– – –

Many years ago, I was the systems guy who ran the early UNIX minicomputers in the basement of Santa Monica’s RAND Corporation. While RAND at the time derived the vast majority of its income from Department of Defense contracts, I was there despite my lifelong refusal to work directly on military-related projects (to the significant detriment of my own income, I might add). RAND spoke truth to power. DoD could contract with RAND for a report on some given topic, but RAND wouldn’t skew a report to reach results that the contractor had hoped for. I admired that.

One midday I was eating lunch in an open patio between the offices there, chatting with a couple of the military research guys. At the time, one focus of DoD interest was use of mainframe and minicomputer systems to analyze battlefield data, such as it was back then. My lunchmates assured me that their work was all defensive in nature.

I asked how they could be sure that the same analytical systems they intended for defense couldn’t also be used by the military for actually killing people. “We have to trust them,” came the reply. “The technology is inherently dual use.”

It seemed to me that battlefield data analysis was fundamentally different from the DoD-funded projects I also worked on — with ARPANET being the obvious example. Foundational communications research is not in the same category as calculating how to more efficiently kill your enemy. At least that’s how I felt at the time, and I still feel that way. There’s nothing inherently evil in accepting money from DoD — the ethical issues revolve around the specifics of the projects involved.

Fast forward to the controversy that has arisen today, about which I’ve been flooded with queries — word that Google has been engaged in “Project Maven” for DoD, using Google AI/Machine Learning tech to analyze footage from military drones. Apparently this wasn’t widely known even internally at Google, until the topic recently found its way to internal discussion groups and then leaked to the public. Needless to say, there reportedly has been quite considerable internal controversy about this.

“How do you feel about this, Lauren?” I’m being asked.

Since I frequently play armchair ethicist, I’ve been giving this question a lot of thought today.

The parallels with that lunch discussion at RAND so long ago seem striking.  The military wanted to analyze battlefield data back then, and they want to analyze military drone data now.

There are no simple answers.

But we can perhaps begin with the problem of innocent civilian deaths resulting from U.S. drone strikes. We know that the designated terrorist targets are frequently purposely embedded in civilian areas, and often travel with civilians who have little or no choice in the matter — such as children and other family members.

While the Pentagon (as they did during the Vietnam war) makes a grand show about body counts, it’s not clear that most of these drone strikes have much long-term anti-terrorism impact. The targets are frequently fungible — kill one leader and another moves right in. Liquidate one bomb maker and the position is quickly filled by another.

So, ethical question #1: Are these drone strikes justifiable at all? To answer this question honestly, we must of course consider the rate of collateral civilian deaths and injuries, which are sure to inspire further anti-U.S. rhetoric and attacks.

My personal belief is that in most cases — at least to the extent that we in the public are aware — the answer to this question is generally no.

Which brings us to ethical question #2 (or rather, a set of questions): Does supplying advanced image processing and analysis systems for military drone data fall into an ethically acceptable category, provided that such analysis is not specifically oriented toward targeting for lethal operations? Can it be reasonably argued that more precise targeting could also help to prevent civilian casualties, even when those civilians are in immediate proximity to the intended targets? Or is providing such facilities also ethical even if direct lethal operations are known in advance to be the likely result, toward the advancement of currently stated U.S. interests?

And after all, much of our technology today can be easily repurposed in ways that we technologists had not intended — for example, for oppressive governments to surveil and censor their own citizens.

Yet the immense potential power of rapidly advancing AI and Machine Learning systems does cast these kinds of issues in a new and qualitatively different kind of light. And that’s even if we leave aside a business-based analysis that some firms might make, noting that if they don’t provide the services, some other company will do so anyway, and get the contracts as well as the income.

I know absolutely nothing about Google’s participation in Project Maven other than what I’ve seen in public sources today.

But to try to address the gist of my own questions from just above, based on what I know right now, I believe that Google has a significant ethical quandary on their hands in this regard.

I personally doubt that this kind of powerful tech can be constrained through contractual relationships to purely defensive use. I also feel that the decision regarding whether or not any given firm is willing to accept that its technology may be used for lethal purposes is one that should be made “eyes wide open” —  and is worthy of nothing less than effectively a significant level of company-wide consensus before proceeding.

It has been ages since I even thought about that long-ago lunch conversation at RAND. It’s indeed disquieting to be thinking about it again today.

Be seeing you.

–Lauren–

Why I Finally Dumped Netflix (and Love FilmStruck/Criterion)

UPDATE (November 16, 2018):  New, Independent Criterion Channel to Launch Spring 2019

– – –

UPDATE (October 26, 2018): Warner Media — controlled by those sick bastards at AT&T since the horrific merger — is shutting down FilmStruck on November 29th. AT&T: Always finding new ways to enrich ourselves and screw you. Thank you for using AT&T!

– – –

Yesterday was my last day subscribing to Netflix. Miss them, I will not. I had been meaning to kill the subscription for quite some time, finally pulled the trigger a couple of weeks ago, and the final days ran out at the end of February.

It’s been painful to watch Netflix’s escalating deterioration and hubris. After arguably putting movie rental stores out of business almost single-handedly, Netflix decided that they no longer really cared about classic films.

Netflix CEO Reed Hastings wants to play Hollywood movie mogul for himself. So Netflix has been decimating its online catalog of classic, quality films, and replacing them with a cavalcade of mediocre productions. Their corpus of classic television has been going in the same direction for ages now.

What’s more, Netflix is spending billions of dollars — reportedly $8 billion just this year — to produce its own stream of mostly unwatchable films and series — which they continuously promote through app screensavers and in every other way possible.

It’s gotten to the point that whenever you hear the characteristic loud “thum thum!” that precedes a Netflix production, you know it’s time to move on.

That’s not to say that Netflix doesn’t occasionally produce a quality film or show — but the ratio is awful, and seems to be mostly of the “stopped clock is correct twice a day” variety.

Their “You might like this, Lauren!” recommendations stink. You can dig through their online listings for ages and find nothing even remotely worth your time.

Bye bye Netflix.

Luckily for those of us who care about classic films and quality films in general, there’s a superb online alternative — FilmStruck/Criterion:

https://www.filmstruck.com

FilmStruck is a service of Turner Broadcasting, who also produce the always excellent Turner Classic Movies (TCM) channel, of which I’ve been a fan since its inception many years ago. 

I subscribed to FilmStruck (and their wonderful Criterion Collection add-on) some weeks ago, around the same time that I issued my Netflix cancellation (Netflix and FilmStruck/Criterion pricing are very similar, by the way).

One of the best entertainment-related decisions I’ve ever made.

It would be fair to call F/C something of a TCM on super-steroids (and in fact, F/C has just now begun to integrate some new F/C intros from TCM hosts, and classic materials from the TCM archives — super!)

Are there downsides? Well, in all honesty F/C’s website is pretty slow and clunky. Their device apps need significant work. While you can run three simultaneous video streams, there’s no mechanism for separate users per se. 

I don’t care. All of that logistical stuff will certainly improve with time. 

Once the video streams are running they look great. Films are in HD whenever possible and are in reasonable aspect ratios. There are no “ID bugs” on the screen during films (and here I’ll also note that TCM has always had a policy of keeping their ID bugs to an absolute minimum — just a few seconds at a time occasionally during films, which is also very much appreciated).

The depth and breadth of F/C’s superb classic and independent films online catalog are breathtaking.

But there’s a lot more there than the individual movies. There are curated collections of films. Often there are all manner of “extras” — not only the kinds of additional materials familiar from DVDs like commentary tracks, discussions, and other original features, but F/C-produced materials as well.

It really is a classic film lover’s paradise.

What’s more, a few days ago it was announced that Warner Bros. was shutting down their own standalone streaming service, and transferring their vast library of hundreds of classic films to F/C — some of those have already become available and they’re great. I started into them yesterday with “Forbidden Planet” and “Casablanca” — and that’s just barely scratching the surface, of course.

Anyway, you get the idea. If you’re happy with the kind of putrid porridge that has become Netflix’s stock-in-trade these days, more power to you — enjoy.

But if you care about great films, about classic films — I urge you to give FilmStruck/Criterion a try (there’s a 14 day free trial, and you can view via a range of mobile and streaming devices, including Chromecast, Roku, etc.)

Sorry Netflix. That’s show biz!

–Lauren–