Why Big Tech Needs Big Ethics — Right Now!


The Cambridge Analytica user trust debacle currently enveloping Facebook has once again brought into sharp focus a foundational issue that permeates Big Tech — the complex interrelationships between engineering, marketing, and ethics.

I’ve spent many years pounding on this problem, often to be told by my technologist colleagues that “Our job is just to build the stuff — let the politicians figure out the ethics!”

That attitude has always chilled me to the bone — let the *politicians* handle the ethics relating to complicated technologies? (Or anything else for that matter?) Excuse me, are we living on the same planet? On the same timeline? Hello???

So I almost choked on my coffee when I saw articles saying that Facebook was now suggesting the need for government regulation of their operations. In other words: “Stop us before we screw our users yet again!”

The last thing we need is the politicians involved. By and large they don’t understand what we’re doing, and they generally operate on the basis of image and political expediency. Politicians touching tech is typically poison.

But the status quo of Big Tech is untenable also. Google is a wonderful firm with great ideals, but with continuing user support and accessibility problems. Facebook strikes me, frankly, as having a basically evil business model. Apple is handing user data and crypto keys over to the censoring Chinese dictatorship. Microsoft, and the rest — who the hell knows from day to day?

One attitude that they’ve all shared is the “move fast and break things” mantra of Silicon Valley, along with a tendency to operate on the basis that “you never want to ask permission, just apologize later if things go wrong.”

These attitudes just aren’t going to work going forward. These firms (and their users!) are now in the crosshairs of the politicians, who see rigorous regulation of these firms as key to their political futures. They intend to accomplish this by making Big Tech “the fall guy” for a range of perceived evils, smoothing the way for various forms of micromanaged, government-imposed information control and censorship.

As we’ve already seen in Russia, China, and even increasingly in Europe, this is indeed the path to tyranny. Assuming that the USA is invulnerable to these forces would be stupidity to the max.

For too long, user support and ethical questions have had second-class status at most tech firms. It’s not that these concerns don’t exist at all, it’s that they’re often very low in the product priority hierarchies.

This must change.

Ethics, user trust, and user support issues must proactively rise to the top of these hierarchies, lest opportunistic politicians leverage the existing situation for the imposition of knee-jerk “solutions” that will not only seriously damage these firms, but will ultimately be devastating to their users and broader communities as well.

Various “traditional” industries — which long ago learned how to avoid being easily steamrolled by the politicians — have long maintained corporate roles specifically designed to help avoid these dilemmas.

Full-time ethicists and ombudsmen, for example, can play crucial roles in these respects, by helping firms to understand the cross-product, cross-team implications of their projects in relation to internal needs, user requirements, and overall effects on the world at large.

Many Internet-related firms have resisted the idea of accepting these roles within their corporate ranks, believing that their other management and public relations employees can fulfill those functions.

But in reality — and the continuing Facebook privacy disasters are but one set of examples — it takes a specific kind of longitudinal, cross-team approach to seriously, adequately, and successfully address these escalating issues.

Another argument heard against ombudsman and ethicist roles is the concern that they would hold “veto” power over product decisions. This is a fallacious argument. These roles need not imply any sort of launch or other veto abilities, and can be purely advisory in terms of internal policy decisions. But having the input of persons with these skill sets in the ongoing decision-making process is still crucial — and lacking at many of these major firms.

The time is short for firms to grasp the nettle in these regards. Politicians around the world — not just in traditional tyrannies — are taking advantage of the publicly perceived ethical and user support problems at these firms.

All through human history, governments have naturally gravitated toward controlling the information available to citizens — sometimes with laudable motives, always with horrific results.

Internet technologies provide governments with a veritable and irresistible “candy store” of possibilities for government-imposed censorship and other information control.

A key step that these firms must take to help stave off such dark outcomes is to move immediately to make Big Ethics a key part of their corporate DNA.

To do otherwise, or even to hesitate toward making such changes, could easily be tantamount to total surrender.

–Lauren–

Seriously, It’s Time to Ditch Facebook and Give Google+ a Try


One might think that with the deluge of news about how Facebook has been manipulating you and violating your privacy — and neglecting to tell you about it — Google would be taking this opportunity to point out that their own Google+ social system is very much the UnFacebook.

But sometimes Google is reticent about tooting their own horn. So what the hell, when it comes to Google+, I’m going to toot it for them.

Frankly, I’ve never trusted Facebook, and current events seem to validate those concerns yet again. Facebook is fundamentally designed to exploit users in particularly devious and disturbing ways (please see: “Fixing Facebook May Be Impossible” – https://lauren.vortex.com/2018/03/18/fixing-facebook-may-be-impossible).

Yet I’ve been quite happily communicating virtually every day with all manner of fascinating people about a vast range of topics over on Google+ (https://plus.google.com/+LaurenWeinstein) since the first day of beta availability back in 2011.

The differences between Facebook and Google+ are numerous and significant. There are no ads on Google+. Nobody can buy their way into your feed or pay Google for priority. Google doesn’t micromanage what you see. Google doesn’t sell your personal information to any third parties.

There’s overall a very different kind of sensibility on G+. There’s much less of people blabbing about the minutiae of their own lives all day long (well, perhaps except when it comes to cats — I plead guilty!), and much more discussion of issues and topics that really matter to more people. There’s much less of an emphasis on hanging around with those high school nitwits whom you despised anyway, and much more a focus on meeting new persons from around the world for intelligent discussions.

Are there any wackos or trolls on G+? Yep, they’re out there, but they never represent more than a small fraction of total interactions, and the tools are available to banish them in short order. 

There is much more of a sense of community among G+ users, without the “I hate it but I use it anyway” feeling so often expressed by Facebook users. Facebook posts all too often seem to be about “me” — G+ posts more typically are about “us” — and tend to be far more interesting as a result.

At this juncture, the Google-haters will probably start to chime in with their usual bizarre conspiracy theories. Other than suggesting that they remove their tinfoil hats so that their scalps can breathe, I can’t do much for them.

Does Google screw up from time to time? Yes. But so does Facebook, and in far, far more egregious ways. Google messes up occasionally and works to correct what went wrong. Unfortunately, not only does Facebook make mistakes, but the entire philosophy of Facebook is dead wrong — a massive, manipulative violation of users’ personal information and communications on a gargantuan scale. There simply is no comparison.

And I’ll note here what should be obvious — I wouldn’t use G+ (or other Google services) if I weren’t satisfied with the ways that they handle my data. Having consulted to Google, I have a pretty decent understanding of how this works, and I know many members of their world-class privacy team personally. If only most firms gave their customers the kinds of control over their data that Google does (“The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about).

But whether or not you decide to try Google+, please don’t keep playing along with Facebook’s sick ecosystem. Facebook has been treating its users like suckers since day one, and there’s damned little to suggest that they’re moving along anything other than an increasingly awful trajectory.

And that’s the truth.

–Lauren–

Fixing Facebook May Be Impossible


In the realm of really long odds, let’s imagine that Facebook CEO Mark Zuckerberg contacted me with this request: “Lauren, we’re in big trouble over here. I’ll do anything that you suggest to get Facebook back on the road of righteousness! Just name it and it’ll be done!”

Beyond the fact that this scenario is even less likely than Donald Trump voluntarily releasing his tax returns (though perhaps not by much!), I’m unsure that I’d have any practical ideas to help out Zuck.

The foundational problem is that any solutions with any significant chance of success would mean fundamentally changing the Facebook ecosystem in ways that would probably make it almost unrecognizable compared with their existing status quo.

Facebook is founded and structured almost entirely on the concept of straitjacketing users into narrow “walled gardens” of information, tailoring on an individual basis what they see in the most manipulative ways possible.

Perhaps even worse, Facebook permits posts to be “promoted” — that is, made visible in users’ feeds where they might not otherwise have appeared — if you pay Facebook enough money.

Contrasting these fundamentals with Google’s social media operations is instructive.

For example, while you can buy ads to appear in conjunction with search results on Google (but never mixed in with the organic results themselves), there are no ads on Google+, nor is there any way to pay Google to promote Google+ posts.

Google’s major focus — their 20th birthday is this year — has always been on making the most information possible available in an organized way — the explicit goal of Google’s founding duo.

On the other hand, Facebook’s focus has always centered on tightly supervising and controlling the information that their victims — oops, sorry — users see. Given that Zuck originally founded Facebook as a means to avoid dating what he considered to be “ugly” women, we shouldn’t be at all surprised.

I’ve never had an active Facebook account. (I do have a “stealth” account that I use so that I can survey Facebook pages, user interfaces, and similar aspects of the service that are only available to logged-in users — but I never post anything there.)

Yet I’ve never felt in any way deprived by not being an active Facebook user.

I frequently hear from people who tell me that they really hate Facebook, but that they keep using it because their friends or relatives don’t want to bother communicating with them any other way. That’s just … sad. 

But it’s not a valid excuse in the long run.

Perhaps even more to the point today, Facebook’s operating model makes it enormously vulnerable to ongoing manipulation by Russia and its affiliated entities (such as Donald Trump, his campaign, and his minions) toward undermining western democracies. 

Crucially though, this vulnerability is not the result of an accidental flaw in Facebook’s design. Rather, Facebook’s entire ecosystem is predicated on encouraging the manipulation of its users by third parties who possess the skills and financial resources to leverage Facebook’s model.

These are not aberrations at Facebook — they are exactly how Facebook was designed to operate. As the saying goes: “Working as intended!”

Yes, I could probably make some useful suggestions to Zuck. Ways to vastly improve their abysmal privacy practices. Reminding them that lying to regulators is always a bad idea. And an array of other positive propositions. 

But the reality is that for Facebook to actually, seriously implement these would entail a wholesale restructuring of what Facebook does and what they currently represent as a firm — and it’s almost impossible to see that voluntarily happening.

So I really just don’t have any good news for Zuck along these lines.

And that’s the truth.

–Lauren–

The Controversial CLOUD Act: Privacy Plus or Minus?


Over the last few days you may have seen a bunch of articles about the “CLOUD Act” — recently introduced bipartisan U.S. legislation that would overhaul key aspects of how foreign government requests for the data of foreign persons held on the servers of U.S. companies are handled.

I’m frequently being asked for my position on this, and frankly the analysis has not been a simple one.

Opponents, including EFF, the ACLU, and a variety of other privacy and civil rights groups, argue that the legislation eases access to such data by foreign governments and represents a dangerous erosion of privacy rights.

Proponents, including Apple, Facebook, Google, Microsoft, and Oath (Yahoo/Verizon) argue that the CLOUD Act provides much needed clarity to the technically and legally confused mess regarding transborder data requests, and introduces new privacy and transparency protections of its own.

One thing is for sure — the current situation IS a mess and is completely unsustainable going forward, with ever-escalating, complicated legal entanglements (e.g., the ongoing Microsoft Ireland case, with a pending Supreme Court decision likely to go against Microsoft’s attempts at promoting transborder privacy) and ever more related headaches in the future.

Cutting to the chase, I view the CLOUD Act as flawed and imperfect, but still on balance a useful effort at this time to move the ball forward in an exceedingly volatile global environment.

This is particularly true given my concerns about foreign governments’ increasing demands for “data localization” — where their citizens’ data would be stored under conditions that would frequently be subject to far fewer privacy protections than would be available under either current U.S. law or the clarified provisions of the CLOUD Act. In the absence of the CLOUD Act, such demands are certain to rapidly accelerate.

One of the more salient discussions of the CLOUD Act that I’ve seen lately is: “Why the CLOUD Act is Good for Privacy and Human Rights” (https://www.lawfareblog.com/why-cloud-act-good-privacy-and-human-rights). Regardless of how you feel about these issues, the article is well worth reading.

Let’s face it — nothing about the Net is simple.

–Lauren–

Why YouTube’s New Plan to Debunk Conspiracy Videos Won’t Work


YouTube continues to try to figure out ways to battle false conspiracy videos that rank highly on YouTube — sometimes even into the top trending lists — and that can spread to ever more viewers via YouTube’s own “recommended videos” system. I’ve offered a number of suggestions for dealing with these issues, most recently in “Solving YouTube’s Abusive Content Problems — via Crowdsourcing” (https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing).

YouTube has now announced a new initiative that they’re calling “information cues” — which they hope will address some of these problems.

Unfortunately, this particular effort (at least as reported today) is likely doomed to be almost entirely ineffective.

The idea of “information cues” is to provide false conspiracy YouTube videos with links to Wikipedia pages that “debunk” those conspiracies. So, for example, a video claiming that the Florida student shooting victims were actually “crisis actors” would presumably show a link to a Wikipedia page that explains why this wasn’t actually the case.

You probably already see the problems with this approach.

We’ll start with the obvious elephant in the room. The kind of viewers who are going to believe these kinds of false conspiracy videos are almost certainly going to say that the associated Wikipedia articles are wrong, that they’re planted lies. FAKE NEWS!

Do we really believe that anyone who would consider giving such videos even an inch of credibility is going to be convinced otherwise by Wikipedia pages? C’mon! If anything, such Wikipedia pages may actually serve to reinforce these viewers’ beliefs in the original false conspiracy videos!

Not helping matters at all is that Wikipedia’s reputation for accuracy — never all that good — has been plunging in recent years, sometimes resulting in embarrassing Knowledge Panel errors for Google in search results.

Any Wikipedia page that is not “protected” — that is, where the ordinary change process has been locked out — is subject to endlessly mutating content editing wars — and you can bet that any editable Wikipedia pages linked by YouTube from false conspiracy videos would become immediate high visibility targets for such attacks.

If there’s one thing that research into this area has already shown quite conclusively, it’s that the people who believe these kinds of garbage conspiracy theories are almost entirely unconvinced by any factual information that conflicts with their inherent points of view.

The key to avoiding the contamination caused by these vile, lying, false conspiracy videos is to minimize their visibility in the YouTube/Google ecosystem in the first place.

Not only should they be prevented from ever getting into the trending lists, they should be deranked, demonetized, and excised from the YouTube recommended video system. They should be immediately removed from YouTube entirely if they contain specific attacks against individuals or other violations of the YouTube Terms of Service and/or Community Guidelines. These actions must be taken as rapidly as possible with appropriate due diligence, before these videos are able to do even more damage to innocent parties.

Nothing less can keep such disgusting poison from spreading.

–Lauren–