How AI Could Save Us All

“And pray that there’s intelligent life somewhere up in space, ’cause there’s bugger all down here on Earth.” —
“The Galaxy Song” (“Monty Python’s The Meaning of Life” – 1983).

– – –

It’s very popular to trash “artificial intelligence” (AI) these days.

While reasoned warnings regarding how AI-based systems could be abused (and/or generate inappropriately “biased” decisions) are appropriate, various folks in the public eye — some of whom really should know better — have been proclaiming nightmare scenarios of AI relegating us mere humans to the status of pets, slaves, or worse — perhaps “batteries” as in “The Matrix” (1999). Or maybe just fertilizer for decorative displays.

There’s certainly a long history of cinematic representations of “intelligent” systems run amok. Earlier than “The Matrix” we saw a computer lethally hijack a space mission (“HAL” in “2001: A Space Odyssey” – 1968); another computer imprison, rape, and impregnate a woman (“Proteus” in “Demon Seed” – 1977); and a pair of computing systems take over the world (“Colossus” and “Guardian” in “Colossus: The Forbin Project” – 1970). And of course there’s the scorched Earth world of “Skynet” in “The Terminator” (1984), and hybrid threats that may be even scarier, like the “Borg” from the “Star Trek” universe. And more, many more.

All of these cultural references have a real impact on how we think about AI today. We’re predisposed to be fearful of systems that we believe might be “smarter” in some ways than we are.

Monty Python may have been partly correct all along. In a world where a moronic creature like Donald Trump can be elected to the most powerful role on the planet, we should probably be seeking out intelligence to augment our own, wherever we can find it.

Seriously, since it could be a long, long time (if ever) before we hear from interstellar civilizations (and Stephen Hawking’s prediction that this might be a seriously losing proposition for humanity could indeed be accurate), we need to concentrate on intelligence augmentation systems that we can build ourselves.

The word “augmentation” is crucial here. The human brain is a marvel in so many creative and imaginative ways. But it’s easily overwhelmed by data, subject to disruptive distractions, and is ill-suited to solving critical planetary-scale problems on its own.

The key to a happy coexistence between humans and AI systems — even advanced AI systems — is to keep in sharp focus where we excel and where the AI systems that we develop can be most effectively and successfully deployed.

Two ways that we can get into trouble are by trying to use AI and “expert systems” as shortcuts to solve problems for which they aren’t actually suited, or by assuming that the data we provide to these systems is always accurate and fair, when in some cases it’s actually biased and unfair (we’ve already seen this problem in some systems that attempt to predict criminal recidivism, for example). The computing adage “garbage in, garbage out” is as true today as it was in the ancient era of punched card computing.
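
To make the adage concrete, here’s a toy sketch in Python (entirely my own illustration, not drawn from any real recidivism system) showing how a trivial “predictor” trained on biased historical data faithfully reproduces that bias:

    # Hypothetical training records: (neighborhood, was_rearrested).
    # Neighborhood "A" was historically over-policed, so its arrest
    # rate in the data is inflated relative to actual behavior.
    history = ([("A", True)] * 60 + [("A", False)] * 40 +
               [("B", True)] * 30 + [("B", False)] * 70)

    def train(records):
        # "Learn" a per-group risk score: just the historical base rate.
        rates = {}
        for group in {g for g, _ in records}:
            outcomes = [hit for g, hit in records if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    model = train(history)
    print(model)  # e.g. {'A': 0.6, 'B': 0.3} -- the "model" simply echoes
                  # the biased data, scoring anyone from "A" as double the risk.

Real systems are vastly more sophisticated, but the underlying failure mode is the same: a model can only be as fair as the data it’s fed.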

Obviously, we don’t want to screw this up. There are real challenges and significant (but fascinating!) issues to be solved in the ongoing development and deployment of AI systems, and in helping non-technical persons to better understand what these systems are really about and how they could actually change their lives for the better.

And to the extent that we can concentrate on the real world of AI — and less on dramatic “doom and gloom” scenarios straight out of the movies — I believe that we’ll all be better off.

–Lauren–

Where I Stand on the Proposed Merger of T-Mobile and Sprint

UPDATED (May 26, 2018): REVOKING MY SUPPORT FOR THIS MERGER: With word yesterday that T-Mobile is paying duplicitous, lying fascists like former Trump campaign manager and current confidant Corey Lewandowski — and other members of the same consulting firm — for “how to kiss up to sociopathic, racist Donald Trump” advice, I hereby revoke my support for this merger. On its own terms, in an isolated universe, it makes sense. But if the cost of success for the merger is this kind of disgusting kowtowing and feeding of the beast, then the price is far too high. T-Mobile CEO John Legere has one hell of a lot to answer for on this one. ANYTHING for the merger, right John? The road to hell is paved with attitudes like yours.

– – –

Some proposed mergers are disasters for consumers. Back in 2011, AT&T tried to merge with T-Mobile, sending a chill down the spine of longtime T-Mobile subscribers like me (I’ve been with T-Mobile since the first day of Google Android availability with the original “G1” phone — now nearly 10 years ago). Twice before, I’d been unwillingly dragged into AT&T mobile services by mergers.

The proposed merger of AT&T and T-Mobile was abandoned when the Obama Justice Department wisely filed to block it.

In the years since, T-Mobile and Sprint have had an on-again, off-again courtship regarding a potential merger. Today they announced a definitive agreement to actually merge. Even under Trump, regulatory approval of the merger (which could take at least a year) is by no means guaranteed, since it would reduce the number of major mobile carriers in the USA from four to three.

I am, however, fairly sanguine about this merger proposal based on the descriptions I’ve seen this morning. The combined company will be firmly under T-Mobile’s control, with T-Mobile’s current CEO and COO retaining their positions, and the combined entity reportedly named — you guessed it — T-Mobile. Magenta for the win!

And frankly, at this stage of the game, I see this combined firm as being the most effective practical competition against the serious telecom bullies like AT&T, Verizon, Comcast, and Charter.

The devil is always in the details, but at least the potential for this merger ultimately being significantly consumer-positive seems to be in the cards.

We shall see.

–Lauren–

My Initial Impressions of Google’s New Gmail User Interface

Google launched general access to their first significant Gmail user interface (UI) redesign in many years today. It’s rolling out gradually — when it hits your account you’ll see a “Try the new Gmail” choice under the settings (“gear”) icon on the upper right of the page (you can also revert to the “classic” interface for now, via the same menu).

But you probably won’t need to revert. Google clearly didn’t want to screw up Gmail, and my initial impression is that they’ve succeeded by avoiding radical changes in the UI. I’ll bet that some casual Gmail users might not even immediately notice the differences.

This will all come as a great relief to many Gmail users, who have watched with increasing alarm the recent march of Google UIs toward low contrast designs that are difficult for many persons to read (e.g. as discussed in “Does Google Hate Old People?” – https://lauren.vortex.com/2017/02/06/does-google-hate-old-people).

I certainly won’t take credit for Gmail not falling into that kind of design trap, but perhaps Google has indeed been taking some previously stated concerns to heart.

The new Gmail UI is what we could call a “minimally disruptive” redesign of the now “classic” version. The overall design is not altered in major respects. So far I haven’t found any notable missing features, options, or settings. My impression is that the back end systems serving Gmail are largely unchanged. Additionally, a number of new features (some of which are familiar in design from Google’s “Inbox” email interface) are now surfaced in the new Gmail.

Crucially, overall readability and usability (including contrast, font choices, UI selection elements, etc.) seem so close to classic Gmail (at least in my limited testing so far) as to make any differences essentially inconsequential. And it’s still possible to select a dark theme from settings if you wish, which results in even higher contrast.

So overall, my sense is that Google has done an excellent job with this interface refresh, and I’m hoping that the philosophy leading to this design — particularly in terms of user interface readability and ease of use — will carry over to other Google products and services going forward.

My kudos to the Gmail team!

–Lauren–

Google Reportedly Plans New Protections for YouTube Kids — Let’s Get Them Right!

Reports are circulating that Google plans to implement some important new protections for their YouTube Kids offering, in particular providing a means for parents to ensure that their children only see videos that have been human-curated and/or are from appropriately trusted YouTube channels.

The goal would be to avoid children being exposed to the kinds of sick garbage that currently still manages to seep into the YouTube Kids recommendation engine’s suggested videos.

I have been calling for exactly this kind of approach for YouTube Kids, and I applaud such efforts by the YouTube team.

However, if some details of these reports are accurate, there are a couple of important provisos that I must mention.

First, the “curated/trusted” YouTube Kids video mode will supposedly be an opt-in feature — needing to be explicitly enabled (e.g., by parents).

By default, children would reportedly continue to see the algorithmic recommendations complete with the creepy contamination.

Since we’re dealing with kids viewing videos, not adults, this new human-curated mode should absolutely be the default, which could optionally be disabled by parents if they really wanted their children to see the full algorithmic flow.

The calculus when determining appropriate defaults is entirely different for children, and depending on busy parents to pay attention to these kinds of settings is problematic at best, so this is a situation where the most ethical and responsible action on Google’s part would be for the “safest” settings to prevail as defaults.

Secondly, it’s crucial in the long run that the same YouTube Kids features and content options are ultimately available not only as mobile apps but on ordinary browser platforms as well.  Most children don’t limit their video viewing only to phones!

All that said, if Google is indeed moving ahead toward human-curated and approved YouTube Kids video suggestions, this is a notably positive step, and would be an important acknowledgment by Google that in some cases, algorithms alone are insufficient to adequately deal with our complex online content ecosystems.

–Lauren–

How YouTube’s Ad Restrictions Have Gone Very Wrong

In the wake of the horrific shooting attack at YouTube headquarters, global attention has been drawn to Google’s content and monetization policies for YT, since the shooter apparently had a number of public grievances against YT in these regards (“Tragically, the YouTube Shooting Attack Is Not a Complete Surprise” – https://lauren.vortex.com/2018/04/04/tragically-the-youtube-shooting-attack-is-not-a-complete-surprise).

Part of what makes this all confusing is that Google’s recent series of YT policy changes — popularly called “Adpocalypse” — has included a number of different elements, some of which appear to have been much more appropriate than others.

The result is that many YT users who’ve been playing by the rules have been unfairly tossed into the dumpster along with the real abusers.

For example, I support Google’s moves to crack down (via demonetization and/or removal) on YT videos/channels that contain hate speech or other content that is clearly in violation of YT Terms of Service or Community Standards. In fact, I feel that Google has not gone far enough in some respects to deal with specific categories of violating, potentially dangerous content (“On YouTube, What Potentially Deadly Challenge Will Be Next?” – https://lauren.vortex.com/2018/04/02/on-youtube-what-potentially-deadly-challenge-will-be-next). I’ve also proposed techniques to help quickly detect truly abusive content (“Solving YouTube’s Abusive Content Problems — via Crowdsourcing” – https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing).

But along the way, Google made the misguided decision to drastically curtail which YT users could run ads to monetize their videos, essentially slapping “the little guys” in their faces. These users’ ads never brought in much money by Google standards, but every dollar counts to ordinary folks like you and me!

Why did Google do this? I suspect that they felt this to be a convenient time to shed the large number of small uploaders who didn’t bring in much revenue to Google. And conveniently, Google could argue (largely disingenuously, I believe)  that this was actually part of their broader anti-abuse efforts as well.

One can understand why Google would prefer not to bother evaluating small YT channels for terms compliance. But the reality is that the worst abusers often have among the largest YT followings — sometimes with millions of subscribers and/or large numbers of video views. 

By virtue of these very non-Googley and significantly draconian monetization restrictions applied to small, completely non-abusing YT channels and users, vast numbers of innocents are being condemned as if they were guilty. 

–Lauren–

Tragically, the YouTube Shooting Attack Is Not a Complete Surprise

I didn’t get much sleep last night. For many years I’ve feared the kind of attack that occurred at YouTube headquarters yesterday. Employees severely injured — the shooter dead by her own hand.

I’ve spent time looking over the attacker’s online materials — her website and available videos.

What’s immediately clear is that she had smoldering grievances against Google’s YouTube that exploded yesterday in a rampage of innocent blood and her own self-destruction. Her father apparently knew that she “hated YouTube” — and had warned police that she might be headed there.

Google will no doubt bolster its physical security in the wake of this tragedy, but of course that merely pushes the zone of risk out to the perimeters of their secure areas.

What haunts me about the shooter’s online statements is that, one way or another, I’ve seen or heard so much like them, so many times before.

For many years, Google and YouTube users have come to me in desperation when they felt that their problems or grievances were being ignored by Google. If you’ve been reading my posts for any significant length of time, you’ve seen me discussing these matters on numerous occasions.

The common thread in the stories that I hear from these users — usually by email, sometimes by phone — is a feeling of frustration, of desperation, of an inability to communicate with Google — to get what they consider to be at least a “fair shake” from the firm when they have Google-related problems.

I’ve not infrequently pondered the possibility that one day an upset, desperate Google user would become violent, potentially with deadly results, especially given the flood of easily available firearms in this country.

YouTube-related issues have typically been a big chunk of the user concerns brought to me, as have Google account access issues generally. I’ve tried to help these users when I could, e.g., please see: “The Google Account ‘Please Help Me!’ Flood” – https://lauren.vortex.com/2017/09/12/the-google-account-please-help-me-flood – and many other posts.

For well over a decade (most recently late last month) — both publicly and directly to Google — I’ve repeatedly urged the creation of Google “ombudsman” or similar roles, to provide more empowered escalation and internal policy analysis paths, and to help provide an “escape valve” for better dealing with the more serious user issues that arise. Just a couple of my related posts include:

“Why Big Tech Needs Big Ethics — Right Now!” – https://lauren.vortex.com/2018/03/24/why-big-tech-needs-big-ethics-right-now

“‘Google Needs an Ombudsman’ Posts from 2009 — Still Relevant Today” – https://lauren.vortex.com/2017/04/03/google-needs-an-ombudsman-posts-from-2009-still-relevant-today

Google has always rejected such calls for ombudsmen or similar roles. Google has said that ombudsmen might have too much power (this definitely need not be the case — these roles can be defined in a wide variety of ways). Google has insisted that ombudsman concepts couldn’t scale adequately to their ecosystem (yet other firms with very large numbers of customers have managed to employ these concepts successfully for many decades).

The reality is that Google — filled to the brim with some of the smartest and most capable people on the planet — COULD make this work if they were willing to devote sufficient time and resources to structuring such roles appropriately.

Google’s communications with their users — along with related support and policy issues — have always collectively been Google’s Achilles’ heel.

While it would be reasonable to assume that the number of aggrieved Google users inclined to physically attack Google and Googlers is extremely limited, the fact remains that desperate people driven over the edge can be expected to sometimes take desperate actions. This is not by any means to excuse such horrific actions — but these are the facts.

Google and its services have become integral parts of people’s lives — in some cases more so than even their own families.

Google turns 20 this year. It’s time for Google to truly take responsibility for these issues and to grow up.

–Lauren–

On YouTube, What Potentially Deadly Challenge Will Be Next?

Sometimes I just can’t figure out Google. A great company. Great people. But on some issues they’re just so incredibly, even dangerously “tone deaf” to serious risks that persist on their platforms.

You’ve probably already gotten tired of my discussions regarding the dangerous prank-dare-challenge videos on YouTube, e.g. “A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly” – https://lauren.vortex.com/2017/12/17/youtube-prank-dare-vast-disgusting-potentially-deadly — and related posts.

So as if the dangerous “laxative prank” and Tide Pods Challenge and an array of other nightmarish YouTube-based efforts to achieve social media fame weren’t bad enough, we now are seeing a resurgence of the even more potentially disastrous “condom snorting” videos. If you haven’t heard of this one before, you probably shouldn’t investigate the topic shortly after eating.

The usual monsters of the Internet are already proclaiming this to be much ado about nothing, pointing out that it’s not a new phenomenon, even though it has suddenly achieved viral visibility again thanks mainly to YouTube. The usual sick statements like “let natural selection take its course” and “Darwin Award at work!” are also spewing from these child-hating trolls. 

I wonder how many impressionable youths seduced into sickness or even death by these categories of videos would be viewed as too many by these sick minds?  Five? Five hundred? 

NO! One is too many!

Because by and large, these videos shouldn’t exist on YouTube at all. 

But the trolls are right about one thing — many of these videos have been on YouTube for quite some time, gradually accumulating large numbers of views along the way. And when they suddenly “pop” and go viral, they’re like landmines that have finally exploded.

These videos clearly and absolutely violate Google’s YouTube Terms of Service by demonstrating unquestionably dangerous acts.

And they’re usually trivial to find via simple YouTube searches — often in vast quantities using obvious keywords.  Since I can find them — since kids can find them — Google could certainly find them, if it really wanted to.

Google has made significant strides toward demonetizing or eliminating various forms of hate speech from YouTube. But for some reason, they seem to continue dragging their collective feet in the category of dangerous challenge, dare, and prank videos.

Google can fix this. Google MUST fix this. There simply aren’t valid excuses for this continuing, dangerous pestilence that is contaminating YouTube — one of my favorite sites on the Net — and in the process providing governments around the planet with more excuses to push knee-jerk censorship that will harm us all.

C’mon Google, please get your ass in gear, and get that crap off of YouTube. 

No more excuses. Enough is enough.

–Lauren–

EU to Domain Owners in the UK: Drop Dead!

If there were ever any remaining questions about the cruel pettiness of European Union bureaucrats and politicians — as if their use of extortionist tactics against firms like Google, and the implementation of horrific global censorship regimes like “Right To Be Forgotten” weren’t enough — the latest chapter in EU infamy should eliminate any lingering doubts.

The European Commission has now issued an edict that the over 300 thousand UK-based businesses and other UK owners of dot-EU (.eu) domain names will be kicked off of their domains — and in many cases have their websites and businesses wrecked as a result — due to Brexit.

One might readily acknowledge that the UK’s pursuit of Brexit was a historically daft and self-destructive idea, but it took the EU to treat UK businesses caught in the middle as if they were victims from one of the torture-porn “SAW” movies. The more blood and pain the merrier, right gents?

The EU pronouncement is loaded with legalistic mumbo-jumbo, but is being widely interpreted as not only saying that UK entities can’t register or even renew existing dot-EU domains after about a year from now, but that perhaps even existing registrations might be terminated as of that date as well — apparently with no right of appeal.

There’s talk that there might be a small chance of negotiations to avert some of this. But the mere fact that the EC would issue such a statement — completely at odds with the way that domain transition issues have been routinely handled on the Internet for decades — gives us vast insight into the cosmic train wreck represented by increased European Union influence over Internet policies and operations.

Just when you begin to think that the EU can’t come up with an even worse way of wrecking the Net, they fool us once again with ever more awful new lows.

Congratulations!

–Lauren–

Why Big Tech Needs Big Ethics — Right Now!

The Cambridge Analytica user trust debacle currently enveloping Facebook has once again brought into sharp focus a foundational issue that permeates Big Tech — the complex interrelationships between engineering, marketing, and ethics.

I’ve spent many years pounding on this problem, often to be told by my technologist colleagues that “Our job is just to build the stuff — let the politicians figure out the ethics!”

That attitude has always chilled me to the bone — let the *politicians* handle the ethics relating to complicated technologies? (Or anything else for that matter?) Excuse me, are we living on the same planet? On the same timeline? Hello???

So I almost choked on my coffee when I saw articles saying that Facebook was now suggesting the need for government regulation of their operations – aka – “Stop us before we screw our users yet again!”

The last thing we need is the politicians involved. They by and large don’t understand what we’re doing, and they generally operate on the basis of image and political expediency. Politicians touching tech is typically poison.

But the status quo of Big Tech is untenable also. Google is a wonderful firm with great ideals, but with continuing user support and accessibility problems. Facebook strikes me, frankly, as having a basically evil business model. Apple is handing user data and crypto keys over to the censoring Chinese dictatorship. Microsoft, and the rest — who the hell knows from day to day?

One aspect that they’ve all shared is the “move fast and break things” mantra of Silicon Valley, and a tendency to operate on the basis that “you never want to ask permission, just apologize later if things go wrong.”

These attitudes just aren’t going to work going forward. These firms (and their users!) are now in the crosshairs of the politicians, who see rigorous regulation of these firms as key to their political futures, and who intend to accomplish this by making Big Tech “the fall guy” for a range of perceived evils — smoothing the way for various forms of micromanaged, government-imposed information control and censorship.

As we’ve already seen in Russia, China, and even increasingly in Europe, this is indeed the path to tyranny. Assuming that the USA is invulnerable to these forces would be stupidity to the max.

For too long, user support and ethical questions have had second-class status at most tech firms. It’s not that these concerns don’t exist at all, it’s that they’re often very low in the product priority hierarchies.

This must change.

Ethics, user trust, and user support issues must proactively rise to the top of these hierarchies, lest opportunistic politicians leverage the existing situation for the imposition of knee-jerk “solutions” that will not only seriously damage these firms, but will ultimately be devastating to their users and broader communities as well.

Corporate roles have long existed in various “traditional” industries — industries that long ago learned how to avoid being easily steamrolled by the politicians — to help avoid these dilemmas.

Full-time ethicists and ombudsmen, for example, can play crucial roles in these respects, by helping firms to understand the cross-product, cross-team implications of their projects in relation to internal needs, user requirements, and overall effects on the world at large.

Many Internet-related firms have resisted the idea of accepting these roles within their corporate ranks, believing that their other management and public relations employees can fulfill those functions.

But in reality — and the continuing Facebook privacy disasters are but one set of examples — it takes a specific kind of longitudinal, cross-team approach to seriously, adequately, and successfully address these escalating issues.

Another argument heard against ombudsman and ethicist roles involves concerns about their supposedly having “veto” power over product decisions. This is a fallacious argument. These roles need not imply any sort of launch or other veto abilities, and can be purely advisory in terms of internal policy decisions. But having the input of persons with these skill sets in the ongoing decision-making process is still crucial — and lacking at many of these major firms.

The time is short for firms to grasp the nettle in these regards. Politicians around the world — not just in traditional tyrannies — are taking advantage of the publicly perceived ethical and user support problems at these firms.

All through human history, governments have naturally gravitated toward controlling the information available to citizens — sometimes with laudable motives, always with horrific results.

Internet technologies provide governments with a veritable and irresistible “candy store” of possibilities for government-imposed censorship and other information control.

A key step that these firms must take to help stave off such dark outcomes is to move immediately to make Big Ethics a key part of their corporate DNA.

To do otherwise, or even to hesitate toward making such changes, could easily be tantamount to total surrender.

–Lauren–

Seriously, It’s Time to Ditch Facebook and Give Google+ a Try

One might think that with the deluge of news about how Facebook has been manipulating you and violating your privacy — and neglecting to tell you about it — Google would be taking this opportunity to point out that their own Google+ social system is very much the UnFacebook.

But sometimes Google is reticent about tooting their own horn. So what the hell, when it comes to Google+, I’m going to toot it for them.

Frankly, I’ve never trusted Facebook, and current events seem to validate those concerns yet again. Facebook is fundamentally designed to exploit users in particularly devious and disturbing ways (please see: “Fixing Facebook May Be Impossible” – https://lauren.vortex.com/2018/03/18/fixing-facebook-may-be-impossible).

Yet I’ve been quite happily communicating virtually every day with all manner of fascinating people about a vast range of topics over on Google+ (https://plus.google.com/+LaurenWeinstein), since the first day of beta availability back in 2011.

The differences between Facebook and Google+ are numerous and significant. There are no ads on Google+. Nobody can buy their way into your feed or pay Google for priority. Google doesn’t micromanage what you see. Google doesn’t sell your personal information to any third parties.

There’s overall a very different kind of sensibility on G+. There’s much less of people blabbing about the minutiae of their own lives all day long (well, perhaps except when it comes to cats — I plead guilty!), and much more discussion of issues and topics that really matter to more people. There’s much less of an emphasis on hanging around with those high school nitwits whom you despised anyway, and much more a focus on meeting new persons from around the world for intelligent discussions.

Are there any wackos or trolls on G+? Yep, they’re out there, but they never represent more than a small fraction of total interactions, and the tools are available to banish them in short order. 

There is much more of a sense of community among G+ users, without the “I hate it but I use it anyway” feeling so often expressed by Facebook users. Facebook posts all too often seem to be about “me” — G+ posts more typically are about “us” — and tend to be far more interesting as a result.

At this juncture, the Google-haters will probably start to chime in with their usual bizarre conspiracy theories. Other than suggesting that they remove their tinfoil hats so that their scalps can breathe, I can’t do much for them.

Does Google screw up from time to time? Yes. But so does Facebook, and in far, far more egregious ways. Google messes up occasionally and works to correct what went wrong. Unfortunately, not only does Facebook make mistakes, but the entire philosophy of Facebook is dead wrong — a massive, manipulative violation of users’ personal information and communications on a gargantuan scale. There simply is no comparison.

And I’ll note here what should be obvious — I wouldn’t use G+ (or other Google services) if I weren’t satisfied with the ways that they handle my data. Having consulted to Google, I have a pretty decent understanding of how this works, and I know many members of their world-class privacy team personally. If only most firms gave their customers the kinds of control over their data that Google does (“The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about).

But whether or not you decide to try Google+, please don’t keep playing along with Facebook’s sick ecosystem. Facebook has been treating its users like suckers since day one, and there’s damned little to suggest that they’re on anything other than an increasingly awful trajectory.

And that’s the truth.

–Lauren–

Fixing Facebook May Be Impossible

In the realm of really long odds, let’s imagine that Facebook CEO Mark Zuckerberg contacted me with this request: “Lauren, we’re in big trouble over here. I’ll do anything that you suggest to get Facebook back on the road of righteousness! Just name it and it’ll be done!”

Beyond the fact that this scenario is even less likely than Donald Trump voluntarily releasing his tax returns (though perhaps not by much!), I’m unsure that I’d have any practical ideas to help out Zuck.

The foundational problem is that any solutions with any significant chance of success would mean fundamentally changing the Facebook ecosystem in ways that would probably make it almost unrecognizable compared with their existing status quo.

Facebook is founded and structured almost entirely on the concept of straitjacketing users into narrow “walled gardens” of information, tailoring on an individual basis what they see in the most manipulative ways possible.

Perhaps even worse, Facebook permits posts to be “promoted” — that is, being visible in users’ feeds when they might not otherwise have appeared in those feeds — if you pay Facebook enough money.

Contrasting these fundamentals with Google’s social media operations is instructive.

For example, while you can buy ads to appear in conjunction with search results on Google (but never mixed in with the organic results themselves), there are no ads on Google+, nor is there any way to pay Google to promote Google+ posts.

Google’s major focus — their 20th birthday is this year — has always been on making the most information possible available in an organized way — the explicit goal of Google’s founding duo.

On the other hand, Facebook’s focus has always centered on tightly supervising and controlling the information that their victims — oops, sorry — users see. Given that Zuck originally founded Facebook as a means to avoid dating what he considered to be “ugly” women, we shouldn’t be at all surprised.

I’ve never had an active Facebook account (I do have a “stealth” account that I use so that I can survey Facebook pages, user interfaces, and similar aspects of the service that are only available to logged-in users — but I never post anything there.)

Yet I’ve never felt in any way deprived by not being an active Facebook user.

I frequently hear from people who tell me that they really hate Facebook, but that they keep using it because their friends or relatives don’t want to bother communicating with them any other way. That’s just … sad. 

But it’s not a valid excuse in the long run.

Perhaps even more to the point today, Facebook’s operating model makes it enormously vulnerable to ongoing manipulation by Russia and its affiliated entities (such as Donald Trump, his campaign, and his minions) toward undermining western democracies. 

Crucially though, this vulnerability is not the result of an accidental flaw in Facebook’s design. Rather, Facebook’s entire ecosystem is predicated on encouraging the manipulation of its users by third parties who possess the skills and financial resources to leverage Facebook’s model.

These are not aberrations at Facebook — they are exactly how Facebook was designed to operate. As the saying goes: “Working as intended!”

Yes, I could probably make some useful suggestions to Zuck. Ways to vastly improve their abysmal privacy practices. Reminding them that lying to regulators is always a bad idea. And an array of other positive propositions. 

But the reality is that for Facebook to actually, seriously implement these would entail a wholesale restructuring of what Facebook does and what they currently represent as a firm — and it’s almost impossible to see that voluntarily happening.

So I really just don’t have any good news for Zuck along these lines.

And that’s the truth.

–Lauren–

The Controversial CLOUD Act: Privacy Plus or Minus?

Over the last few days you may have seen a bunch of articles about the “CLOUD Act” — recently introduced U.S. bipartisan legislation that would overhaul key aspects of how foreign government requests for the data of foreign persons held on the servers of U.S. companies would be handled.

I’m frequently being asked for my position on this, and frankly the analysis has not been a simple one.

Opponents, including the EFF, the ACLU, and a variety of other privacy and civil rights groups, argue that the legislation eases access to such data by foreign governments and represents a dangerous erosion of privacy rights.

Proponents, including Apple, Facebook, Google, Microsoft, and Oath (Yahoo/Verizon) argue that the CLOUD Act provides much needed clarity to the technically and legally confused mess regarding transborder data requests, and introduces new privacy and transparency protections of its own.

One thing is for sure — the current situation IS a mess and completely unsustainable going forward, with ever escalating complicated legal entanglements (e.g. the ongoing Microsoft Ireland case, with a pending Supreme Court decision likely to go against Microsoft’s attempts at promoting transborder privacy) and ever more related headaches in the future.

Cutting to the chase, I view the CLOUD Act as flawed and imperfect, but still on balance a useful effort at this time to move the ball forward in an exceedingly volatile global environment.

This is particularly true given my concerns about foreign governments’ increasing demands for “data localization” — where their citizens’ data would be stored under conditions that would frequently be subject to far fewer privacy protections than would be available under either current U.S. law or the clarified provisions of the CLOUD Act. In the absence of the CLOUD Act, such demands are certain to rapidly accelerate.

One of the more salient discussions of the CLOUD Act that I’ve seen lately is: “Why the CLOUD Act is Good for Privacy and Human Rights” (https://www.lawfareblog.com/why-cloud-act-good-privacy-and-human-rights). Regardless of how you feel about these issues, the article is well worth reading.

Let’s face it — nothing about the Net is simple.

–Lauren–

Why YouTube’s New Plan to Debunk Conspiracy Videos Won’t Work

YouTube continues to try to figure out ways to battle false conspiracy videos that rank highly on YouTube — sometimes even making the top trending lists — and that can spread to ever more viewers via YouTube’s own “recommended videos” system. I’ve offered a number of suggestions for dealing with these issues, most recently in “Solving YouTube’s Abusive Content Problems — via Crowdsourcing” (https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing).

YouTube has now announced a new initiative that they’re calling “information cues” — which they hope will address some of these problems.

Unfortunately, this particular effort (at least as being reported today) is likely doomed to be almost entirely ineffective.

The idea of “information cues” is to provide false conspiracy YouTube videos with links to Wikipedia pages that “debunk” those conspiracies. So, for example, a video claiming that the Florida student shooting victims were actually “crisis actors” would presumably show a link to a Wikipedia page that explains why this wasn’t actually the case.

You probably already see the problems with this approach.

We’ll start with the obvious elephant in the room. The kind of viewers who are going to believe these kinds of false conspiracy videos are almost certainly going to say that the associated Wikipedia articles are wrong, that they’re planted lies. FAKE NEWS!

Do we really believe that anyone who would consider giving such videos even an inch of credibility is going to be convinced otherwise by Wikipedia pages? C’mon! If anything, such Wikipedia pages may actually serve to reinforce these viewers’ beliefs in the original false conspiracy videos!

Not helping matters at all is that Wikipedia’s reputation for accuracy — never all that good — has been plunging in recent years, sometimes resulting in embarrassing Knowledge Panel errors for Google in search results.

Any Wikipedia page that is not “protected” — that is, where the ordinary change process has been locked out — is subject to endlessly mutating content editing wars — and you can bet that any editable Wikipedia pages linked by YouTube from false conspiracy videos would become immediate high visibility targets for such attacks.

If there’s one thing that research into this area has already shown quite conclusively, it’s that the people who believe these kinds of garbage conspiracy theories are almost entirely unconvinced by any factual information that conflicts with their inherent points of view.

The key to avoiding the contamination caused by these vile, lying, false conspiracy videos is to minimize their visibility in the YouTube/Google ecosystem in the first place.

Not only should they be prevented from ever getting into the trending lists, they should be deranked, demonetized, and excised from the YouTube recommended video system. They should be immediately removed from YouTube entirely if they contain specific attacks against individuals or other violations of the YouTube Terms of Service and/or Community Guidelines. These actions must be taken as rapidly as possible with appropriate due diligence, before these videos are able to do even more damage to innocent parties.

Nothing less can keep such disgusting poison from spreading.

–Lauren–

Solving YouTube’s Abusive Content Problems — via Crowdsourcing

We all know that the long knives are out by various governments regarding YouTube content. We know that Google is significantly increasing the number of workers who will review YT abuse reports.

But we also know that the volume of videos in the uploading firehose is going to continue leaving very large numbers of abusive videos online that may quickly achieve high numbers of views, even if YT employs techniques that I’ve previously urged, such as human review of videos that are about to go onto the trending lists before they actually do so.

The scale of videos is enormous — yet the scale of viewing users is also very large.

Is there some way to leverage the latter to help deal with abusive content in the former, as a proactive effort to help keep government censorship of YT at bay?

YT already has a “Trusted Flaggers” program that gives abuse review priority to videos that these users have flagged. But (as far as I know) this only applies to videos that these users have happened to find and see of their own volition. 

I don’t have the hard data to prove this, but I have a strong suspicion that vast numbers of users would be willing to participate as organized volunteer proactive “screeners” of a sort for YT, especially if there were even some minor financial incentive for their participation (think in terms of a small amount of Play Store credit, for example).

What if public videos that were suddenly attracting significant numbers of views (“significant” yet to be defined) were pushed to some odd number (to avoid ties) of such volunteer viewers who have undergone appropriate online training regarding YT’s Terms of Use? We’d require that they actually view reasonable amounts of these videos (yes, there would be ways to attempt gaming this, but remember we’re talking about very large numbers of volunteers, so much of that risk should wash out if care is used in tracking analysis).

They vote/rate the videos acceptable or not. If the majority vote a video as unacceptable, it gets pushed to the formal G abuse screeners for a decision. If any given volunteer is found over time to be providing bad decisions, they’re dropped from the program.

Most videos would have small enough numbers of views to never even enter this system. But it would provide a middle ground to help deal with videos that are suddenly getting more visibility *before* they can cause big problems, and this technique doesn’t rely on random viewers taking the initiative to flag abusive videos (and for that matter figuring out how to flag them, since flagging is not typically a top level YT user interface element these days, as I’ve previously noted).

Since participants in this program would not have any control over which specific videos they’d be pushed for a vote, and since again we’d be talking about quite large numbers of participants (and we’d be monitoring their performance over time), the ability to purposely claim that nonabusive videos were abusive (or the reverse) would be minimized.

No video would have action taken against it unless it had also been declared abusive by a regular YT screener in the pipeline after the volunteer screeners down-voted a video — providing even more protection.
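
For concreteness, here’s a minimal sketch in Python of how such a triage pipeline might be wired together. Every name, threshold, and scoring rule below is purely my own hypothetical illustration; I have no knowledge of YouTube’s actual internals:

    import random

    VIEW_VELOCITY_THRESHOLD = 10_000  # hypothetical "significant" views/hour trigger
    PANEL_SIZE = 5                    # odd number of volunteers, to avoid ties
    MIN_RELIABILITY = 0.7             # volunteers scoring below this are dropped

    class VolunteerPool:
        def __init__(self, volunteer_ids):
            # Track each volunteer's running agreement rate with staff decisions.
            self.scores = {v: 1.0 for v in volunteer_ids}

        def draw_panel(self):
            # Volunteers never choose their own videos; panels are drawn at
            # random (assumes a pool much larger than PANEL_SIZE).
            eligible = [v for v, s in self.scores.items() if s >= MIN_RELIABILITY]
            return random.sample(eligible, PANEL_SIZE)

    def screen_video(video, pool, get_vote, staff_review):
        # The long tail of low-velocity videos never enters the system.
        if video["views_per_hour"] < VIEW_VELOCITY_THRESHOLD:
            return "not screened"
        panel = pool.draw_panel()
        votes = {v: get_vote(v, video) for v in panel}  # True = "unacceptable"
        if sum(votes.values()) > PANEL_SIZE // 2:
            # A majority flagged it: escalate. Only the formal staff
            # screeners can actually take action against the video.
            final = staff_review(video)
            for v, vote in votes.items():
                # Nudge each volunteer's reliability toward their agreement
                # with the staff outcome; persistent bad actors drop out.
                agreed = vote == (final == "abusive")
                pool.scores[v] = 0.9 * pool.scores[v] + 0.1 * float(agreed)
            return final
        return "acceptable"

Note that the volunteers only gate escalation here; the actual decision (and the reliability scoring that flows from it) remains with the formal screeners, as described above.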

How to define abusive videos is of course a separate discussion relating directly to the YT Terms of Service, but this could include the kinds of content violations that we all know about in relation to YT (hate speech, dangerous pranks and dares, threats, etc.), and even areas such as obvious obnoxious Content ID evasions (e.g., program/movie video inset boxes against random backgrounds, artificial program run time variations, and so on).

I do realize that this is a fairly radical concept and that there are all manner of details that aren’t considered in this brief summary. But I am increasingly convinced that it’s going to take some sort of new approach to help deal with these problems proactively, and to help forestall governments from moving in and wrecking the wonderful YouTube ecosystem with escalating politically motivated demands and threats.

–Lauren–

The Ethics of Google and the Pentagon Drones

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks Google for doing the right thing for Google, Googlers, and the community at large.

UPDATE (May 31, 2018): Google and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

– – –

Many years ago, I was the systems guy who ran the early UNIX minicomputers in the basement of Santa Monica’s RAND Corporation. While RAND at the time derived the vast majority of its income from Department of Defense contracts, I was there despite my lifelong refusal to work directly on military-related projects (to the significant detriment of my own income, I might add). RAND spoke truth to power. DoD could contract with RAND for a report on some given topic, but RAND wouldn’t skew a report to reach results that the contractor had hoped for. I admired that.

One midday I was eating lunch in an open patio between the offices there, chatting with a couple of the military research guys. At the time, one focus of DoD interest was use of mainframe and minicomputer systems to analyze battlefield data, such as it was back then. My lunchmates assured me that their work was all defensive in nature.

I asked how they could be sure that the same analytical systems they intended for defense couldn’t also be used by the military for actually killing people. “We have to trust them,” came the reply. “The technology is inherently dual use.”

It seemed to me that battlefield data analysis was fundamentally different from the DoD-funded projects I also worked on — with ARPANET being the obvious example. Foundational communications research is not in the same category as calculating how to more efficiently kill your enemy. At least that’s how I felt at the time, and I still feel that way. There’s nothing inherently evil in accepting money from DoD — the ethical issues revolve around the specifics of the projects involved.

Fast forward to the controversy that has arisen today, about which I’ve been flooded with queries — word that Google has been engaged in “Project Maven” for DoD, using Google AI/Machine Learning tech to analyze footage from military drones. Apparently this wasn’t widely known even internally at Google, until the topic recently found its way to internal discussion groups and then leaked to the public. Needless to say, there reportedly has been quite considerable internal controversy about this, to say the least.

“How do you feel about this, Lauren?” I’m being asked.

Since I frequently play armchair ethicist, I’ve been giving this question a lot of thought today.

The parallels with that lunch discussion at RAND so long ago seem striking.  The military wanted to analyze battlefield data back then, and they want to analyze military drone data now.

There are no simple answers.

But we can perhaps begin with the problem of innocent civilian deaths resulting from U.S. drone strikes. We know that the designated terrorist targets are frequently purposely embedded in civilian areas, and often travel with civilians who have little or no choice in the matter — such as children and other family members.

While the Pentagon (as they did during the Vietnam war) makes a grand show about body counts, it’s not clear that most of these drone strikes have much long-term anti-terrorism impact. The targets are frequently fungible — kill one leader and another moves right in. Liquidate one bomb maker and the position is quickly filled by another.

So, ethical question #1: Are these drone strikes justifiable at all? To answer this question honestly, we must of course consider the rate of collateral civilian deaths and injuries, which are sure to inspire further anti-U.S. rhetoric and attacks.

My personal belief is that in most cases — at least to the extent that we in the public are aware — the answer to this question is generally no.

Which brings us to ethical question #2 (or rather, a set of questions): Does supplying advanced image processing and analysis systems for military drone data fall into an ethically acceptable category, provided that such analysis is not specifically oriented toward targeting for lethal operations? Can it be reasonably argued that more precise targeting could also help to prevent civilian casualties, even when those civilians are in immediate proximity to the intended targets? Or is providing such facilities also ethical even if direct lethal operations are known in advance to be the likely result, toward the advancement of currently stated U.S. interests?

And after all, much of our technology today can be easily repurposed in ways that we technologists had not intended — for example, for oppressive governments to surveil and censor their own citizens.

Yet the immense potential power of rapidly advancing AI and Machine Learning systems does cast these kinds of issues in a new and qualitatively different kind of light. And that’s even if we leave aside a business-based analysis that some firms might make, noting that if they don’t provide the services, some other company will do so anyway, and get the contracts as well as the income.

I know absolutely nothing about Google’s participation in Project Maven other than what I’ve seen in public sources today.

But to try to address the gist of my own questions from just above, based on what I know right now, I believe that Google has a significant ethical quandary on their hands in this regard.

I personally doubt that this kind of powerful tech can be constrained through contractual relationships to purely defensive use. I also feel that the decision regarding whether or not any given firm is willing to accept that its technology may be used for lethal purposes is one that should be made “eyes wide open” — and is worthy of nothing less than a significant level of company-wide consensus before proceeding.

It has been ages since I even thought about that long ago lunch conversation at RAND. It’s indeed disquieting to be thinking about it again today.

Be seeing you.

–Lauren–

Why I Finally Dumped Netflix (and Love FilmStruck/Criterion)

UPDATE (November 16, 2018):  New, Independent Criterion Channel to Launch Spring 2019

– – –

UPDATE (October 26, 2018): Warner Media — controlled by those sick bastards at AT&T since the horrific merger — is shutting down FilmStruck on November 29th. AT&T: Always finding new ways to enrich ourselves and screw you. Thank you for using AT&T!

– – –

Yesterday was my last day subscribing to Netflix. Miss them, I will not. I had been meaning to kill the subscription for quite some time, finally pulled the trigger a couple of weeks ago, and the final days ran out at the end of February.

It’s been painful to watch Netflix’s escalating deterioration and hubris. After arguably putting movie rental stores out of business almost single-handedly, Netflix decided that they no longer really cared about classic films.

Netflix CEO Reed Hastings wants to play Hollywood movie mogul for himself. So Netflix has been decimating its online catalog of classic, quality films, and replacing them with a cavalcade of mediocre productions. Their corpus of classic television has been going in the same direction for ages now.

What’s more, Netflix is spending billions of dollars — reportedly $8 billion just this year — to produce its own stream of mostly unwatchable films and series — which they continuously promote through app screensavers and in every other way possible.

It’s gotten to the point that whenever you hear the characteristic loud “thum thum!” that precedes a Netflix production, you know it’s time to move on.

That’s not to say that Netflix doesn’t occasionally produce a quality film or show — but the ratio is awful, and seems to be mostly of the “stopped clock is correct twice a day” variety.

Their “You might like this, Lauren!” recommendations stink. You can dig through their online listings for ages and find nothing even remotely worth your time.

Bye bye Netflix.

Luckily for those of us who care about classic films and quality films in general, there’s a superb online alternative — FilmStruck/Criterion:

https://www.filmstruck.com

FilmStruck is a service of Turner Broadcasting, who also produce the always excellent Turner Classic Movies (TCM) channel, of which I’ve been a fan since its inception many years ago. 

I subscribed to FilmStruck (and their wonderful Criterion Collection add-on) some weeks ago, around the same time that I issued my Netflix cancellation (Netflix and FilmStruck/Criterion pricing are very similar, by the way).

One of the best entertainment-related decisions I’ve ever made.

It would be fair to call F/C something of a TCM on super-steroids (and in fact, F/C has just now begun to integrate some new F/C intros from TCM hosts, and classic materials from the TCM archives — super!)

Are there downsides? Well, in all honesty F/C’s website is pretty slow and clunky. Their device apps need significant work. While you can run three simultaneous video streams, there’s no mechanism for separate users per se. 

I don’t care. All of that logistical stuff will certainly improve with time. 

Once the video streams are running they look great. Films are in HD whenever possible and are in reasonable aspect ratios. There are no “ID bugs” on the screen during films (and here I’ll also note that TCM has always had a policy of keeping their ID bugs to an absolute minimum — just a few seconds at a time occasionally during films, which is also very much appreciated).

The depth and breadth of F/C’s superb online catalog of classic and independent films are breathtaking.

But there’s a lot more there than the individual movies. There are curated collections of films. Often there are all manner of “extras” — not only the kinds of additional materials familiar from DVDs like commentary tracks, discussions, and other original features, but F/C-produced materials as well.

It really is a classic film lover’s paradise.

What’s more, a few days ago it was announced that Warner Bros. was shutting down their own standalone streaming service, and transferring their vast library of hundreds of classic films to F/C — some of those have already become available and they’re great. I started into them yesterday with “Forbidden Planet” and “Casablanca” — and that’s just barely scratching the surface, of course.

Anyway, you get the idea. If you’re happy with the kind of putrid porridge that has become Netflix’s stock-in-trade these days, more power to you — enjoy.

But if you care about great films, about classic films — I urge you to give FilmStruck/Criterion a try (there’s a 14 day free trial, and you can view via a range of mobile and streaming devices, including Chromecast, Roku, etc.)

Sorry Netflix. That’s show biz!

–Lauren–

A Proposal to Google: How to Stop Evil from Trending on YouTube

Late last year, in the wake of the Las Vegas shooting tragedy (I know, keeping track of USA mass shootings is increasingly difficult when they’re increasingly common) I suggested in general terms some ways that YouTube could avoid spreading disinformation and false conspiracy theories after these kinds of events:

“Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news

I’ve also expressed concerns that YouTube’s current general user interface does not encourage reporting of hate or other abusive videos:

“How YouTube’s User Interface Helps Perpetuate Hate Speech” – https://lauren.vortex.com/2017/03/26/how-youtubes-user-interface-helps-perpetuate-hate-speech

Now, here we are again. Another mass shooting. Another spew of abusive, dangerous, hateful, and false conspiracy videos on YouTube that clearly violated YouTube’s Terms of Service, yet still managed to climb high on YouTube’s trending lists — this time aimed at vulnerable student survivors of the Florida high school tragedy of about a week ago.

Google has stated that one of the worst of these reached top trending status through an automated misclassification: an embedded clip from a legitimate news report “tricked” YouTube’s algorithms into treating the entire video as legitimate.

No algorithms are perfect, and YouTube’s scale is immense. But this all raises the question — would a trained human observer have made the same mistake?

No. It’s very unlikely that a human who had been trained to evaluate video content would have been fooled by such an embedding technique.
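To see how this kind of trick works, consider a toy example (the scores, threshold, and per-segment framing below are purely hypothetical illustrations, not YouTube’s actual classifier): if an automated system effectively judges a video by its strongest legitimacy signal, a single embedded newscast clip can vouch for the entire upload, whereas judging by the weakest segment would hold the same video for review.

    # Toy illustration only -- hypothetical per-segment "legitimacy" scores,
    # not YouTube's real classifier or data.
    segments = [0.95, 0.20, 0.15, 0.10]  # 0.95 is the embedded newscast clip

    THRESHOLD = 0.5  # hypothetical "looks legitimate" cutoff

    by_best_segment = max(segments)     # 0.95: the embedded clip dominates
    by_weakest_segment = min(segments)  # 0.10: the surrounding content

    print(by_best_segment >= THRESHOLD)     # True  -> the trick succeeds
    print(by_weakest_segment >= THRESHOLD)  # False -> the video is held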

Of course, as soon as anyone mentions “humans” in relation to the analysis of YouTube videos, questions of scale pop immediately into focus. Hundreds of hours of content are uploaded to YouTube every minute, and YouTube’s scope is global, so this data firehose includes videos concerning pretty much any conceivable topic in a vast array of languages.

Yet Google is not without major resources in this regard. They’ve publicly noted that they have sizable teams to review videos that users have flagged as potentially abusive, and have announced that they are in the process of expanding those teams.

Still, the emphasis to date seems to have been on removing abusive videos “after the fact” — often after they’ve already quickly achieved enormous view counts and done significant damage to victims.

A more proactive approach is called for.

One factor to keep in mind is that while very large numbers of videos are continuously pouring into YouTube, the vast majority of them will never quickly achieve high view counts. These make up the massive “long tail” of YouTube videos.

Conversely, at any given time only a relative handful of videos are going “viral” and accumulating large numbers of views in very short periods of time.

While any and all abusive videos are of concern, as a practical matter we need to direct most of our attention to those trending videos that can do the most damage the most quickly.  We must not permit the long tail of less viewed videos to distract us from promptly dealing with abusive videos that are currently being seen by huge and rapidly escalating numbers of viewers.

YouTube employees need to be more deeply “in the loop” to curate trending lists much earlier in the process.

As soon as a video goes sufficiently viral to technically “qualify” for a trending list, it should be immediately pushed to humans — to the YouTube abuse team — for analysis before the video is permitted to actually “surface” on any of those lists.

If the video isn’t abusive or otherwise in violation of YouTube rules, onto the trending list it goes and it likely won’t need further attention from the team. But if it is in violation, the YouTube team would proactively block it from ever going onto trending, and would take other actions related to that video as appropriate (which could include removal from YouTube entirely, strikes or other actions against the uploading YouTube account, and so on).
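To make the proposed flow concrete, here is a minimal sketch (all names, types, and thresholds are hypothetical; this illustrates the gating idea, not anything resembling YouTube’s actual infrastructure). The key property is that qualifying for trending merely enqueues a video for human review, and nothing surfaces until a reviewer signs off:

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Video:
        video_id: str
        views_per_hour: int = 0
        reviewed: bool = False

    # Hypothetical "qualifies for trending" threshold (views per hour).
    TRENDING_VELOCITY = 10_000

    review_queue = Queue()  # feeds the human abuse-review team
    trending = []           # the list actually shown to users

    def on_velocity_update(video):
        # Called whenever a video's view velocity is recomputed.
        if video.views_per_hour >= TRENDING_VELOCITY and not video.reviewed:
            # Hold here: nothing surfaces on trending until a human signs off.
            review_queue.put(video)

    def record_verdict(video, abusive):
        # Entered by an abuse-team member after actually viewing the video.
        video.reviewed = True
        if abusive:
            pass  # block from trending; removal, strikes, etc., as appropriate
        else:
            trending.append(video)  # safe to surface

Note that the review load stays manageable precisely because, as discussed above, only a relative handful of videos cross such a threshold at any given time.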

There simply is no good reason today for horrifically abusive videos appearing on YouTube trending lists, and even worse in some cases persisting on those lists for hours, even rising to top positions — giving them enormous audiences and potentially doing serious harm.

Yes, fixing this will be significant work.

Yes, this won’t be cheap to do.

And yes, I believe that Google has the capabilities to accomplish this task.

The dismal alternative is the specter of knee-jerk, politically-motivated censorship of YouTube by governments, actions that could effectively destroy much of what makes YouTube a true wonder of the world, and one of my all-time favorite sites on the Internet.

–Lauren–

Why the Alt-Right Loves Google’s Diversity Conundrum


Google seems to be taking hits from all sides these days, and the announcement of another “diversity” lawsuit directed at the firm by an ex-employee only adds to the escalating mix.

The specific events related to these suits all postdate my consulting inside Google some years ago, but I know a lot of Googlers — among the best people I know, by the way — and I still have a pretty good sense of how Google’s internal culture functions.

Google is in a classic “damned if you do and damned if you don’t” position right now, exacerbated by purely political forces (primarily of the alt-right) that are attempting to leverage these situations to their own advantage — and ultimately to the disadvantage of Google, Google’s users, and the broader community at large.

This all really began with Google’s completely justified firing of alt-right darling James Damore after he internally promulgated what is now widely known as his “anti-diversity” memo.

The crux of the matter — as I see it, anyway — is that while Google’s internal discussion culture is famously vibrant and open (I can certainly attest to that myself!) — Google still has a corporate and ethical responsibility to provide a harassment-free workplace. That’s why Damore’s memo resulted in his termination.

But “harassment” (at least in a legal sense) doesn’t necessarily only apply to one side of these arguments.

To put this into more context, I need only think of various corporate environments I’ve seen over my career, where it would have been utterly unthinkable to have the level of open discussion that Google not only permits but actively encourages. At many firms today, Google’s internal openness in this regard would simply be prohibited.

Many Googlers have never experienced these more typical corporate workplaces, where open discussion of a vast range of topics is impractical or prohibited.

Yet even in an open discussion environment like Google’s, there have to be some limits. This is particularly true with personnel issues like diversity, which not only involve complex legal matters but can also be extremely sensitive personally to individual employees.

The upshot of all this — in my opinion — is that “public” internal personnel discussions per se are generally inappropriate for any corporate environment, given the current legal and toxic political landscapes. Evil forces are ready and willing to latch onto any leaks to further their own destructive agendas, as I discussed in “How the Alt-Right Plans to Control Google” — https://lauren.vortex.com/2017/09/29/how-the-alt-right-plans-to-control-google — and in other posts.

Personnel matters are much better suited to direct and private communications with corporate HR than to widely viewed internal discussion forums.

This isn’t a happy analysis for me. Most of us either know victims of harassment or have been harassed one way or another ourselves. And it’s clear that the kinds of harassment most in focus today are largely being encouraged by alt-right perpetrators, up to and including the sociopath currently in the Oval Office.

But in the long run, acting impulsively on our gut instincts in these regards — however noble those instincts may be — can be positively disastrous to our attempts to stop harassment and other evils. How and where these discussions take place can be fully as important as the actual contents of the discussions themselves. Insisting on holding such discussions within inappropriate environments, especially when complicated laws and “go for the jugular” external politics are involved, is typically very much a losing tactic.

Overall, I believe that Google is handling this situation in pretty much the best ways that are actually possible today.

–Lauren–

“How-To” Videos — The Unsung Heroes of YouTube!


With so much criticism lately being directed at the more “unsavory” content on YouTube that I’ve discussed previously, it might be easy to lose track of why I’m still one of YouTube’s biggest fans.

Anyone could be forgiven for forgetting that despite the highly offensive and even dangerous videos that can attract millions of views and understandable public scrutiny, there are many other types of YT videos that attract far less attention but collectively do an incalculable amount of good.

One example is YT’s utterly enormous collection of legitimate and incredibly helpful “How-To” videos — covering a breathtaking array of topics.

I’m not referring here to “formal” education videos — though these are also present in tremendous numbers and are usually very welcome indeed. Nor am I referring to product installation and similar videos often posted by commercial firms — though these, too, are often genuinely useful.

Rather, today I’d like to highlight the wonders of “informal” YT videos that walk viewers through the “how-to” or other explanatory steps of pretty much any topic imaginable: computers, electronics, plumbing, automotive repair, homemaking, hobbies, sports — seemingly everything under the sun.

These videos are typically created by a cast and crew of one, often without any formal on-screen titles, background music, or other “fancy” production values.

It’s not uncommon never to see the faces of these videos’ creators. Often you’ll just see their hands at a table or workbench — and hear their informal voice narration — as they proceed through the steps of whatever topic they wish to share.

These videos tend with remarkable frequency to begin with the creator saying “Hi guys!” or “Hey guys!” — and often when you find them they’ll only have accumulated a few thousand views or even fewer.

I’ve been helped by videos like these innumerable times over the years, likely saving me thousands of dollars and vast numbers of wasted hours — permitting me to accomplish projects by myself that would otherwise have been expensive to hire out, and helping me to avoid costly repair mistakes as well.

To my mind, these kinds of “how-to” creators and their videos aren’t just among the best parts of YouTube; they’re also shining stars that represent much of what, many years ago, we hoped the Internet would become.

These videos are the result of individuals simply wanting to share knowledge to help other people. These creators aren’t looking for fame or recognition — typically their videos aren’t even monetized.

These “how-to” video makers are among the very best not only of YouTube and of the Internet — but of humanity in general as well. The urge to help others is among our species’ most admirable traits — something to keep in mind when the toxic wasteland of Internet abuses, racism, politicians, sociopathic presidents — and all the rest — really start to get you down.

And that’s the truth.

–Lauren–

Facebook’s Very Revealing Text Messaging Privacy Fail


As I’ve frequently noted, one of the reasons it can be difficult to convince users to provide their phone numbers for account recovery and/or 2-step, multiple-factor authentication/verification login systems is that many persons fear that the firms involved will abuse those numbers for other purposes.

In the case of Google, I’ve emphasized that their excellent privacy practices and related internal controls (Google’s privacy team is world class) make any such concerns utterly unwarranted.

Such is obviously not the case with Facebook. They’ve now admitted that a “bug” caused mobile numbers provided by users for multiple-factor verification to also be used for spamming those users with unrelated text messages. Even worse, when users replied to those texts, their replies frequently ended up being posted on their own Facebook feeds! Ouch.

What’s most revealing here is what this situation suggests about Facebook’s own internal privacy practices. Proper proactive privacy design would have compartmentalized those phone numbers and associated data in a manner that would have prevented a “bug” like this from ever triggering such abuse of those numbers.
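To make “compartmentalized” concrete, here is a minimal sketch (all names are hypothetical; this illustrates the design principle, not Facebook’s or anyone else’s actual code). Numbers collected for login verification live behind an interface whose only outward-facing operation is sending a login code, so no other subsystem, buggy or otherwise, even has an API through which to repurpose them:

    class TwoFactorNumberVault:
        """Holds phone numbers that may ONLY be used to send login codes."""

        def __init__(self):
            self._numbers = {}  # user_id -> phone number (2FA use only)

        def enroll(self, user_id, number):
            self._numbers[user_id] = number

        def send_login_code(self, user_id, code):
            _send_sms(self._numbers[user_id], "Your login code is " + code)

        # Deliberately absent: get_number(), iteration, bulk export. A
        # notifications or marketing subsystem can't even ask for a number.

    def _send_sms(number, message):
        print("SMS to " + number + ": " + message)  # stand-in for a real gateway

With that kind of separation in place, a bug in the messaging or notifications code can misbehave all it wants; it simply has no path to the verification numbers.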

Facebook’s sloppiness in this regard has now been exposed to the entire world.

And naturally this raises a much more general concern.

What other sorts of systemic privacy design failures are buried in Facebook’s code, waiting for other “bugs” capable of freeing them to harass innocent Facebook users yet again?

These are all further illustrations of why I don’t use Facebook. If you still do, I recommend continuous diligence regarding your privacy on that platform — and lotsa luck — you’re going to need it!

–Lauren–