DeJoy Is Hell-Bent on Wrecking the Postal Service — and Maybe Your Life

While we’re all still reeling from the recent horrific, tragic, and utterly preventable mass shootings, events are taking place today inside the D.C. Beltway that could put innumerable medically challenged Americans at deep risk — and the culprit is Louis DeJoy, the U.S. Postal Service (USPS) Postmaster General and Trump megadonor.

His 10-year plan for destroying the USPS — by treating it like his former for-profit shipping logistics business rather than the SERVICE it was intended to be — was released today, along with a flurry of self-congratulatory official USPS tweets that immediately attracted massive negative replies, most of them demanding that DeJoy be removed from his position. Now. Right now!

I strongly concur with this sentiment.

Even as first class and other mail delays have already been terrifying postal customers dependent on the USPS for critical prescription medications and other crucial products, DeJoy’s plan envisions even longer mail delays — including additional days of delay for delivery of local first class mail, banning first class mail from air shipping, raising rates, cutting back on post office hours, and — well, you get the idea.

Fundamentally, the plan is simple: destroy the USPS via “death by a thousand cuts” — leaving businesses and individuals without the wherewithal to pay for much more expensive commercial carriers to slowly twist in the wind.

President Biden has taken some initial steps by nominating several new members to the USPS Board of Governors (who must be confirmed by the Senate), and this could ultimately enable DeJoy’s ouster (since only the board can fire him directly) — but we do not have the time for that process to play out.

Biden has apparently been reluctant to take the “nuclear option” of firing DeJoy’s supporters on the board — they can be fired “for cause” — but many observers assert that their complicity in this DeJoy plan to wreck USPS services would be cause enough.

One thing is for sure. The kinds of changes that DeJoy is pushing through would be expensive and time consuming to unwind later on. And in the meantime, everybody — businesses and ordinary people alike — will suffer greatly at DeJoy’s hands. 

President Biden should act immediately to take any and all legal steps to get DeJoy out of the USPS before DeJoy can do even more damage to us all.

–Lauren–

How the “News Link Wars” Could Wreck the Web

As it stands right now, major news organizations — in league with compliant politicians around the world — seem poised to use the power of their national governments to take actions that could absolutely destroy the essentially open Web, as we’ve known it since Sir Tim Berners-Lee created the first operational web server and client browser at CERN in 1990.

Australia — home of the right-wing Rupert Murdoch empire — is in the lead of pushing this nightmarish travesty, but other countries around the world are lining up to join in swinging wrecking balls at Web users worldwide. 

Large Internet firms like Facebook and Google, feeling pressure to protect their income streams more than to protect their users, are taking varying approaches toward this situation, but the end result will likely be the same in any case — users get the shaft.

The underlying problem is that news organizations are now demanding to be paid by firms like Google and Facebook merely for being linked from them. The implications of this should be obvious — it creates the slippery slope where more and more sites of all sorts around the world would demand to be paid for links, with the result that the largest, richest Internet firms would likely be the last ones standing, and competition (along with choices available to users) would wither away. 

The current situation is still in considerable flux — seemingly changing almost hour by hour — but the trend lines are clear. Google had originally taken a strong stance against this model, rightly pointing out how it could wreck the entire concept of open linking across the Web, the Web’s very foundation! But at the last minute, it seems that Google lost its backbone, and has been announcing payoff deals to Murdoch and others, which of course will just encourage more such demands. At the moment Facebook has taken the opposite approach, and has literally cut off news from their Australian users. The negative collateral effects that this move has created make it unlikely that this can be a long-term action.

But what we’re really seeing from Facebook and Google (and other large Internet firms who are likely to be joining their ranks in this respect) — despite their differing approaches at the moment — is essentially their floundering around in a kind of desperation. They don’t really want (and/or don’t know how) to address the vast damage that will be done to the overall Web by their actions, beyond their own individual ecosystems. From a profit center standpoint this arguably makes sense, but from the standpoint of ordinary users worldwide it does not.

To use the vernacular, users are being royally screwed, and that screwing has only just begun.

Some observers of how the news organizations and their government sycophants are pushing their demands have called these actions blackmail. There is one universal rule when dealing with blackmailers — no matter how much you pay them, they’ll always come back demanding more. In the case of the news link wars, if the current path is continued, the end result will be demands encompassing the entire Web — users be damned.

–Lauren–

The Big Lie About “Cancel Culture” and Demands to Change Section 230

Claims of “cancel culture” seem to be everywhere these days. Almost every day, we seem to hear somebody complaining that they have been “canceled” from social media, and pretty much inevitably there is an accompanying claim of politically biased motives for the action.

The term “cancel culture” itself appears to have been pretty much unknown until several years ago, and seems to have morphed from the term “call-out culture” — which ironically is generally concerned with someone getting more publicity than they desire, rather than less.

Be that as it may, cancel culture complaints — the lion’s share of which emanate from the political right wing — are now routinely used to lambaste social media and other Internet firms, asserting that their actions are aimed at political statements with which the firms disagree and (according to these accusations) seek to suppress.

However, even a casual inspection of these claims suggests that the actual issues in play are hate speech, violent speech, and dangerous misinformation and disinformation — not political viewpoints — and formal studies reinforce this observation, e.g. “False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives.”

Putting aside for now the fact that the First Amendment applies only to government actions against speech, even a cursory examination of the data — confirmed by more rigorous analysis — reveals not only that right-wing entities are overwhelmingly the source of most such dangerous speech (though they are by no means the only source; there are sources on the left as well), but also that conservatives overall retain prominent visibility on social media platforms, dramatically calling into question the claims of “free speech” violations.

Inextricably intertwined with this are various loud, misguided, and dangerous demands for changes to (and in some cases total repeal of) Communications Decency Act Section 230, the key legislation that makes all forms of Internet UGC — User Generated Content — practical in the first place.

And here we see pretty much equally unsound proposals (largely completely conflicting with each other) from both sides of the political spectrum, often apparently based on political motives and/or a dramatic ignorance of the negative collateral damage that would be done to ordinary users if such proposals were enacted.

The draconian penalties associated with various of these proposals — aimed at Internet firms — would almost inevitably lead not to the goals actually desired by the right or left, but rather to the crushing of ordinary Internet users, by vastly reducing (or even eliminating entirely) the amount of their content on these platforms — that is, the videos they create, comments, discussion forums, and everything else users want to share with others.

The practical effect of these proposals would be not to create more free speech or simply reduce hate and violent speech, misinformation and disinformation, but to make it impractical for Internet platforms to support user content — which is vast in scale beyond the imagination of most persons — in anything like the ways it is supported today. The risks would just be too enormous, and methodologies to meet the new demanded standards — even if we assume the future deployment of advanced AI systems and vast new armies of proactive moderators — do not exist and likely could never exist in a practical and affordable manner.

This is truly one of those “be careful what you wish for” moments — like asking the newly released genie to “fix social media,” only to have him, with a wave of his hand, eliminate the ability of anyone in the public — prominent or not, on the right or the left — to share their views or other content.

So as we see, complaints about social media are driven largely by highly political arguments, but in reality involve enormously complex technical challenges at gigantic scales — many of which we don’t yet fundamentally understand, especially given the toxic political culture of today.

As much as nobody would likely argue that Section 230 is perfect, I have yet to see any realistic proposals to change it that would not make matters far worse — especially for ordinary users who largely don’t understand how much they have to lose in these battles. 

Like democracy itself, which has been referred to as “the worst possible system of governance, except for all the others” — buying into the big lie of cancel culture and demands to alter Section 230 is wrong for the Internet and would be terrible for its users.

–Lauren–

The Challenges of Moderating User Content on the Internet (and a Bit of History)

I increasingly suspect that the days of large-scale public distribution of unmoderated UGC (User Generated Content) on the Internet may shortly begin drawing to a close in significant ways. The most likely path leading to this over time will be a combination of steps taken independently by social media firms and future legislative mandates.

Such moderation at scale may follow the model of AI-based first-level filtering, followed by layers of human moderators. It seems unlikely that today’s scale of postings could continue under such a moderation model, but future technological developments may well turn out to be highly capable in this realm.
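The layered model described above can be sketched in a few lines. To be clear, this is a purely illustrative toy — the classifier, the thresholds, and the scoring are hypothetical stand-ins, not any platform’s actual system — but it shows the basic triage idea: automation handles the clear-cut cases at scale, and only the uncertain middle band is escalated to human moderators.

```python
# Illustrative sketch of tiered content moderation: an automated
# first-pass classifier routes only uncertain items to human reviewers.
# The classifier, thresholds, and blocklist are hypothetical stand-ins.

def toxicity_score(text):
    """Stand-in for an ML classifier: returns a risk score in [0, 1].
    Here we simply count hits against a tiny illustrative blocklist."""
    flagged = {"threat", "slur"}
    words = set(text.lower().split())
    return min(1.0, len(words & flagged) * 0.6)

def triage(post, approve_below=0.2, remove_above=0.9):
    """First-level filter: auto-approve clear cases, auto-remove extreme
    ones, and escalate everything in between to human moderators."""
    score = toxicity_score(post)
    if score < approve_below:
        return "approved"
    if score > remove_above:
        return "removed"
    return "human_review"
```

The economics of such a system hinge on how wide the “human_review” band must be: the less accurate the first-level filter, the more items land in the expensive middle tier — which is exactly why today’s posting volumes may not survive such a model unchanged.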

Back in 1985 when I launched my “Stargate” experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national “Superstation WTBS,” I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet. My main related concerns back then did not involve hate speech or violent speech — which were not significant problems on the Net at that point — but human nature being what it is I felt that the situation was likely to get much worse rather than better.

What I had largely forgotten in the decades since then, though — until I did a Google search on the topic today (a great deal of original and later information on Stargate is still online, including various of my relevant messages in very early mailing list archives that will likely long outlive me) — is the level of animosity about that decision that I received at the time. My determination for Stargate to only carry moderated groups triggered cries of “censorship,” but I did not feel that responsible moderation equated with censorship — and that is still my view today.

And now, all these many years later, it’s clear that we’ve made no real progress in these regards. In fact, the abuse of unmoderated content in hateful and dangerous ways makes the content problems that most concerned me back then seem like a soap bubble popping compared with a nuclear bomb detonating now.

We must solve this. We must begin serious and coordinated work in this vein immediately. And my extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage.

Time is of the essence.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab (repost with new introduction)

The post below was originally published on 10 August 2019. In light of recent events — particularly the storming of the United States Capitol by a violent mob, resulting in five deaths, and subsequent actions by major social media firms relating to outgoing President Donald Trump (terms of service enforcement actions by these firms that I do endorse under these extraordinary circumstances) — I feel that the original post is again especially relevant. While threats of moves by the Trump administration against CDA Section 230 are now moot, it is clear that 230 will be a central focus of Congress going forward, and it’s crucial that we all understand the risks of tampering with this key legislation, which is foundational to the availability of responsible speech and content on the Internet. –Lauren–

– – – – – – – – – –

The Right’s (and Left’s) Insane Internet Content Power Grab
(10 August 2019)

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither are practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Recommendation: Do Not Install or Use Centralized Server Coronavirus (COVID-19) Contact Tracing Apps

Everyone, I hope you and yours are safe and well during this unprecedented pandemic.

As I write this, various governments are rushing to implement — or have already implemented — a wide range of different smartphone apps purporting to be for public health COVID-19 “contact tracing” purposes. 

The landscape of these is changing literally hour by hour, but I want to emphasize MOST STRONGLY that these apps are not all created equal, and I urge you not to install various of them unless you are required to by law — which can indeed be the case in countries such as China and Poland, to name two examples.

Without getting into deep technical details here, there are basically two kinds of these contact tracing apps. The first is apps that send your location or other contact-related data to centralized servers (whether the data being sent is claimed to be “anonymous” or not). Regardless of promised data security and professed limitations on government access to and use of such data, I do not recommend voluntarily choosing to install and/or use these apps under any circumstances.

The other category of contact tracing apps uses local phone storage and never sends your data to centralized servers. This is by far the safer category, and it includes the recently announced Apple-Google Bluetooth contact tracing API, being adopted in some countries (including Germany, which just announced that due to privacy concerns it has changed course from its original plan of using centralized servers). In general, installing and using these local-storage contact tracing apps is vastly less problematic and far safer than using centralized server contact tracing apps.
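The privacy property of the decentralized approach can be illustrated with a toy model. To be explicit about assumptions: nothing below follows the actual Apple-Google specification — key sizes, intervals, and the derivation function are simplified stand-ins — but the core idea is real: phones broadcast short-lived identifiers derived one-way from a secret key that never leaves the device, remember what they hear locally, and only later match those observations against daily keys voluntarily published by infected users.

```python
# Much-simplified sketch of decentralized exposure notification.
# All matching happens on the device; no central server ever sees
# who was near whom. Parameters here are illustrative only.

import hashlib
import os

def rolling_id(daily_key, interval):
    """Derive the broadcast identifier for one time interval from the
    phone's secret daily key (one-way, so the key is not revealed)."""
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

class Phone:
    def __init__(self):
        self.daily_key = os.urandom(16)   # secret; stays on the device
        self.heard = set()                # identifiers seen nearby, stored locally

    def broadcast(self, interval):
        return rolling_id(self.daily_key, interval)

    def observe(self, identifier):
        self.heard.add(identifier)

    def check_exposure(self, published_daily_keys, intervals_per_day=144):
        """Re-derive identifiers from the published keys of infected users
        and match them against what this phone heard -- entirely locally."""
        for key in published_daily_keys:
            for i in range(intervals_per_day):
                if rolling_id(key, i) in self.heard:
                    return True
        return False
```

Note what the design buys: even if the published keys of infected users are public, an observer cannot reconstruct anyone’s contact graph, because the record of encounters never leaves each individual phone — precisely the property the centralized-server model gives up.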

Even if you personally have 100% faith that your own government will “do no wrong” with centralized server contact tracing apps — either now or in the future under different leadership — keep in mind that many other persons in your country may not be as naive as you are, and will likely refuse to install and/or use centralized server contact tracing apps unless forced to do so by authorities.

Very large-scale acceptance and use of any contact tracing apps are necessary for them to be effective for genuine pandemic-related public health purposes. If enough people won’t use them, they are essentially worthless for their purported purposes.

As I have previously noted, various governments around the world are salivating at the prospect of making mass surveillance via smartphones part of the so-called “new normal” — with genuine public health considerations as secondary goals at best.

We must all work together to bring the COVID-19 disaster to an end. But we must not permit this tragic situation to hand carte blanche permissions to governments to create and sustain ongoing privacy nightmares in the process. 

Stay well, all.

–Lauren–

Coronavirus Reactions Creating Major Internet Security Risks

As vast numbers of people are suddenly working from home in reaction to the coronavirus pandemic, doctors switch to heavy use of video office visits, and in general more critical information than ever is suddenly being thrust onto the Internet, the risks of major security and privacy disasters that will long outlast the pandemic are rising rapidly. 

For example, the U.S. federal government is suspending key aspects of medical privacy laws to permit use of “telemedicine” via commercial services that have never been certified to be in compliance with the strict security and privacy rules associated with HIPAA (Health Insurance Portability and Accountability Act). The rush to provide more remote access to medical professionals is understandable, but we must also understand the risks of data breaches that once having occurred can never be reversed.

Sloppy computer security practices that have long been warned against are now coming home to roost, and the crooks as usual are way ahead of the game.  

The range of attack vectors is both broad and deep. Many firms have never prepared for large-scale work at home situations, and employees using their own PCs, laptops, phones, or other devices to access corporate networks can represent a major risk to company and customer data. 

Fake web sites purporting to provide coronavirus information and/or related products are popping up in large numbers around the Net, all with nefarious intents to spread malware, steal your accounts, or rob you in other ways.

Even when VPNs (Virtual Private Networks) are in use, malware on employee personal computers may happily transit VPNs into corporate networks. Commercial VPN services introduce their own risk factors, both due to potential flaws in their implementations and the basic technical limitations inherent in using a third-party service for such purposes. Whenever possible, corporate users should avoid third-party VPN services, and firms and other organizations using VPNs should deploy “in-house” VPN systems if they truly have the technical expertise to do so safely.

But far better than VPNs are “zero trust” security models such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), which can provide drastically better security without the disadvantages and risks of VPNs.

There are even more basic issues in focus. Most users still refuse to enable 2-factor (aka “2-step”) verification systems (https://www.google.com/landing/2step/) on services that support it, putting them at continuous risk of successful phishing attacks that can result in account hijacking and worse. 
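Why does 2-step verification blunt phishing? Because the second factor is typically a short-lived code derived from a shared secret and the current time (TOTP, standardized in RFC 6238), a stolen static password alone is not enough, and a phished code expires within seconds. The standard computation is small enough to show in full, using only the Python standard library:

```python
# Minimal RFC 6238 time-based one-time password (TOTP) computation,
# using HMAC-SHA1 per the standard. The secret is shared once (e.g. via
# a QR code at enrollment); thereafter both sides derive matching
# short-lived codes from it and the current time.

import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code for the given shared secret."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp) // step           # 30-second time window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Running this against the RFC 6238 test secret (the ASCII bytes “12345678901234567890”) at time 59 yields the documented test value, which is a useful sanity check for any implementation.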

I’ve been writing about all of this for many years here in this blog and in other venues. I’m not going to make a list here of my many relevant posts over time — they’re easy enough to find. 

The bottom line is that the kind of complacency that has been the hallmark of most firms and most users when it comes to computer security is even less acceptable now than ever before. It’s time to grow up, bite the bullet, and expend the effort — which in some cases isn’t a great deal of work at all! — to secure your systems, your data, and yes, your life and the lives of those that you care about.

Stay well.

–Lauren–

Iowa Screams: Don’t Trust High-Tech Elections!

For years — actually for decades — those of us in the Computer Science community who study election systems have with almost total unanimity warned against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. I and my colleagues have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!

We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.

Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!

Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who don’t have the ability to spend so much time engaged in a process that can take hours rather than a few minutes to cast their votes. Not only should the Democratic party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process. 

In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app), individual voters weren’t using their smartphones to vote; rather, caucus locations — almost 1700 of them in Iowa — were supposed to use the app (which melted down) to report their results. And of course the voice phone call system designated as the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.

Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.

Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic conspiracy theories environment. 

It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?

We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.

Make your choice.

–Lauren–

How Some Software Designers Don’t Seem to Care About the Elderly

One of the most poignant ironies of the Internet is that at the very time that it’s become increasingly difficult for anyone to conduct their day-to-day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — the visually and/or motor impaired are just two examples — but the elderly are a particular target.

Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going. 

Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult, and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now depend on these technologies.

This is a difficult state of affairs, to say the least.

There’s an aspect of this that is even worse. It’s attitudes! The attitudes of many software designers suggest that they really don’t care much about this class of users — or don’t care at all.

They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function. 

We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.

However, we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.

A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.

Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.

I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. Yet what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!

This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.

Yet this morning we engaged in the following tweet thread:

Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)

Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!

Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.

Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.

Somewhere around this point, he closed down the dialogue by blocking me on Twitter.

This was indeed his choice, but it seems a bit sad, given that I had previously had more fruitful discussions of this matter on the main Discourse discussion forum itself.

Of course his anti-vaxx comparison is inherently flawed. There are a variety of programs to help people who can’t otherwise afford important vaccinations to receive them. By comparison, vast numbers of elderly persons (often living in isolation) are on their own when dealing with their computers.

The world will keep spinning after Discourse drops IE support.

Far more important than this particular case, though, is the attitude being expressed by so many in the software community: an attitude suggesting that many highly capable software engineers don’t really appreciate these users, or the kinds of real-world problems that can prevent them from making even relatively simple changes or upgrades to systems they need to keep using as much as anyone else.

And that’s an unnecessary tragedy.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Another Breach: What Capital One Could Have Learned from Google’s “BeyondCorp”

Another day, another massive data breach. This time some 100 million people in the U.S., and more millions in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS, but with a misconfigured firewall associated with Capital One, the bank whose credit card customers and card applicants were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better solutions to providing network security already exist.

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates traditional firewalls (and in nearly all instances, VPNs) because there is no longer any need for such devices or systems that, once breached, give an attacker access to internal goodies.
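As a rough illustration of the zero-trust idea (this is my own minimal sketch, not Google's actual BeyondCorp implementation; the policy fields and names here are entirely hypothetical), the key point is that an access decision in such a model depends on user identity and device state, never on network location:

```python
# Illustrative sketch only -- not Google's actual BeyondCorp implementation.
# The core idea: every request is evaluated on user identity and device
# state, never on network location (there is no "trusted internal network").

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g., verified via SSO plus second factor
    device_managed: bool          # device is inventoried and policy-compliant
    device_patched: bool          # OS/browser meet minimum version policy
    resource_sensitivity: str     # "low" or "high"

def access_decision(req: AccessRequest) -> bool:
    """Grant access only if user and device independently satisfy policy.

    Note that the request's source network never appears here: a request
    from inside the corporate LAN gets no more trust than one from a
    coffee shop, which is what makes a breached perimeter far less
    useful to an attacker.
    """
    if not (req.user_authenticated and req.device_managed):
        return False
    if req.resource_sensitivity == "high" and not req.device_patched:
        return False
    return True

# A compliant user/device pair is allowed in; an unmanaged device is not,
# even if it happens to sit on the "internal" network.
print(access_decision(AccessRequest(True, True, True, "high")))   # True
print(access_decision(AccessRequest(True, False, True, "low")))   # False
```

The contrast with a firewall model is the point: there is no single misconfigurable perimeter whose breach exposes everything behind it.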

If Capital One had been following BeyondCorp principles, there’d likely be 100+ million fewer potentially panicky people today.

–Lauren–

Earthquakes vs. Darth Vader

When the Ridgecrest earthquake reached L.A. yesterday evening (no damage this far from the epicenter from that quake or the one the previous day) I was “in” a moving elevator under attack in the “Vader Immortal” Oculus Quest VR simulation. I didn’t realize that there was a quake at all; everything seemed part of the VR experience (haptic feedback in the hand controllers was already buzzing my arms at the time).

The only oddity was that I heard a strange clinking sound that at the time had no obvious source, but that I figured was somehow part of the simulation. Actually, it was probably the sound of ceiling fan knob chains above me hitting the glass light bulb fixtures as the fan presumably swayed a bit.

Quakes of this sort are actually very easy to miss if you’re not sitting or standing quietly (I barely felt the one the previous day and wasn’t immediately sure that it was a quake), but I did find my experience last night to be rather amusing in retrospect.

By the way, “Vader Immortal” — and the Quest itself — are very, very cool, very much 21st century “sci-fi” tech finally realized. My thanks to Oculus for sending me a Quest for my experiments.

–Lauren–

YouTube’s Public Videos Dilemma

So there’s yet another controversy surrounding YouTube and videos that include young children — this time concerns about YouTube suggesting such videos to “presumed” pedophiles.

We can argue about what YouTube should or should not be recommending to any given user. There are some calls for YT to not recommend such videos when it detects them (an imperfect process) — though I’m not convinced that this would really make much difference so long as the videos themselves are public.

But here’s a more fundamental question:

Why the hell are parents uploading videos of young children publicly to YouTube in the first place?

This is of course a subset of a more general issue — parents who apparently can’t resist posting all manner of photos and other personal information about their children in public online forums, much of which is going to be at the very least intensely embarrassing to those children when they’re older. And the Internet rarely ever forgets anything that was ever public (the protestations of EU politicians and regulators notwithstanding).

There are really only two major possibilities concerning such video uploads. Either the parents don’t care about these issues, or they don’t understand them. Or perhaps both.

Various apps and web pages exist that will automatically display YT videos that have few or no current views from around the world. There’s an endless stream of these. Thousands. Millions? Typically these seem as if they have been automatically uploaded by various camera and video apps, possibly without any specific intentions for the uploading to occur. Many of these involve schools and children.

So a possible answer to my question above may be that many YT users — including parents of young children — are either not fully aware of what they are uploading, or do not realize that the uploads are public and are subject to being suggested to strangers or found by searching. 

This leads us to another question. YT channel owners already have the ability to set their channel default privacy settings and the privacy settings for each individual video. 

Currently those YT defaults are initially set to public.

Should YT’s defaults be private rather than public?

Looking at it from a user trust and safety standpoint, we may be approaching such a necessity, especially given the pressure for increased regulatory oversight from politicians and governments, which in my opinion is best avoided if at all possible.

These questions and their ramifications are complex to say the least.

Clearly, default channel and video privacy would be the safest approach, ensuring that videos would typically only be shared with specific other users deemed suitable by the channel owner.

All of the public sharing capabilities of YT would still be present, but would require the owner to make specific decisions about the channel default and/or individual video settings. If a channel owner wanted to make some or all of their videos public — whether videos already uploaded or those going forward — that would be their choice. Full channel and individual video privacy would only be the original defaults, purely as a safety measure.
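To make the proposal concrete, here is a minimal sketch (entirely hypothetical; these setting names are mine, not YouTube's actual configuration model) of how a private-by-default scheme might resolve a video's effective visibility:

```python
# Hypothetical sketch of the private-by-default proposal discussed above.
# The setting names are illustrative, not YouTube's actual API or options.

def effective_visibility(video_setting, channel_default):
    """A video's own explicit setting wins; otherwise the channel
    default applies. Under the proposal, that default starts private."""
    return video_setting if video_setting is not None else channel_default

# An upload with no explicit choice stays private under a private default...
print(effective_visibility(None, "private"))      # "private"
# ...and going public always requires a specific decision by the owner.
print(effective_visibility("public", "private"))  # "public"
```

The design point is simply that public exposure becomes an opt-in action rather than a side effect of never touching the settings.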

Finer-grained settings might also be possible, not only including existing options like “unlisted” videos, but also specific options to control the visibility of videos and channels in search and suggestions.

Some of the complexities of such an approach are obvious. More controls means the potential for more user confusion. Fewer videos in search and suggestions limits visibility and could impact YT revenue streams to both Google and channel owners in complex ways that may be difficult to predict with significant accuracy.

But in the end, the last question here seems to be a relatively simple one. Should any YouTube uploaders ever have their videos publicly available for viewing, search, or suggestions if that was not actually their specific and informed intent?

I believe that the answer to that question is no.

Be seeing you.

–Lauren–

A Major New Privacy-Positive Move by Google

Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:

The Google Page That Google Haters Don’t Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google’s “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Today Google announced a new feature that I’ve long been hoping for — the option to automatically delete these kinds of data after specific periods of time have elapsed (3-month and 18-month options). And of course, you still have the ability to use the longstanding manual features for control and deletion of such data whenever you desire, as described at the links mentioned above.

The new auto-delete feature will be deployed over coming weeks first to Location History and to Web & App Activity.

This is really quite excellent. It means that you can take advantage of the customization and other capabilities that are made possible by leaving data collection enabled, but if you’re concerned about longer term storage of that data, you’ll be able to activate auto-delete and really get the best of both worlds without needing to manually delete data yourself at intervals.

Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow. 

My kudos to the Google teams involved!

–Lauren–

Could AI Help Prevent Mass Shootings?

Could machine learning/AI techniques help to prevent mass shootings or other kinds of terrorist attacks? That’s the question. I do not profess to know the answer — but it’s a question that as a society we must seriously consider.

A notable relatively recent attribute of many mass attacks is that the criminal perpetrators don’t only want to kill, they want as large an audience as possible for their murderous activities, frequently planning their attacks openly on the Internet, even announcing online the initiation of their killing sprees and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.

It’s particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. There can be several reasons for this lack of reporting. Users may be unsure whether or not the posts are serious, and don’t want to report someone over a faked attack scenario. Other users may want to report but not know where to report such a situation. And there may be other users who are actually urging the perpetrator onward to the maximum possible violence.

“Freedom of speech” and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they have actually occurred may possibly be viewed as a kind of “big data” problem.

We can relatively easily list some of the factors that would need to be considered in these respects.

What level of resources would be required to keep an “automated” watch on at least the public postings and sites most likely to harbor the kinds of discussions and “attack manifestos” of concern? Could tools be developed to help separate false positive, faked, forged, or other “fantasy” attack postings from the genuine ones? How would these be tracked over time to include other sites involved in these operations, and to prevent “gaming” of the systems that might attempt to divert these tools away from genuine attack planning?

Obviously — as in many AI-related areas — automated systems alone would not be adequate by themselves to trigger full-scale alarms. These systems would primarily act as big filters, and would pass along to human teams their perceived alerts — with those teams making final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
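The filter-then-review pattern can be sketched very simply (purely illustrative; a real system would rely on trained ML models rather than this toy keyword scorer, and the flag terms and thresholds here are hypothetical):

```python
# Purely illustrative sketch of the "big filter" pattern described above.
# A real system would use trained classifiers, not this toy keyword scorer.

def threat_score(post):
    """Crude stand-in for an ML classifier: fraction of flag terms present."""
    flag_terms = ["attack", "manifesto", "target", "weapon"]  # hypothetical
    text = post.lower()
    hits = sum(1 for term in flag_terms if term in text)
    return hits / len(flag_terms)

def filter_for_review(posts, threshold=0.5):
    """Pass only high-scoring posts along to the human review team.

    The automated stage never makes a final determination; it just
    shrinks the haystack so that humans can examine what remains and
    decide on any referral to law enforcement.
    """
    return [p for p in posts if threat_score(p) >= threshold]

queue = filter_for_review([
    "Posting my attack manifesto before I reach the target",
    "Great concert last night!",
])
print(queue)  # only the first post is escalated for human review
```

Note that the threshold trades false positives against false negatives, which is exactly why the human team, not the filter, makes the final call.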

It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.

The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information that is already available toward stopping these attacks before they actually occur.

To the extent that AI/machine learning technologies — in concert with human analysis and decision-making — may possibly provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.

–Lauren–

Pressuring Google’s AI Advisory Panel to Wear a Halo Is Very Dangerous

UPDATE (April 4, 2019): Google has announced that due to the furor over ATEAC (their newly announced external advisory panel dealing with AI issues), they have dissolved the panel entirely. As I discuss in the original post below, AI is too important for our typical political games — and closed-minded unwillingness to even listen to other points of view — to hold sway, and such panels are potentially an important part of the solution to that problem. As I noted, I disagree strenuously with the views of the panel member (and their own organization) that was the focus of the intense criticism that apparently pressured Google into this decision, but I fear that an unwillingness to permit such organizations to even be heard at all in such venues will come back to haunt us mightily in our toxic political environment.

 – – –

Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).

A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels. 

Google announced its own such panel, the “Advanced Technology External Advisory Council” (ATEAC), last week. 

Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of prominent right-wing think tank Heritage Foundation president Kay Cole James. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now pulled out from ATEAC, apparently due to James’ presence on the panel and the resulting protests.

This is all extraordinarily worrisome. 

While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of “yes men” in agreement with more left-wing (and admittedly my own) philosophies regarding social issues strikes me as vastly more dangerous.

Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.

“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.

Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of “stacking the deck” to favor preconceived outcomes, but would in fact be doing exactly that, opening up the firms involved to attacks by haters and pandering politicians who would just love to impose draconian regulatory regimes for their own benefits. 

The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.

I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.

We can certainly argue regarding the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice. 

AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.

–Lauren–

Don’t Blame YouTube and Facebook for Hate Speech Horrors

Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto. 

While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.

Let’s be very clear about this. While it can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which the purveyors of hate speech and lying propaganda were leveraging their platforms, they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting, including taking various specific actions regarding takedowns, video suggestions, and other related issues as recommended publicly by various observers including myself.

Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.

In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be altered and re-uploaded in a myriad of ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human assets will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, or more likely ever.
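A toy example helps show why this is hard (this is my own sketch; real platforms use far more sophisticated perceptual fingerprinting systems). An exact cryptographic hash changes completely under even an imperceptible re-encoding, while a perceptual fingerprint, here a miniature "average hash" over a frame's pixel values, tolerates small alterations:

```python
# Illustrative sketch of why re-uploaded copies are hard to catch: an exact
# hash changes completely under tiny edits, while a perceptual fingerprint
# (here, a toy "average hash" over a frame's pixel values) tolerates them.

import hashlib

def exact_hash(frame):
    """Cryptographic hash: any single-bit change yields a different digest."""
    return hashlib.sha256(bytes(frame)).hexdigest()

def average_hash(frame):
    """1 bit per pixel: is it brighter than the frame's mean brightness?"""
    mean = sum(frame) / len(frame)
    return "".join("1" if p > mean else "0" for p in frame)

original = [200, 180, 30, 25, 190, 210, 20, 15]   # toy 8-pixel frame
altered  = [201, 180, 30, 25, 190, 209, 20, 15]   # imperceptibly re-encoded

print(exact_hash(original) == exact_hash(altered))      # False: evades an exact-match blocklist
print(average_hash(original) == average_hash(altered))  # True: still matched perceptually
```

Even perceptual matching can be defeated by sufficiently aggressive cropping, mirroring, or overlays, which is why detection remains an arms race rather than a solved problem.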

Meanwhile, other demands being bandied about are equally specious.

Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.

Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are equally utterly impractical. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous numbers of videos being uploaded every minute. Not only would full previews be required — since a prohibited clip could be spliced into permitted footage — there would still be misidentifications. 

Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evil from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.

Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.

But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem. 

Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.

In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).

The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?

The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites. 

They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.

You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.

The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.

The battle lines are drawn. 

–Lauren–

As Google’s YouTube Battles Evil, YouTube Creators Are at a Crossroads

UPDATE (February 28, 2019): More updates on our actions related to the safety of minors on YouTube

 – – –

For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!

The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.

YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.

And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.

YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of any mere mortal beings, with the exception of Googlers like the YouTube SREs themselves who keep the wheels spinning for the entire massive mechanism.

In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.

I’ve discussed all of these aspects of YouTube — and my both positive and negative critiques regarding how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.

I will note again though that — especially of late — Google has become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removal of dangerous “prank and dare” videos, demonetization and general de-recommendation of false “conspiracy” videos, and, just announced, demonetization and other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos. 

This must be an even more intense time than usual for the YouTube policy folks up in San Bruno at YouTube HQ — because over the last few days yet another massive controversy regarding YouTube has erupted, this time one that has been bubbling under the surface for a long time, and suddenly burst forth dramatically and rather confusingly as well, involving the “hijacking” of innocent YouTube videos’ comments by pedophiles.

YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. Many YouTube uploaders — I’ll refer to them as creators going forward in this post — are effectively oblivious to comments even on their own videos, which, given that the default setting for YouTube videos is to permit comments without any moderation, has become an increasingly problematic issue.

My own policy (started as soon as the functionality to do so became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.

In fact, a few minutes ago when I took another look at that first YouTube video (“Me at the zoo”) to make sure that I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up on there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spams and scams.

Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.

There have already been a bunch of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault for suggesting more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault, the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos. 

It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.

I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.

To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.

The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.

While I’d anticipate that Google is working on methods to better filter comments algorithmically at scale, to try to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” to target innocent individual YouTube creators and videos, in an attempt to trigger YouTube enforcement actions against those innocent parties.
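To give a sense of what even the crudest algorithmic comment triage looks like, here’s a minimal sketch in Python. To be clear: the patterns, function names, and “hold vs. publish” buckets below are purely illustrative assumptions on my part — this is not YouTube’s actual system, which presumably combines machine-learned classifiers, account reputation signals, and human review.

```python
import re

# Purely illustrative heuristics -- NOT YouTube's actual filtering logic.
SUSPICIOUS_PATTERNS = [
    re.compile(r"sub(?:scribe)?\s*(?:4|for)\s*sub", re.IGNORECASE),  # "sub 4 sub" spam
    re.compile(r"https?://\S+"),                                     # external links
    re.compile(r"check\s+out\s+my\s+channel", re.IGNORECASE),        # channel promotion
]

def flag_comment(text: str) -> bool:
    """Return True if a comment matches any crude spam/abuse heuristic."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

def moderate(comments: list[str]) -> tuple[list[str], list[str]]:
    """Split comments into (held_for_review, auto_published) buckets."""
    held = [c for c in comments if flag_comment(c)]
    published = [c for c in comments if not flag_comment(c)]
    return held, published
```

Even this toy version illustrates the core tension: any filter aggressive enough to catch abuse at scale will inevitably hold some legitimate comments for review, and any filter lax enough to avoid that will let abuse through.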

This could easily create a terrible dilemma. For safety’s sake, these innocent creators may be forced to disable comments completely, in the process eliminating much of the value of their videos to their viewers. On the other hand, many creators of high viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.

A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible. 

Perhaps it might be necessary to consider permanently changing the default comments setting away from “allowed” — to either “not allowed” or “moderated” — for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy change are difficult to predict.

Improved tools to aid creators in moderating comments on high viewership videos would also seem to be in order — perhaps leveraging third-party services or trusted viewer communities.

There are a variety of other possible approaches as well.

It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads — one that will likely require significant changes to navigate successfully, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained and grow.

–Lauren–

Another Positive Move by YouTube: No More General “Conspiracy Theory” Suggestions

A few weeks ago, I noted the very welcome news that Google’s YouTube is cracking down on the presence of dangerous prank and dare videos, rightly categorizing them as potentially harmful content no longer permitted on the platform. Excellent.

Even more recently, YouTube announced a new policy regarding the category of misleading and clearly false “conspiracy theory” videos that would sometimes appear as suggested videos.

Quite a few folks have asked me how I feel about this newer policy, which aims to prevent this category of videos from being suggested by YouTube’s algorithms, unless a viewer is already subscribed to the YouTube channels that uploaded the videos in question.

The policy will take time to implement given the significant number of videos involved and the complexities of classification, but I feel that overall this new policy regarding these videos is an excellent compromise.

If you’re a subscriber to a conspiracy video hosting channel, conspiracy videos from that channel would still be suggested to you.

Otherwise, if you don’t subscribe to such channels, you could still find these kinds of videos if you purposely search for them — they’re not being removed from YouTube.

A balanced approach to a difficult problem. Great work!

–Lauren–

Another Massive Google User Trust Failure, As They Kill Louisville Fiber on Short Notice

It’s getting increasingly difficult to keep up with Google’s User Trust Failures these days, as they continue to rapidly shed “inconvenient” users faster than a long-haired dog. I do plan a “YouTube Live Chat” to discuss these issues and other Google-related topics, tentatively scheduled for Tuesday, February 12 at 10:30 AM PST. The easiest way to get notifications about this would probably be to subscribe to my main YouTube channel at: https://www.youtube.com/vortextech (be sure to click on the “bell” after subscribing if you want real time notifications). I rarely promote the channel but it’s been around for ages. Don’t expect anything fancy.

In the meantime, let’s look at Google’s latest abominable treatment of users, and this time it’s users who have actually been paying them with real money!

As you probably know, I’ve recently been discussing Google’s massive failures involving the shutdown of Google+ (“Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening” – https://lauren.vortex.com/2019/02/04/google-users-panic-over-google-deletion-emails-heres-whats-actually-happening).

In the process of the G+ shutdown — announced on very short notice — Google has been mistreating some of its most loyal users, many of whom are decision makers regarding Google commercial products.

One might think that Google wouldn’t treat their paying customers as badly — but hey, you’d be wrong.

Remember when Google Fiber was a “thing” — when cities actually competed to be on the Google Fiber deployment list? It’s well known that incumbent ISPs fought against Google on this tooth and nail, but there was always a suspicion that Google wasn’t really in this for the long haul, that it was really more of an experiment and an effort to try to jump-start other firms to deploy fiber-based Internet and TV systems.

Given that the project has been downsizing for some time now, Google’s announcement today that they’re pulling the plug on the Louisville Google Fiber system doesn’t come as a complete surprise.

But what’s so awful about their announcement is the timing, which shows Google’s utter contempt for their Louisville fiber subscribers, on a system that only got going around two years ago.

Just a relatively short time ago, in August 2018, Google was pledging to spend the next two years dealing with the fiber installation mess that was occurring in their Louisville deployment areas (“Google Fiber announces plan to fix exposed fiber lines in the Highlands” – https://www.wdrb.com/news/google-fiber-announces-plan-to-fix-exposed-fiber-lines-in/article_fbc678c3-66ef-5d5b-860c-2156bc2f0f0c.html).

But now that’s all off. Google is giving their Louisville subscribers notice that they have only just over two months before their service ends. Go find ye another ISP in a hurry, oh suckers who trusted us!

Google will provide those two remaining months’ service for free, but that’s hardly much consolation for their subscribers who now have to go through all the hassles of setting up alternate services with incumbent carriers who are laughing their way to the bank.

Imagine if one of those incumbent ISPs — a major telephone or cable company, say — tried a shutdown stunt like this with only a couple of months’ notice. They’d rightly be raked over the coals by regulators and politicians.

Google claims that this abrupt shutdown of the Louisville system will have no impact on other cities where Google Fiber is in operation. Perhaps so — for now. But as soon as Google finds those other cities “inconvenient” to serve any longer, Google will most likely trot out the guillotine for subscribers in those cities in a similar manner. C’mon, after treating Louisville this way, why should Fiber subscribers in other cities trust Google when it comes to their own Google-provided services?

Ever more frequently now, this seems to be The New Google’s game plan. Treat users — even paying users — like guinea pigs. If they become inconvenient to care for, give them a couple of months’ notice and then unceremoniously flush them down the toilet. Thank you for choosing Google!

Google is day by day becoming unrecognizable to those of us who have long felt it to be a great company that cared about more than just the bottom line.

Googlers — the rank and file Google employees and ex-employees whom I know — are still great. Unfortunately, as I noted in “Google’s Brain Drain Should Alarm Us All” (https://lauren.vortex.com/2019/01/12/googles-brain-drain-should-alarm-us-all), some of their best people are leaving or have recently left, and it becomes ever more apparent that Google’s focus is changing in ways that are bad for consumer users and causing business users to question whether they can depend on Google to be a reliable partner going forward (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google).

In the process of all this, Google is making itself ever more vulnerable to lying Google Haters — and to pandering politicians and governments — who hope to break up the firm and/or suck in an endless money stream of billions in fines from Google to prop up failing 20th century business models.

The fact that Google for the moment is still making money hand over fist may be partially blinding their upper management to the looming brick wall of government actions that could potentially stop Google dead in its tracks — to the detriment of pretty much everyone except the politicos themselves.

I remain a believer that suggested new Google internal roles such as ombudspersons, user advocates, ethics officers, and similar positions — all of which Google continues to fight against creating — could go a long way toward bringing balance back to the Google equation that is currently skewing ever more rapidly toward the dark side.

I continue — perhaps a bit foolishly — to believe that this is still possible. But I am decreasingly optimistic that it shall come to pass.

–Lauren–