The Challenges of Moderating User Content on the Internet (and a Bit of History)

I increasingly suspect that the days of large-scale public distribution of unmoderated UGC (User Generated Content) on the Internet may shortly begin drawing to a close in significant ways. The most likely path leading to this over time will be a combination of steps taken independently by social media firms and future legislative mandates.

Such moderation at scale may follow the model of AI-based first-level filtering, followed by layers of human moderators. It seems unlikely that today’s scale of postings could continue under such a moderation model, but future technological developments may well turn out to be highly capable in this realm.

Back in 1985 when I launched my “Stargate” experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national “Superstation WTBS,” I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet. My main related concerns back then did not involve hate speech or violent speech — which were not significant problems on the Net at that point — but, human nature being what it is, I felt that the situation was likely to get much worse rather than better.

What I had largely forgotten in the decades since then though, until I did a Google search on the topic today (a great deal of original or later information on Stargate is still online, including various of my relevant messages in very early mailing list archives that will likely long outlive me), is the level of animosity about that decision that I received at the time. My determination for Stargate to only carry moderated groups triggered cries of “censorship,” but I did not feel that responsible moderation equated with censorship — and that is still my view today.

And now, all these many years later, it’s clear that we’ve made no real progress in these regards. In fact, the associated issues of abuse of unmoderated content in hateful and dangerous ways make the content problems that mostly concerned me back then seem like a soap bubble popping compared with the nuclear bomb detonating now.

We must solve this. We must begin serious and coordinated work in this vein immediately. And my extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage.

Time is of the essence.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab (repost with new introduction)

The post below was originally published on 10 August 2019. In light of recent events, particularly the storming of the United States Capitol by a violent mob — resulting in five deaths — and subsequent actions by major social media firms relating to the exiting President Donald Trump (terms of service enforcement actions by these firms that I do endorse under these extraordinary circumstances), I feel that the original post is again especially relevant. While the threats of moves by the Trump administration against CDA Section 230 are now moot, it is clear that 230 will be a central focus of Congress going forward, and it’s crucial that we all understand the risks of tampering with this key legislation that is foundational to the availability of responsible speech and content on the Internet. –Lauren–

– – – – – – – – – –

The Right’s (and Left’s) Insane Internet Content Power Grab
(10 August 2019)

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither demand is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Recommendation: Do Not Install or Use Centralized Server Coronavirus (COVID-19) Contact Tracing Apps

Everyone, I hope you and yours are safe and well during this unprecedented pandemic.

As I write this, various governments are rushing to implement — or have already implemented — a wide range of different smartphone apps purporting to be for public health COVID-19 “contact tracing” purposes. 

The landscape of these is changing literally hour by hour, but I want to emphasize MOST STRONGLY that not all of these apps are created equal, and I urge you not to install many of them unless you are required to by law — which can indeed be the case in countries such as China and Poland, just to name two examples.

Without getting into deep technical details here, there are basically two kinds of these contact tracing apps. The first is apps that send your location or other contact-related data to centralized servers (whether the data being sent is claimed to be “anonymous” or not). Regardless of promised data security and professed limitations on government access to and use of such data, I do not recommend voluntarily choosing to install and/or use these apps under any circumstances.

The other category of contact tracing apps uses local phone storage and never sends your data to centralized servers. This is by far the safer category, and it includes the recently announced Apple-Google Bluetooth contact tracing API, now being adopted in some countries (including Germany, which just announced that due to privacy concerns it has changed course from its original plan of using centralized servers). In general, installing and using these local storage contact tracing apps presents a vastly less problematic and far safer situation compared with centralized server contact tracing apps.
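To make the distinction concrete, here is a deliberately simplified sketch of the decentralized idea in Python. It is not the actual Apple-Google API; the key derivation and identifier format below are assumptions for illustration only. The essential point is that phones broadcast short-lived identifiers derived from locally held daily keys, remember identifiers they hear nearby, and check any published “diagnosis keys” entirely on the device.

```python
# Simplified sketch of the decentralized ("local storage") approach; this is
# NOT the actual Apple-Google API, and the derivation below is illustrative only.
import os, hmac, hashlib

def daily_identifiers(daily_key: bytes, count: int = 144) -> list:
    """Derive the rotating identifiers a phone would broadcast during one day."""
    return [hmac.new(daily_key, interval.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
            for interval in range(count)]

# Each phone keeps only (a) its own daily keys and (b) identifiers it observed nearby.
my_daily_key = os.urandom(16)
observed_nearby = set()   # would be filled in by the Bluetooth layer in a real app

def check_exposure(published_keys) -> bool:
    """Re-derive identifiers from keys published by diagnosed users and compare
    them locally; nothing about our own contacts ever leaves the device."""
    for key in published_keys:
        if any(ident in observed_nearby for ident in daily_identifiers(key)):
            return True
    return False
```

The privacy property worth noting is that the matching in check_exposure() happens on the phone itself; only users who choose to report a diagnosis publish anything, and even then only their own keys, never a list of the people they encountered.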

Even if you personally have 100% faith that your own government will “do no wrong” with centralized server contact tracing apps — either now or in the future under different leadership — keep in mind that many other persons in your country may not be as naive as you are, and will likely refuse to install and/or use centralized server contact tracing apps unless forced to do so by authorities.

Very large-scale acceptance and use of any contact tracing apps are necessary for them to be effective for genuine pandemic-related public health purposes. If enough people won’t use them, they are essentially worthless for their purported purposes.

As I have previously noted, various governments around the world are salivating at the prospect of making mass surveillance via smartphones part of the so-called “new normal” — with genuine public health considerations as secondary goals at best.

We must all work together to bring the COVID-19 disaster to an end. But we must not permit this tragic situation to hand carte blanche permissions to governments to create and sustain ongoing privacy nightmares in the process. 

Stay well, all.

–Lauren–

Coronavirus Reactions Creating Major Internet Security Risks

As vast numbers of people suddenly work from home in reaction to the coronavirus pandemic, doctors switch to heavy use of video office visits, and in general more critical information than ever is suddenly thrust onto the Internet, the risks of major security and privacy disasters that will long outlast the pandemic are rising rapidly.

For example, the U.S. federal government is suspending key aspects of medical privacy laws to permit use of “telemedicine” via commercial services that have never been certified to be in compliance with the strict security and privacy rules associated with HIPAA (the Health Insurance Portability and Accountability Act). The rush to provide more remote access to medical professionals is understandable, but we must also understand the risks of data breaches that, once they have occurred, can never be reversed.

Sloppy computer security practices that have long been warned against are now coming home to roost, and the crooks as usual are way ahead of the game.  

The range of attack vectors is both broad and deep. Many firms have never prepared for large-scale work at home situations, and employees using their own PCs, laptops, phones, or other devices to access corporate networks can represent a major risk to company and customer data. 

Fake web sites purporting to provide coronavirus information and/or related products are popping up in large numbers around the Net, all with nefarious intents to spread malware, steal your accounts, or rob you in other ways.

Even when VPNs (Virtual Private Networks) are in use, malware on employee personal computers may happily transit VPNs into corporate networks. Commercial VPN services introduce their own risk factors, both due to potential flaws in their implementations and the basic technical limitations inherent in using a third-party service for such purposes. Whenever possible, corporate users should avoid third-party VPN services, and firms and other organizations using VPNs should deploy “in-house” VPN systems if they truly have the technical expertise to do so safely.

But far better than VPNs are “zero trust” security models such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), which can provide drastically better security without the disadvantages and risks of VPNs.

There are even more basic issues in focus. Most users still refuse to enable 2-factor (aka “2-step”) verification systems (https://www.google.com/landing/2step/) on services that support it, putting them at continuous risk of successful phishing attacks that can result in account hijacking and worse. 
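As an aside for readers unfamiliar with how the common app-generated variant of these codes works, here is a minimal sketch of the standard time-based one-time password calculation (RFC 6238). The base32 secret in the usage comment is a hypothetical placeholder, and real services add further protections on top of this.

```python
# Minimal TOTP (RFC 6238) sketch: a 6-digit code derived from a shared secret
# and the current 30-second time window. The secret shown below is a placeholder.
import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # which 30-second window we are in
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example (placeholder secret): print(totp("JBSWY3DPEHPK3PXP"))
```

Note that codes like these primarily protect against password reuse and remote guessing; a determined real-time phishing site can still relay them, which is one reason hardware security keys are stronger still.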

I’ve been writing about all of this for many years here in this blog and in other venues. I’m not going to make a list here of my many relevant posts over time — they’re easy enough to find. 

The bottom line is that the kind of complacency that has been the hallmark of most firms and most users when it comes to computer security is even less acceptable now than ever before. It’s time to grow up, bite the bullet, and expend the effort — which in some cases isn’t a great deal of work at all! — to secure your systems, your data, and yes, your life and the lives of those that you care about.

Stay well.

–Lauren–

Iowa Screams: Don’t Trust High-Tech Elections!

For years — actually for decades — those of us in the Computer Science community who study election systems have, with almost total unanimity, warned against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. My colleagues and I have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!

We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.

Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!

Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who don’t have the ability to spend so much time engaged in a process that can take hours rather than a few minutes to cast their votes. Not only should the Democratic party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process. 

In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app), individual voters weren’t using their smartphones to vote, but caucus locations — almost 1700 of them in Iowa — were supposed to use the app (that melted down) to report their results. And of course the voice phone call system that was designated as the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.

Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.

Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic conspiracy theory environment.

It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?

We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.

Make your choice.

–Lauren–

How Some Software Designers Don’t Seem to Care About the Elderly

One of the most poignant ironies of the Internet is that at the very time that it’s become increasingly difficult for anyone to conduct their day-to-day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — the visually and/or motor impaired are just two examples — but the elderly are a particular target.

Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going. 

Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now routinely depend on these technologies.

This is a difficult state of affairs, to say the least.

There’s an aspect of this that is even worse. It’s attitudes! It’s the attitudes of many software designers that suggest they apparently really don’t care about this class of users much — or at all.

They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function. 

We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.

However, we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.

A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.

Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.

I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. Yet what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!

This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.

Yet this morning we engaged in the following tweet thread:

Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)

Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!

Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.

Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.

Somewhere around this point, he closed down the dialogue by blocking me on Twitter.

This was indeed his choice, but it seems a bit sad, given that I had previously had more fruitful discussions of this matter on the main Discourse discussion forum itself.

Of course his anti-vaxx comparison is inherently flawed. There are a variety of programs to help people — who can’t otherwise afford important vaccinations — to receive them. By comparison, vast numbers of elderly persons (often living in isolation) are on their own when dealing with their computers.

The world will keep spinning after Discourse drops IE support.

Far more important than this particular case, though, is the attitude being expressed by so many in the software community — an attitude suggesting that many highly capable software engineers don’t really appreciate these users or the kinds of real-world problems that can prevent them from making even relatively simple changes or upgrades to their systems — systems they need to keep using as much as anyone.

And that’s an unnecessary tragedy.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither demand is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Another Breach: What Capital One Could Have Learned from Google’s “BeyondCorp”

Another day, another massive data breach. This time some 100 million people in the U.S. were affected, and millions more in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS, but with a misconfigured firewall associated with Capital One, the bank whose credit card customers and card applicants were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better solutions to providing network security already exist.

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates traditional firewalls (and in nearly all instances, VPNs) because there is no longer any need for such devices or systems that, once breached, give an attacker access to internal goodies.
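As a rough illustration of what that means in practice, here is a minimal sketch of a zero-trust style access decision (illustrative only, not Google's actual BeyondCorp implementation): every request is evaluated on user identity and device state, and network location by itself confers no trust. The device fields and policy thresholds below are assumptions.

```python
# Minimal sketch of a zero-trust access decision (illustrative; not BeyondCorp's
# actual implementation). The device fields and policy thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    managed: bool          # enrolled in the organization's device inventory
    disk_encrypted: bool
    os_patch_level: int    # hypothetical numeric patch level

@dataclass
class User:
    identity: str
    mfa_verified: bool
    groups: frozenset

def access_allowed(user: User, device: Device, required_group: str,
                   resource_min_patch: int) -> bool:
    """Allow access only if the user and the device independently satisfy policy."""
    if not user.mfa_verified or required_group not in user.groups:
        return False
    if not (device.managed and device.disk_encrypted):
        return False
    return device.os_patch_level >= resource_min_patch

# Example: a request from an unmanaged laptop is denied even if it originates
# from inside the corporate network, because the network itself is never trusted.
print(access_allowed(User("alice@example.com", True, frozenset({"billing"})),
                     Device(managed=False, disk_encrypted=True, os_patch_level=42),
                     required_group="billing", resource_min_patch=40))   # -> False
```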

If Capital One had been following BeyondCorp principles, there’d likely be 100+ million fewer potentially panicky people today.

–Lauren–

Earthquakes vs. Darth Vader

When the Ridgecrest earthquake reached L.A. yesterday evening (no damage this far from the epicenter from that quake or the one the previous day) I was “in” a moving elevator under attack in the “Vader Immortal” Oculus Quest VR simulation. I didn’t realize that there was a quake at all, everything seemed part of the VR experience (haptic feedback in the hand controllers was already buzzing my arms at the time).

The only oddity was that I heard a strange clinking sound that at the time had no obvious source, but that I figured was somehow part of the simulation. Actually, it was probably the sound of ceiling fan knob chains above me hitting the glass light bulb fixtures as the fan presumably swayed a bit.

Quakes of this sort are actually very easy to miss if you’re not sitting or standing quietly (I barely felt the one the previous day and wasn’t immediately sure that it was a quake), but I did find my experience last night to be rather amusing in retrospect.

By the way, “Vader Immortal” — and the Quest itself — are very, very cool, very much 21st century “sci-fi” tech finally realized. My thanks to Oculus for sending me a Quest for my experiments.

–Lauren–

YouTube’s Public Videos Dilemma

So there’s yet another controversy surrounding YouTube and videos that include young children — this time concerns about YouTube suggesting such videos to “presumed” pedophiles.

We can argue about what YouTube should or should not be recommending to any given user. There are some calls for YT to not recommend such videos when it detects them (an imperfect process) — though I’m not convinced that this would really make much difference so long as the videos themselves are public.

But here’s a more fundamental question:

Why the hell are parents uploading videos of young children publicly to YouTube in the first place?

This is of course a subset of a more general issue — parents who apparently can’t resist posting all manner of photos and other personal information about their children in public online forums, much of which is going to be at the very least intensely embarrassing to those children when they’re older. And the Internet rarely ever forgets anything that was ever public (the protestations of EU politicians and regulators notwithstanding).

There are really only two major possibilities concerning such video uploads. Either the parents don’t care about these issues, or they don’t understand them. Or perhaps both.

Various apps and web pages exist that will automatically display YT videos that have few or no current views from around the world. There’s an endless stream of these. Thousands. Millions? Typically these seem as if they have been automatically uploaded by various camera and video apps, possibly without any specific intentions for the uploading to occur. Many of these involve schools and children.

So a possible answer to my question above may be that many YT users — including parents of young children — are either not fully aware of what they are uploading, or do not realize that the uploads are public and are subject to being suggested to strangers or found by searching. 

This leads us to another question. YT channel owners already have the ability to set their channel default privacy settings and the privacy settings for each individual video. 

Currently those YT defaults are initially set to public.

Should YT’s defaults be private rather than public?

Looking at it from a user trust and safety standpoint, we may be approaching such a necessity, especially given the pressure for increased regulatory oversight from politicians and governments, which in my opinion is best avoided if at all possible.

These questions and their ramifications are complex to say the least.

Clearly, default channel and video privacy would be the safest approach, ensuring that videos would typically only be shared with specific other users deemed suitable by the channel owner.

All of the public sharing capabilities of YT would still be present, but would require the owner to make specific decisions about the channel default and/or individual video settings. If a channel owner wanted to make some or all of their videos public — whether existing uploads or new ones going forward — that would be their choice. Full channel and individual video privacy would only be the original default, purely as a safety measure.

Finer-grained settings might also be possible, not only including existing options like “unlisted” videos, but also specific options to control the visibility of videos and channels in search and suggestions.
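For reference, the per-video setting that such defaults would govern is already exposed programmatically today. A minimal sketch of flipping one existing upload to private via the YouTube Data API v3 follows; it assumes the google-api-python-client library, an already-authorized OAuth credential object, and a placeholder video ID.

```python
# Minimal sketch: set an existing upload to private via the YouTube Data API v3.
# Assumes google-api-python-client is installed and "creds" is an authorized
# OAuth credential for the channel owner; "VIDEO_ID" is a placeholder.
from googleapiclient.discovery import build

def make_video_private(creds, video_id: str) -> dict:
    youtube = build("youtube", "v3", credentials=creds)
    return youtube.videos().update(
        part="status",
        body={"id": video_id, "status": {"privacyStatus": "private"}},
    ).execute()

# make_video_private(creds, "VIDEO_ID")
```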

Some of the complexities of such an approach are obvious. More controls means the potential for more user confusion. Fewer videos in search and suggestions limits visibility and could impact YT revenue streams to both Google and channel owners in complex ways that may be difficult to predict with significant accuracy.

But in the end, the last question here seems to be a relatively simple one. Should any YouTube uploaders ever have their videos publicly available for viewing, search, or suggestions if that was not actually their specific and informed intent?

I believe that the answer to that question is no.

Be seeing you.

–Lauren–

A Major New Privacy-Positive Move by Google

Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:

The Google Page That Google Haters Don’t Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google’s “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Today Google announced a new feature that I’ve long been hoping for — the option to automatically delete these kinds of data after specific periods of time have elapsed (3 month and 18 month options). And of course, you still have the ability to use the longstanding manual features for control and deletion of such data whenever you desire, as described at the links mentioned above.

The new auto-delete feature will be deployed over coming weeks first to Location History and to Web & App Activity.

This is really quite excellent. It means that you can take advantage of the customization and other capabilities that are made possible by leaving data collection enabled, but if you’re concerned about longer term storage of that data, you’ll be able to activate auto-delete and really get the best of both worlds without needing to manually delete data yourself at intervals.

Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow. 

My kudos to the Google teams involved!

–Lauren–

Could AI Help Prevent Mass Shootings?

Could machine learning/AI techniques help to prevent mass shootings or other kinds of terrorist attacks? That’s the question. I do not profess to know the answer — but it’s a question that as a society we must seriously consider.

A notable, relatively recent attribute of many mass attacks is that the criminal perpetrators don’t only want to kill — they want as large an audience as possible for their murderous activities, frequently planning their attacks openly on the Internet, even announcing online the initiation of their killing sprees and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.

It’s particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. There can be several reasons for this lack of reporting. Users may be unsure whether or not the posts are serious, and don’t want to report someone for a fake attack scenario. Other users may want to report but not know where to report such a situation. And there may be other users who are actually urging the perpetrator onward to the maximum possible violence.

“Freedom of speech” and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they have actually occurred may possibly be viewed as a kind of “big data” problem.

We can relatively easily list some of the factors that would need to be considered in these respects.

What level of resources would be required to keep an “automated” watch on at least the public postings and sites most likely to harbor the kinds of discussions and “attack manifestos” of concern? Could tools be developed to help separate false positive, faked, forged, or other “fantasy” attack postings from the genuine ones? How would these be tracked over time to include other sites involved in these operations, and to prevent “gaming” of the systems that might attempt to divert these tools away from genuine attack planning?

Obviously — as in many AI-related areas — automated systems alone would not be adequate to trigger full-scale alarms. These systems would primarily act as big filters, passing their perceived alerts along to human teams — with those teams making final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
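As a toy illustration of that division of labor, the sketch below ranks public posts with a simple text classifier and forwards only the highest-scoring ones to a human review queue. It assumes scikit-learn; the training examples and threshold are invented for illustration, and nothing this small would be adequate in practice.

```python
# Toy "first-level filter" sketch (assumes scikit-learn). The tiny training set
# and the threshold are hypothetical; the model only ranks posts for HUMAN review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["i am going to attack the market tomorrow at noon",
               "check out my new game stream tonight",
               "here is my manifesto, the attack begins soon",
               "great concert last night, see everyone next week"]
train_labels = [1, 0, 1, 0]        # 1 = threat-like, 0 = benign (invented examples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(posts, threshold: float = 0.8):
    """Return only the posts scoring above the threshold, for human reviewers."""
    scores = model.predict_proba(posts)[:, 1]
    return [post for post, score in zip(posts, scores) if score >= threshold]
```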

It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.

The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information that is already available toward stopping these attacks before they actually occur.

To the extent that AI/machine learning technologies — in concert with human analysis and decision-making — may possibly provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.

–Lauren–

Pressuring Google’s AI Advisory Panel to Wear a Halo Is Very Dangerous

UPDATE (April 4, 2019): Google has announced that due to the furor over ATEAC (their newly announced external advisory panel dealing with AI issues), they have dissolved the panel entirely. As I discuss in the original post below, AI is too important for our typical political games — and closed-minded unwillingness to even listen to other points of view — to hold sway, and such panels are potentially an important part of the solution to that problem. As I noted, I disagree strenuously with the views of the panel member (and their own organization) that was the focus of the intense criticism that apparently pressured Google into this decision, but I fear that an unwillingness to permit such organizations to even be heard at all in such venues will come back to haunt us mightily in our toxic political environment.

 – – –

Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).

A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels. 

Google announced its own such panel — the “Advanced Technology External Advisory Council” (ATEAC), last week. 

Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of prominent right-wing think tank Heritage Foundation president Kay Cole James. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now pulled out from ATEAC, apparently due to James’ presence on the panel and the resulting protests.

This is all extraordinarily worrisome. 

While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of “yes men” in agreement with more left-wing (and so admittedly my own) philosophies regarding social issues strikes me as vastly more dangerous.

Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.

“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.

Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of “stacking the deck” to favor preconceived outcomes, but would in fact be doing exactly that, opening up the firms involved to attacks by haters and pandering politicians who would just love to impose draconian regulatory regimes for their own benefits. 

The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.

I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.

We can certainly argue regarding the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice. 

AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.

–Lauren–

Don’t Blame YouTube and Facebook for Hate Speech Horrors

Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto. 

While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.

Let’s be very clear about this. While it can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which the purveyors of hate speech and lying propaganda were leveraging their platforms, they have of late taken major steps to deal with these problems, especially in the wake of breaking news like the NZ shooting, including taking various specific actions regarding takedowns, video suggestions, and other related issues as recommended publicly by various observers including myself.

Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.

In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be surfaced and altered in a myriad of ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human assets will continually improve these detection systems, absolute perfection is not likely in the cards for the foreseeable future, or more likely ever.
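A small example helps show why. The sketch below computes a simple perceptual “average hash” of a single video frame (assuming the Pillow imaging library; production systems use far more robust fingerprints over many frames). Near-duplicates land close together in Hamming distance, but cropping, mirroring, or re-encoding can push an altered copy past any fixed matching threshold.

```python
# Simple perceptual "average hash" of one frame (assumes Pillow). Real matching
# systems are far more sophisticated; this only illustrates the fragility.
from PIL import Image

def average_hash(frame_path: str, size: int = 8) -> int:
    img = Image.open(frame_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

# Hypothetical usage: frames within a small distance are "probably the same",
# but aggressive edits to a re-upload can defeat any fixed threshold.
# match = hamming_distance(average_hash("original.png"), average_hash("reupload.png")) <= 10
```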

Meanwhile, other demands being bandied about are equally specious.

Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.

Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are also utterly impractical. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous numbers of videos being uploaded every minute. Not only would full previews be required — since a prohibited clip could be spliced into permitted footage — there would still be misidentifications.

Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evil from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.

Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.

But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem. 

Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.

In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).

The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?

The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites. 

They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.

You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.

The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.

The battle lines are drawn. 

–Lauren–

As Google’s YouTube Battles Evil, YouTube Creators Are at a Crossroads

UPDATE (February 28, 2019): More updates on our actions related to the safety of minors on YouTube

 – – –

For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!

The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.

YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.

And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.

YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of any mere mortal beings, with the exception of Googlers like the YouTube SREs themselves who keep the wheels spinning for the entire massive mechanism.

In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.

I’ve discussed all of these aspects of YouTube — and my both positive and negative critiques regarding how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.

I will note again though that — especially of late — Google has become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removal of dangerous “prank and dare” videos, demonetization and general de-recommendation of false “conspiracy” videos, and, just announced, demonetization and other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos.

This must be an even more intense time than usual for the YouTube policy folks up in San Bruno at YouTube HQ — because over the last few days yet another massive controversy regarding YouTube has erupted, this time one that has been bubbling under the surface for a long time, and suddenly burst forth dramatically and rather confusingly as well, involving the “hijacking” of innocent YouTube videos’ comments by pedophiles.

YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. Many YouTube uploaders — I’ll refer to them as creators going forward in this post — are effectively oblivious to comments even on their own videos, which, given that the default setting for YouTube videos is to permit comments without any moderation, has become an increasingly problematic issue.

My own policy (started as soon as the functionality to do so became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.

In fact, a few minutes ago when I took another look at that first YouTube video (“Me at the zoo”) to make sure that I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up on there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spams and scams.

Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.

There have already been a bunch of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault for suggesting more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault, the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos. 

It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.

I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.

To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.

The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.

While I’d anticipate that Google is working on methods to algorithmically better filter comments at scale to try to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” to target innocent individual YouTube creators and videos, to try to trigger YouTube enforcement actions against those innocent parties.

This could easily create a terrible kind of Hobson’s choice. For safety’s sake, these innocent creators may be forced to disable comments completely, in the process eliminating much of the value of their videos to their viewers. On the other hand, many creators of high viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.

A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible. 

It might be necessary to consider permanently changing the default comments setting away from “allowed” — to either “not allowed” or “moderated” — for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy alteration appear difficult to predict.

Improved tools to aid creators in moderating comments on high-viewership videos would also seem to be in order — perhaps by leveraging third-party services or trusted viewer communities.

There are a variety of other possible approaches as well.

It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads, a junction that will likely require some significant changes to navigate successfully, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained and allowed to grow.

–Lauren–

Another Positive Move by YouTube: No More General “Conspiracy Theory” Suggestions

A few weeks ago, I noted the very welcome news that Google’s YouTube is cracking down on the presence of dangerous prank and dare videos, rightly categorizing them as potentially harmful content no longer permitted on the platform. Excellent.

Even more recently, YouTube announced a new policy regarding the category of misleading and clearly false “conspiracy theory” videos that would sometimes appear as suggested videos.

Quite a few folks have asked me how I feel about this newer policy, which aims to prevent this category of videos from being suggested by YouTube’s algorithms, unless a viewer is already subscribed to the YouTube channels that uploaded the videos in question.

The policy will take time to implement given the significant number of videos involved and the complexities of classification, but I feel that overall this new policy regarding these videos is an excellent compromise.

If you’re a subscriber to a conspiracy video hosting channel, conspiracy videos from that channel would still be suggested to you.

Otherwise, if you don’t subscribe to such channels, you could still find these kinds of videos if you purposely search for them — they’re not being removed from YouTube.

A balanced approach to a difficult problem. Great work!

–Lauren–

Another Massive Google User Trust Failure, As They Kill Louisville Fiber on Short Notice

It’s getting increasingly difficult to keep up with Google’s User Trust Failures these days, as they continue to rapidly shed “inconvenient” users faster than a long-haired dog sheds fur. I do plan a “YouTube Live Chat” to discuss these issues and other Google-related topics, tentatively scheduled for Tuesday, February 12 at 10:30 AM PST. The easiest way to get notifications about this would probably be to subscribe to my main YouTube channel at: https://www.youtube.com/vortextech (be sure to click on the “bell” after subscribing if you want real-time notifications). I rarely promote the channel but it’s been around for ages. Don’t expect anything fancy.

In the meantime, let’s look at Google’s latest abominable treatment of users, and this time it’s users who have actually been paying them with real money!

As you probably know, I’ve recently been discussing Google’s massive failures involving the shutdown of Google+ (“Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening” – https://lauren.vortex.com/2019/02/04/google-users-panic-over-google-deletion-emails-heres-whats-actually-happening).

Google has been mistreating loyal Google users — among the most loyal that they have and who often are decision makers about Google commercial products — in the process of the G+ shutdown on very short notice.

One might think that Google wouldn’t treat their paying customers as badly as that — but hey, you’d be wrong.

Remember when Google Fiber was a “thing” — when cities actually competed to be on the Google Fiber deployment list? It’s well known that incumbent ISPs fought against Google on this tooth and nail, but there was always a suspicion that Google wasn’t really in this for the long haul, that it was really more of an experiment and an effort to try to jump-start other firms into deploying fiber-based Internet and TV systems.

Given that the project has been downsizing for some time now, Google’s announcement today that they’re pulling the plug on the Louisville Google Fiber system doesn’t come as a complete surprise.

But what’s so awful about their announcement is the timing, which shows Google’s utter contempt for their Louisville fiber subscribers, on a system that only got going around two years ago.

Just a relatively short time ago, in August 2018, Google was pledging to spend the next two years dealing with the fiber installation mess that was occurring in their Louisville deployment areas (“Google Fiber announces plan to fix exposed fiber lines in the Highlands” – https://www.wdrb.com/news/google-fiber-announces-plan-to-fix-exposed-fiber-lines-in/article_fbc678c3-66ef-5d5b-860c-2156bc2f0f0c.html).

But now that’s all off. Google is giving their Louisville subscribers notice that they have only just over two months before their service ends. Go find ye another ISP in a hurry, oh suckers who trusted us!

Google will provide those two remaining months’ service for free, but that’s hardly much consolation for their subscribers who now have to go through all the hassles of setting up alternate services with incumbent carriers who are laughing their way to the bank.

Imagine if one of those incumbent ISPs, like a major telephone or cable company, tried a shutdown stunt like this with only a couple of months’ notice? They’d be rightly raked over the coals by regulators and politicians.

Google claims that this abrupt shutdown of the Louisville system will have no impact on other cities where Google Fiber is in operation. Perhaps so — for now. But as soon as Google finds those other cities “inconvenient” to serve any longer, Google will most likely trot out the guillotines to subscribers in those cities in a similar manner. C’mon, after treating Louisville this way, why should Fiber subscribers in other cities trust Google when it comes to their own Google-provided services?

Ever more frequently now, this seems to be The New Google’s game plan. Treat users — even paying users — like guinea pigs. If they become inconvenient to care for, give them a couple of months’ notice and then unceremoniously flush them down the toilet. Thank you for choosing Google!

Google is day by day becoming unrecognizable to those of us who have long felt it to be a great company that cared about more than just the bottom line.

Googlers — the rank and file Google employees and ex-employees whom I know — are still great. Unfortunately, as I noted in “Google’s Brain Drain Should Alarm Us All” (https://lauren.vortex.com/2019/01/12/googles-brain-drain-should-alarm-us-all), some of their best people are leaving or have recently left, and it becomes ever more apparent that Google’s focus is changing in ways that are bad for consumer users and causing business users to question whether they can depend on Google to be a reliable partner going forward (“The Death of Google” – https://lauren.vortex.com/2018/10/08/the-death-of-google).

In the process of all this, Google is making itself ever more vulnerable to lying Google Haters — and to pandering politicians and governments — who hope to break up the firm and/or suck in an endless money stream of billions in fines from Google to prop up failing 20th century business models.

The fact that Google for the moment is still making money hand over fist may be partially blinding their upper management to the looming brick wall of government actions that could potentially stop Google dead in its tracks — to the detriment of pretty much everyone except the politicos themselves.

I remain a believer that suggested new Google internal roles such as ombudspersons, user advocates, ethics officers, and similar positions — all of which Google continues to fight against creating — could go a long way toward bringing balance back to the Google equation that is currently skewing ever more rapidly toward the dark side.

I continue — perhaps a bit foolishly — to believe that this is still possible. But I am decreasingly optimistic that it shall come to pass.

–Lauren–

Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening

Two days ago I posted “Google’s Google+ Shutdown Emails Are Causing Mass Confusion” (https://lauren.vortex.com/2019/02/02/googles-google-shutdown-emails-are-causing-mass-confusion) — and the reactions I’m receiving make it very clear that the level of confusion and panic over this situation by vast numbers of Google users is even worse than I originally realized. My inbox is full of emails from worried users asking for help and clarifications that they can’t find or get from Google (surprise!) — and my Google+ (G+) threads on the topic are similarly overloaded with desperate comments. People are telling me that their friends and relatives have called them, asking what this all means.

Beyond the user-trust-abusing manner in which Google has been conducting the entire consumer Google+ shutdown process (even their basic “takeout” tool to download your own posts is reported to be unreliable for G+ downloads at this point), their notification emails, which I had long urged be sent to provide clarity to users, were instead worded in ways that have massively confused many users, enormous numbers of whom don’t even know what Google+ actually is. These users typically don’t understand the ways in which G+ is linked to other Google services. They understandably fear that their other Google services may be negatively affected by this mess.

Since Google isn’t offering meaningful clarification for panicked users — presumably taking its usual “this too shall pass” approach to user support problems — I’ll clarify this all as succinctly as I can — to the best of my knowledge — right here in this post.

UPDATE (February 5, 2019): Google has just announced that the Web notification panel primarily used to display G+ notifications will be terminated this coming March 7. This cuts another month off the useful life of G+, right when we’ll need notifications the most to coordinate with our followers for continuing contacts after G+. Without the notification panel, this will be vastly more difficult, since the alternative notifications page is very difficult to manage. No apologies. No nuthin’. First it was August. Then April. Now March. Can Google mistreat consumer users any worse? You can count on it!

Here’s an important bottom line: Core Google Services that you depend upon such as Gmail, Drive, Photos, YouTube, etc. will not be fundamentally affected by the G+ shutdown, but in some cases visible effects may occur due to the tight linkages that Google created between G+ and other services.

No, your data on Gmail or Drive won’t be deleted by the Google+ shutdown process. Your uploaded YouTube videos won’t be deleted by this.

However, outside of the total loss of user trust by loyal Google+ users, triggered by the kick in the teeth of the Google+ shutdown (without even providing a tool to help with follower migration – “If Google Cared: The Tool That Could Save Google+ Relationships” – https://lauren.vortex.com/2019/02/01/if-google-cared-the-tool-that-could-save-google-relationships), there will be a variety of other Google services that will have various aspects “break” as a result of Google’s actions related to Google+.

To understand why, it’s important to understand that when Google+ was launched in 2011, it was positioned more as an “identity” product than a social media product per se. While it might have potentially competed with Facebook in some respects, creating a platform for “federated” identity across a wide variety of applications and sites was an important goal, and in the early days of Google+, battles ensued over such issues as whether users would continue to be required to use their ostensibly “real” names for G+ (aka, the “nymwars”).

Google acted to integrate this identity product — that is, Google+ — into many Google services and heavily promoted the use of G+ “profiles” and widgets (comments, +1 buttons, “follow” buttons, login functions, etc.) for third-party sites as well.

In some cases, Google required the creation of G+ profiles for key functions on other services, such as for creating comments on YouTube videos (a requirement that was later dropped, as user reactions in both the G+ and YouTube communities were overwhelmingly negative).

Now that consumer G+ has become an “inconvenience” to Google, they’re ripping it out by the roots and attempting to completely eliminate any evidence of its existence, by totally removing all G+ posts, comments, and the array of G+ functions that they had intertwined with other services and third-party sites.

This means that anywhere that G+ comments have continued to be present (including Google services like “Blogger”), those comments will vanish. Users whom Google had encouraged at other sites and services to use G+ profile identities (rather than the underlying Google Account identities) will find those capabilities and profiles will disappear. Sites that embedded G+ widgets and functions will have those capabilities crushed, and their page formats in many cases disrupted as a result. Photos that were stored only in G+ and not backed up into the mainstream Google Photos product will reportedly be deleted along with all the G+ posts and comments.

And then on top of all this other Google-created mayhem related to their mishandling of the G+ shutdown, we have those panic-inducing emails going out to enormous numbers of Google users, most of whom don’t understand them. They can’t get Google to explain what the hell is going on, especially in a way that makes sense if you don’t understand what G+ was in the first place, even if somewhere along the line Google finessed you into creating a G+ account that you never actually used.

There’s an old saying — many of you may have first heard it stated by “Scotty” in an old original “Star Trek” episode: “Fool me once, shame on you — fool me twice, shame on me!”

In a nutshell, this explains why so many loyal users of great Google services — services that we depend on every day — are so upset by how Google has handled the fiasco of terminating consumer Google+. This applies whether or not these users were everyday, enthusiastic participants in G+ itself (as I’ve been since the first day of beta availability) — or even if they don’t have a clue of what Google+ is — or was.

Even given the upper management decision to kill off consumer Google+, the actual process of doing so could have been handled so much better — if there was genuine concern about all of the affected users. Frankly, it’s difficult to imagine realistic scenarios of how Google could have bungled this situation any worse.

And that’s very depressing, to say the least.

–Lauren–

Google’s Google+ Shutdown Emails Are Causing Mass Confusion

UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening

– – –

As I have long been urging, Google is finally sending out emails to Google+ account holders warning them of the impending user trust failure that is the Google+ shutdown. However — surprise! — the atrocious way that Google has worded the message is triggering mass confusion from users who don’t even consider themselves to have ever been G+ users, and are now concerned that other Google services such as Photos, Gmail, YouTube, etc. may be shutting down and associated data deleted (“Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” – https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).

The underlying problem is that many users have G+ accounts but don’t realize it, and apparently Google is sending essentially the same message to everyone who ever had a G+ account, active or not. Because Google has been aggressively urging the creation of G+ accounts (literally until a few days ago!), many users inadvertently or casually created them, and then forgot about them, sometimes years ago. Now they’re receiving confusing “shutdown” messages and are understandably going into a panic.

UPDATE (February 3, 2019): I’m now receiving reports of users (especially ones receiving the notification emails who don’t recall having G+ accounts) fearing that “all their Google data is going to be deleted” — and also reports of many users who are assuming that these alarming emails about data deletion are fakes, spam, phishing attempts, etc. I’m also receiving piles of messages containing angry variations on “What the hell was Google thinking when they wrote those emails?”

During the horrific period some years ago when Google was REQUIRING the creation of G+ accounts to comment on YouTube (a disaster that I railed against both outside and inside the company at the time), vast numbers of comments and accounts became tightly intertwined between YouTube and G+, and the ultimate removal of that linkage requirement left enormous numbers of G+ accounts that had really only been created by users for YouTube commenting during that period.

So this new flood of confused and concerned users was completely predictable. If I had written the Google+ shutdown emails, I would have clearly covered these issues to help avoid upsetting Google users unnecessarily. But of course Google didn’t ask me to write the emails, so they followed their usual utilitarian approach toward users that they’re in the process of shedding — yet another user trust failure.

But this particular failure was completely preventable.

Be seeing you.

–Lauren–

If Google Cared: The Tool That Could Save Google+ Relationships

UPDATE (February 4, 2019): Google Users Panic Over Google+ Deletion Emails: Here’s What’s Actually Happening

UPDATE (February 2, 2019): Google’s Google+ Shutdown Emails Are Causing Mass Confusion

– – –

One of the questions I’m being frequently asked these days is specifically what could Google have done differently about their liquidation of Google+, given that a decision to do so was irrevocable. Much of this I’ve discussed in previous posts, including those linked within: “Google Finally Speaks About the G+ Shutdown: Pretty Much Tells Users to Go to Hell” (https://lauren.vortex.com/2019/01/30/google-finally-speaks-about-the-g-shutdown-pretty-much-tells-users-to-go-to-hell).

The G+ shutdown process is replete with ironies. The official Google account on G+ is telling users to follow Google on Google competitors like Facebook, Twitter, and Instagram. While there are finally some butter bar banners up warning of the shutdown — as I’ve long been calling for — warning emails apparently haven’t yet gone out to most ordinary active G+ users, but some users who had previously deleted their G+ accounts or G+ pages are reportedly receiving emails informing them that Google is no longer honoring their earlier promise to preserve photos uploaded to G+ — download them now or they’ll be crushed like bugs.

UPDATE (February 1, 2019): Emails with the same basic text as was included in the G+ help page announcement from January 30 regarding the shutdown (reference is at the “Go to Hell” link mentioned above), are FINALLY beginning to go out to current G+ account holders (and apparently, to some people who don’t even recall ever using G+). 

Google is also recommending that you build blogs or use other social media to keep in touch with your G+ followers and friends after G+ shuts down, but has provided no mechanism to help users to do so. And this is a major factor in Google’s user trust failure when it comes to their handling of this entire situation.

G+ makes it intrinsically difficult to reach out to your followers to get contact information for moving forward. You never know which of your regular posts will actually be seen by any given following user, and even trying to send private “+name” messages within G+ often fails, because G+ tends to sort similar profile names in inscrutable ways and in limited-length lists, often preventing you from ever pulling up the user whom you really want to contact. This gets especially bad when you have a lot of followers, believe me — I’ve battled this many times trying to send a message to an individual follower, often giving up in despair.

I would assert — and I’m not wholly ignorant of how G+ works — that it would be relatively straightforward to offer users a tool that could be used to ask their followers (by follower circles, en masse, etc.) whether they wished to stay in contact, and to give those followers who were interested in doing so the means to pass back to the original user a URL for a profile on a different social media platform, an email address, or, hell, even a phone number. Since this would be entirely voluntary, there would be no significant data privacy concerns.
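To make this more concrete, here is a minimal sketch (in Python) of the kind of voluntary, opt-in contact-exchange flow I have in mind. It is purely hypothetical: every name and structure below is my own invention for illustration, not any actual Google+ API or anything Google has built or announced.

    # Hypothetical sketch only -- illustrating a voluntary, opt-in
    # contact-exchange tool for departing G+ users. All names invented.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class ContactOffer:
        """A follower's voluntary reply: where they can be reached after G+."""
        follower_id: str
        new_contact: str  # e.g., a profile URL, an email address, or a phone number

    @dataclass
    class MigrationRequest:
        """A creator's request to their followers (by circle, or en masse)."""
        creator_id: str
        circles: Optional[List[str]] = None  # None means "ask all followers"
        replies: Dict[str, ContactOffer] = field(default_factory=dict)

        def record_reply(self, offer: ContactOffer) -> None:
            # Only followers who explicitly opt in ever appear here, so
            # nobody else's information is touched in any way.
            self.replies[offer.follower_id] = offer

        def export(self) -> List[str]:
            """Return a simple list the creator can take to any other platform."""
            return [f"{o.follower_id}: {o.new_contact}" for o in self.replies.values()]

    # Example: a creator asks one circle; two followers choose to respond.
    request = MigrationRequest(creator_id="some_creator", circles=["photography"])
    request.record_reply(ContactOffer("follower_42", "https://example.social/@follower42"))
    request.record_reply(ContactOffer("follower_77", "follower77@example.com"))
    print("\n".join(request.export()))

The key design point is that only followers who affirmatively reply are ever included in the creator’s export, which is exactly why the data privacy exposure of such a tool would be negligible.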

Such a tool could be enormously beneficial to current G+ users, by providing them a simple means to help them stay in touch after G+’s demise in a couple of months. And if Google had announced such a tool, such a clear demonstration of concern about their existing users, rather than trying to wipe them off Google’s servers as quickly as possible and with a minimum of effort, this would have gone far toward proactively avoiding the many user trust concerns that have been triggered and exacerbated by Google’s current game plan for eliminating Google+.

That such a migration assistance tool doesn’t exist — which would have done so much good for so many loyal G+ users, among Google’s most fervent advocates until now — unfortunately speaks volumes about how Google really feels about us.

–Lauren–