An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

UPDATE (24 May 2023): A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

– – –

All, I am doing something in this post that I’ve never done before over these many years. I’m going to share with you an example of what Google account recovery failure means to the people involved, and this is by no means the worst such case I’ve seen — not even close, unfortunately.

I mentioned yesterday in my other venues how (for many years) I’ve routinely tried to informally help people with Google account recovery issues, because the process can be so difficult for many persons to navigate, and frequently fails. The announcement yesterday of Google’s inactive account deletion policy that I blogged about then:

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

triggered an onslaught of concerns that for a time made my blog inaccessible and even delayed inbound and outbound email processing.

I’m going to include below most of the text from messages I received today from one of my readers about a specific Google account recovery failure — and how that’s affecting a nearly 90-year-old woman. I’ll be anonymizing the message texts, and I’ve of course received permission from the sender to show you this.

Unfortunately, this example is all too familiar to me. It is entirely typical of the Google account recovery problems that Google users, so dependent on Google in their daily lives, bring to my attention in the hope that I might be able to help.

I’ve been discussing these issues with Google for many years. I’ve suggested “ombudspeople”, account escalation and appeal procedures that ordinary people could understand, and many other concepts. They’ve all basically hit the brick wall of Google suggesting that at their scale, nothing can be done about such “edge” cases. I disagree. In today’s regulatory and political environment, these edge cases matter more than ever. And I will continue to do what I can, as ineffective as these efforts often turn out to be. -L

 – – – Message Text Begins – – –

Hi Lauren, I tried to help a lovely neighbor (the quintessential “little old lady”) recently with her attempt to recover her legacy gmail account. We ultimately gave up and she created a second, new account instead. She had been using the original account forever (15+ years) and it was created so long ago that she didn’t need to provide any “recovery” contacts at that time (or she may have used a landline phone number that’s long been cancelled now). For at least the last decade, she was just using the stored password to login and check her email. When her ancient iPad finally died, she tried to add the gmail account to her new replacement iPad. However, she couldn’t remember the password in order to login. Because the old device had changed and she couldn’t remember the password and there was no back channel recovery method for her account, there was no way to login. I don’t know if you’ve ever attempted to contact a human being at google tech support, but it’s pretty much impossible. They also don’t seem to have an exception mechanism for cases like this. So she had to abandon hopes of viewing the google photos of her (now deceased) beloved pet, her contacts, her email subscriptions, reminders, calendar entries, etc.

I understand the desire to keep accounts secure and the need to reduce customer support expenses for a free service with millions of users. But it’s also frustrating for end users when there’s no way to appeal/review/reconsider the automated lockout. She’s nearly 90 years old, so I find it remarkable that she’s able to use the iPad. But it’s difficult to know what to say to someone like this when she asks “what can we do now” and there are no options…

I recognize that there are many different kinds of google users. Some folks (like journalists, dissidents, whistleblowers, political candidates, human rights workers, etc.) need maximum security for their communications (and their contacts). In these cases, it makes sense to employ multifactor authentication, end-to-end encryption, one time passwords, and other exceptional privacy and security features. However, there are a great many average users who find these additional steps difficult, frustrating and (esp. in the case of elderly people who aren’t necessarily very technology savvy), sometimes bewildering. It’s tough to explain that your treasured photos can’t be retrieved because you’re not the sort of user that google had in mind. Not everyone is a millennial digital native who finds this all obvious.

 – – – Message Text Ends – – –

–Lauren–

Potentially Serious Issues with Google’s Announced Inactive Accounts Deletion Policy

UPDATE (24 May 2023): A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

UPDATE (17 May 2023): An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

– – –

Google has announced that personal Google accounts that have been inactive for two years will be removed and all of their data deleted, after a number of emailed reminders:

https://blog.google/technology/safety-security/updating-our-inactive-account-policies/

Right now I’m only going to thumbnail some potentially serious issues with this policy. They deserve a much more detailed examination, which I will provide when I can. But there are many associated concerns that Google did not address publicly, and these matter enormously because Google is so much a part of so many people’s lives around the planet.

– Will account names become available for reissue after an account is deleted? Google policy historically has been that used account names are permanently barred from reissue. I am assuming that this is still the case, but I’d appreciate confirmation. This would be the best policy from a security standpoint, of course.

UPDATE (17 May 2023): I’ve now received confirmation from Google that account names will not be reissued after these account deletions. Good.

– Given the many ways that users can lose access to their Google accounts, including password and other authentication confusion, erroneous lockouts due to login location issues, and many other complexities of authentication and account recovery, I am not convinced that deleting user data after two years of inactivity is a wise policy. While keeping the data around forever is impractical, two years seems very short from a legal standpoint in an array of ways, even if routine user access is blocked after two years of inactivity. While many users locked out of their accounts simply create new accounts, many still have crucial data in those “trapped” accounts, and most users unfortunately do not use the “Takeout” facilities Google provides to download data while accounts are still active.

– The impact on user photos and public YouTube videos is of special concern. Many popular and important YouTube videos are associated with very old accounts that are likely effectively abandoned. The loss of these public videos from YouTube could be devastating.

UPDATE (17 May 2023): While their original announcement yesterday said that YouTube videos would be deleted when accounts were deleted under this policy, Google has responded to concerns about YouTube videos and has now made a statement that “At this time, we do not plan to delete accounts with YouTube videos.” Obviously this leaves some related open questions for the future, but is still great news.

– Many people use Google accounts for logging in to non-Google sites via federated login (“Login with Google”) mechanisms. While Google says these logins will continue to constitute activity, many of these accounts are likely fairly old, and their associated users may not have used them for anything directly on Google for years (including reading emails). If they also have not been logging on to those third party sites for extended periods, when they do try again they’re likely to be quite upset to find that the Google accounts necessary for access have been deleted.

I could go on but for now I just wanted to point out a few of the complex negative ramifications of Google’s policy in this regard, irrespective of their assertion that they’re meeting “industry standards” related to account retention and deletion. 

As it stands, I predict that a great many people are going to lose an enormous amount of data due to this Google policy — data that in many cases is very important to them, and in the case of YouTube, often important to the entire world.

–Lauren–

How Google Broke Chrome Bookmarks Sync

UPDATE (15 May 2023): And … about 48 hours after this original post, bookmarks started successfully syncing in full to my tablet, after months of failing totally (despite my many best efforts and every sync trick I know). Coincidence? Could be. But I’ll say “Thanks Google!” anyway.

– – – – – –

Greetings. Recently I asked around for suggestions to help figure out why (after trying all the obvious techniques) I could no longer get my Chrome bookmarks to sync to my primary Android 13 tablet.

Now, courtesy of a gracious #Mastodon user who pointed me at this recent article, I have the answer as to the why. But there’s no apparent fix. Bookmark sync is now broken for power users in significant ways:

https://www.androidpolice.com/google-chrome-bookmark-sync-limit/

In brief, Google appears to have imposed (either purposefully or not) an undocumented limit on the number of bookmarks permitted to be synced between devices. If you exceed that limit, usually NO bookmarks sync at all — you can end up with no bookmarks whatsoever on affected devices.

In my case, my Android 13 phone is still syncing all bookmarks correctly, while my tablet has no bookmarks, and shows the “count limit exceeded” error in chrome://sync-internals that the above article notes.

The article suggests that the new undocumented limit is 100K for desktops and 20K for mobile devices. It turns out that I have just over 57K bookmarks currently, so why the limit is exceeded on the tablet and not on the phone is a mystery. But having ZERO synced bookmarks on the tablet is a real problem.
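
For readers curious how close their own bookmark counts are to these reported limits: Chrome stores bookmarks as a JSON tree in a file named “Bookmarks” inside the browser profile directory, and a few lines of Python can count them. This is just a quick sketch, nothing official; the path below assumes a default Linux Chrome profile, so adjust it for your own OS and profile location:

    import json
    import pathlib

    # Default Linux location; macOS and Windows keep this file elsewhere.
    BOOKMARKS_FILE = pathlib.Path.home() / ".config/google-chrome/Default/Bookmarks"

    def count_urls(node):
        """Recursively count bookmark URLs in Chrome's JSON bookmark tree."""
        if node.get("type") == "url":
            return 1
        return sum(count_urls(child) for child in node.get("children", []))

    data = json.loads(BOOKMARKS_FILE.read_text(encoding="utf-8"))
    total = sum(count_urls(root) for root in data["roots"].values()
                if isinstance(root, dict))
    print(f"Total bookmarks: {total}")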

Yeah, there are third party bookmark managers and ways to create bookmark files that could be viewed statically, but the whole point of Chrome bookmark sync is keeping things up to date across all devices. This needs to work!

And if you feel that 57K bookmarks is a lot of bookmarks — you’re right. But I’ve been using Chrome since the first day of public availability, and my bookmarks are the road maps to my use of the Net. For them to just suddenly stop working this way on a key device is a significant problem.

I’d appreciate some official word from Google regarding what’s going on about this. Have they established new “secret” limits? Is this some sort of bug? (The error message suggests not.) Please let me know, Google. You know how to reach me. Thanks. 

–Lauren–

Big Tech Needs to Vastly Improve Their Public Communications — or Potentially Face a Political Train Wreck Over AI (and More)

In several of my recent posts:

The “AI Crisis”: Who Is Responsible?
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare
https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

and others in various venues, I have expressed concerns over the “perfect storm” that is now circling “Big Tech” from both sides of the political spectrum, with both Republicans and Democrats proposing (sometimes jointly, sometimes in completely opposing respects) “solutions” to various Internet-related issues — with some of these issues being real, and others being unrealistically hyped.

The latest flash point is AI — Artificial Intelligence — especially what’s called generative AI — publicly seen mainly as so-called AI chatbots.

I’m not going to repeat the specifics of my discussions on these various topics here, except in one respect.

For many (!) years I have asserted that these Big Tech firms (notably Google, but the others as well to one degree or another) have been negligently deficient in their public communications, failing to adequately ensure that ordinary non-technical people — and the politicians that they elect — understand the true nature of these technologies.

This means both the positive and negative aspects of tech. But the important point is that the public needs to understand the reality of these systems, and not be misguided by misinformation and often politically-biased disinformation that fill the information vacuum left by these firms, often out of a misguided and self-destructive fear of so-called “Streisand Effects”, which the firms are afraid will occur if they mention these issues in any depth.

It is clear that such fears have done continuing damage to these firms over the years, while robust public communications and public education — not looking down at people, but helping them to understand! — could have instead done enormous good.

I’ve long called for the hiring of “ombudspersons” or liaisons — or whatever you want to call them — to fill these important, particular communications roles. These need to be dedicated roles for this purpose.

The situation has become so acute that it may now be necessary to have roles specific to AI-related public communications to help avoid the worst of the looming public relations and political catastrophes that could decimate the positive aspects of these systems and, over time, seriously damage the firms themselves.

But far more importantly, it’s society at large that will inevitably suffer when politics and fear win out over a true understanding of these technologies — how they actually impact our world in a range of ways — again, both positive and negative, both now and into the future.

The firms need to do this now. Right now. All of the greatest engineering in the world will not save them (and us!) if their abject public communications failures continue as they have to date.

–Lauren–

The “AI Crisis”: Who Is Responsible?

There is a sense of gathering crisis revolving around Artificial Intelligence today — not just AI itself but also the public’s and governments’ reactions to AI — particularly generative AI.

Personally, I find little blame (not zero, but relatively little) with the software engineers and associated persons who are actually theorizing, building, and training these systems.

I find much more blame — and the related central problem of the moment — with some non-engineers (e.g., some executives at key levels of firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing a seemingly suddenly highly competitive race, in some cases apparently deemphasizing crucial ethical and real world impact considerations.

While this view is understandable in terms of human nature, that does not justify such actions, and I fear that governments’ reactions are heading toward a perfect storm of legislation and regulations that may be even more problematic than the premature release of these AI systems has been for these firms and the public. This could set back for years critical work in AI that has the potential to bring great benefits (and yes, risks as well — these both come together with any new technology) to the world.

By and large the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job at communicating the importance — and limitations! — of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation to gladden the hearts of political opportunists (both on the Right and the Left) around the planet.

If this doesn’t start changing for the better immediately, today’s controversies about AI are likely to look like firecrackers compared with nuclear bombs in the future. 

–Lauren–

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare

The new Utah Internet ID age laws signed today — and what other states and the feds are moving toward in the same realm — will destroy social media and much else of the Internet as we know it.

Vast numbers of people will refuse to participate in any government ID-based scheme for age verification, no matter how secure and compartmented it is claimed to be (e.g. through third-party verifiers).

Many persons, rightly concerned about basic privacy rights, already use different names and specify different birthdays on different sites, to avoid being subjected to horrific problems in the case of data breaches, and to avoid being tracked across sites discussing unrelated topics.

These government moves are clear steps on the way toward creating a Chinese-style Internet where every individual’s Internet usage is tracked and monitored by the government, creating a vast and continuous climate of fear, oppression, and government control.

–Lauren–

Giving Creators and Websites Control Over Generative AI

Seemingly overnight, the Internet is awash with controversies over Generative Artificial Intelligence (GAI) systems, and their potential positive and negative impacts on the Net and the world at large.

It also seems very clear that unless we (for once!) get ahead of the potential problems with this new technology that seem to be rushing toward us like a freight train, there could be some very tough times ahead for creators, websites, and ordinary Internet users around the world.

I’m not writing a tutorial here on GAI, but very briefly, it’s not the kind of “backend” AI system with which most of us are more familiar, used for research and modeling, sorting the order of search results and suggestions, and even the kinds of generally useful very brief “answers” we see as (for example) Google Knowledge Panels, featured snippets, or short Google Assistant answers (and the similar features of other firms’ products).

GAI is very different, because it creates (and this is a greatly simplified explanation) what appears to be (at least in theory) completely *new* content, based on its algorithms and the data on which it has been trained.

GAI can be applied to text, audio, imagery, video — pretty much everything we’ve come to associate with the Net. And already, serious problems are emerging — not necessarily unexpected at this early stage, but ones that we must start dealing with now or risk a maelstrom later.

GAI chatbots have been found to spew racist and other hateful garbage. The long-form answers and essays that are the stock-in-trade of many GAI systems can be beautifully written, appear knowledgeable and authoritative — but still be riddled with utterly incorrect information. This can be a hassle indeed even with purely technical articles that have had to be withdrawn as a result, but can get downright scary when they involve, as in one recent case, an article on men’s health issues.

There are more problems. GAI can easily create “fake” pornography targeting individuals. It can be used to simulate people’s voices for a range of nefarious purposes — or even potentially just to simulate the voices of professional voice actors without their permission.

Eventually, the kind of scenario imagined in the 1981 film “Looker” — where actors once scanned could be completely emulated by (what we’d now call) GAI systems — could actually come to pass. We’re getting quite close to this already in the film industry and the world of so-called deepfakes — the latter potentially carrying enormous risks for disinformation and political abuse.

All of this tends to point us mainly in one direction: How GAI is trained.

In many cases, the answer is that websites are crawled and their data used for GAI purposes, without the explicit permission of the creators of that data or the sites hosting it.

Since the beginning of Search on the Internet, there has been something of a largely unwritten agreement. To wit: Search engines spider and index sites to provide lists of search results to users, and in return those search engines refer users back to those original sites where they can get more information and find other associated content of interest.

GAI in Search runs the risk of disrupting this model in major ways. By presenting what appear to be largely original long-form essays and detailed answers to user search queries, it makes it far less likely that users will ever visit the sites that (often unknowingly) provided the GAI training data, even when links are present. Why would users bother visiting the sites that provided the data to the GAIs, if the GAIs have already completely answered those users’ questions?

Complicating this even further is that the outputs of some GAI systems appear to frequently include largely or even completely intact (or slightly reworded) stretches of text, elements of imagery, and other data that the GAI presents as if they were wholly original.

Creators and websites should be able to choose if and how they wish their data to be incorporated into GAI systems. 

Accomplishing this will be a complex undertaking, likely involving both technical and legislative aspects in order to be even reasonably effective, and will almost certainly always be a moving target as GAI systems advance.

But a logical starting point could be expansion of the existing Internet Robots Exclusion Protocol (REP — e.g. robots.txt, meta tags, etc.) currently used to express website preferences regarding search indexing and associated functions. While the REP is not universally adhered to today, major sites usually do follow these directives.

Indeed, even defining GAI-related directives for REP will be enormously challenging, but this could get the ball rolling at least.
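
For a sense of what the existing protocol can already express: today’s REP supports crude per-crawler opt-outs via user-agent tokens, which standard libraries can evaluate. Any GAI-specific directives would need to go well beyond this simple allow/disallow model. Here’s a minimal Python sketch, with the crawler name “ExampleGAIBot” being purely hypothetical and used only for illustration:

    from urllib import robotparser

    # Fetch and parse a site's robots.txt using Python's standard library.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # "ExampleGAIBot" is a hypothetical user-agent token, not a real crawler.
    if rp.can_fetch("ExampleGAIBot", "https://example.com/some/article"):
        print("Site permits this crawler to fetch the page.")
    else:
        print("Site has opted this crawler out; its data should be off-limits.")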

We need to immediately start the process of formulating the control methodologies for what training data Generative Artificial Intelligence systems are permitted to use, and the manners in which they do so. Failure to begin considering these issues risks enormous backlash against these systems going forward, which could render many of their potential benefits moot, to the detriment of everyone.

–Lauren–

2023 and Social Media’s Winds of Change

Greetings. The last hours and minutes of 2022 are ticking off, and we’re all being drawn inexorably into the new year and even deeper into the 21st century.

In my previous post of early October — Social Media Is Probably Doomed — I discussed various issues that call into question the ability of social media as we’ve known it to continue for much longer. Since then we’ve seen the massive chaos at Twitter when Musk took over, the rapid rise of distributed social media ecosystem Mastodon, and an array of other new confounding factors that make this analysis notably more complex and less deterministic. 

It’s perhaps interesting to note that only a year ago, pretty much nobody had predicted that Elon Musk would — voluntarily, single-mindedly, and over such a short period of time — have reinvented himself as a pariah to a large segment of his customers and the public at large, and be in a position to remake Twitter in the image of the very worst that social media can offer.

The lessons that we can draw from this are many, beyond the obvious ones such as that dramatic, abrupt changes in the tech world — and broader society — should be considered more the norm than the exception, especially in our current toxic political environment.

And it’s important to note that no technology — nor the persons who develop, deploy, operate, or use it — is immune from such disruptions.

This includes Mastodon of course. And while the distributed nature of this ecosystem perhaps provides some additional buffering against the kinds of sudden changes that more centralized services are prone to, that does not suggest invulnerability to many of the same kinds of problems plaguing other social media, despite best intentions.

And this is definitely not to assert that blindly attempting to resist changes is the proper course. In fact, *not* being willing to appropriately evolve with a massive growth in the quantity of users — especially as increasingly more nontechnically-oriented persons arrive — is likely lethal to a social media ecosystem in the long run.

As we stand on the cusp of 2023, there is immense potential in Mastodon and other distributed social media models. But there are also enormous risks — fear of change being among the most prominent and potentially negatively impactful of these.

Given all that’s happening, I suspect that this coming year will be a crucial turning point for social media in many ways — both technical and nontechnical in scope.

We can try to hold back the winds of change in these regards, or we can endeavor to harness them for the good of all. That, my friends, is not the choice of technology itself, it is solely up to us.

All the best to you and yours for a great 2023. Happy New Year!

–Lauren–

Social Media Is Probably Doomed

UPDATE (31 December 2022): 2023 and Social Media’s Winds of Change

– – – – – –

Social media as we’ve known it is probably doomed. Whether a decline in social media would on balance be good or bad for society I’ll leave to another discussion, but the handwriting is on the wall for a major decline in social media overall.

As with most predictions, the timing and other details will surface in coming months and years, but the overall shape of things to come is not terribly difficult to visualize.

The fundamental problem is also clear enough. A vast range of entities at state, federal, and international levels are in the process of enacting, invoking, or otherwise planning a range of regulatory and other legal mandates that would apply to social media firms — with many of these requirements being in direct and total opposition to each other.

The most likely outcome from putting these firms “between a rock and a hard place” will be a drastic reduction of social media services provided, resulting in a massive decrease in ordinary persons’ ability to communicate publicly, rather than the increase that various social media critics have been anticipating.

Let’s very briefly review just some of the factors in the mix:

The political Right in the U.S. generally wants public postings to stay up, even if they contain racist or other hate speech or misinformation/disinformation. This is the outline of the push from states like Texas and Florida. Meanwhile, the Left and other states like California want more of the same sort of postings taken down even faster than they are now. Unless you can somehow provide different feeds on a posting by posting basis to users in different states (and what of VPN usage from other areas?), this creates an impossible situation.

Both the Left and Right hate Section 230, but for opposite reasons, relating to my point just above. Even the Biden White House has this wrong, arguing that cutting back 230 protections would force social media firms to more tightly moderate content, when in reality tampering with 230 would make hosting most UGC (User Generated Content) far too risky.

Elon Musk has proposed that Twitter carry any postings that aren’t explicitly illegal or condoning violence. This suggests an increase in the kind of hate speech and disinformation that not only drives away many users, but also tends to cause enormous problems for potential advertisers and network infrastructure providers, who usually do not want to be associated with such materials. And then of course there’s the EU — which has its own requirements (much more robust than in the U.S.) for dealing with hate speech and misinformation/disinformation.

There are calls to strip Internet users of all anonymity, to require use of real names (tied to official IDs, perhaps through some third party mechanisms) based on the theory that this would reduce hate speech and other attack speech. Yet studies have shown that such abhorrent speech continues to flower even when real names are used, while forcing real names causes already marginalized persons and groups to be even further disadvantaged, often in dangerous ways. Is there a middle ground on this? Perhaps requiring IDs be known to a third party (in case of abuse) before posting to large numbers of persons is permitted, but still permitting the use of pseudonyms for those postings? Maybe, but it seems like a long shot. 

Concerns over posting of terrorist content, live streaming of shootings, and other nightmarish postings have increased calls for pre-moderation of content before it goes public. But at the massive scale of the large social media firms, it’s impossible to see how this could be practical, for a whole range of reasons, unless the amount of content permitted from the public were drastically reduced.

And this is just a partial list. 

Social media can’t operate on any reasonable basis, or retain real value and practicality, when every state and every country may demand a different and conflicting set of rules. While there are certainly some politicians and leaders who do understand these issues in considerable depth, many others don’t worry about whether their technical demands are practical or what the collateral damage would be, only whether they’re good for votes come the next election.

And now we reach that part of this little essay where I’m expected to announce my preferred solution to this set of problems. Well dear readers, I’ve got nothing for you. I don’t see any practical solutions for these dilemmas. The issues are in direct conflict and opposition, and there is no obvious route toward their reconciliation or harmonization. 

So I can do little more here than push the needle into the red zone, sound the storm warnings, and try to point out that the paths we’re taking — absent some almost unimaginable changes in the current patterns — are rocketing us rapidly toward a world of social media that will likely briefly flare brightly and then go dark, like an incandescent light bulb at the end of its life, turned on just one too many times.

This analogy isn’t perfect of course, and there will continue to be some forms of social media under any circumstances. But the expected experience seems most likely to become increasingly constrained over time, along with all other aspects of publicly accessible user-provided materials — the incredible shrinking content.

As I said earlier, nobody knows how long this process will take. It won’t happen overnight. But we’ll have taken the path into this wilderness of our own free will, eyes wide open.

Please don’t forget to turn off the lights on your way out.

–Lauren–

How to Fix Google’s Gmail Political Spam Bypass Plan

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

– – – – – –

Recently in Google’s Horrible Plan to Flood Your Gmail with Political Garbage I discussed Google’s plan to permit “official” political emails to bypass Gmail spam filters, with users able to opt out of this bypass only on a sender-by-sender basis as political emails arrive. So as new “official” political senders proliferate, this will be a continuing unwanted exercise for most Gmail users.

The Federal Election Commission has now posted a draft decision that effectively gives Google a go-ahead for this plan (UPDATE: 11 August 2022: The FEC has now officially approved the plan). The many comments received by the FEC regarding this proposal were overwhelmingly negative (it was difficult to find any positive comments at all), but the FEC is only ruling on the technical question of whether such a plan would represent prohibited in-kind political contributions.

My view is that Gmail users should be able to opt out of this entire political spam bypass plan if that is their choice. Political emails would in that case continue going into those individual users’ spam folders to the same extent that they do now.

My specific recommendation:

The first time that a political email arrives for a Gmail user that would bypass spam filtering under the Google plan, the Gmail user would be presented with a modal query with words to this effect (and yes, wording this properly will be nontrivial):

Do you want official political emails to arrive in your Gmail inbox rather than any of them going to your spam folder, unless you indicate otherwise regarding specific political email senders? You can change this choice at any time in Gmail Settings.
(TELL ME MORE)
YES
NO

There is no “default” answer to this query. Users must choose either YES or NO to proceed (with the TELL ME MORE choice branching off to an explanatory help page).

This is a matter of showing respect to Gmail users. The political parties do not own Gmail users’ inboxes, but users who are concerned about missing political emails that might otherwise go to the spam folder would be able to participate in this program, while other users would not be forced into participation against their wills.

Of course this will not satisfy some politicians who incorrectly assume that so much political email ends up in spam due to a claimed political bias against them by Google. In fact, Google applies no political bias at all to Gmail — so much political email ends up in spam precisely because that’s where most Gmail users want it to be.

Google is between the proverbial rock and a hard place on this matter, but I’m asking Google to side with their users. I’d prefer that the Gmail political spam bypass plan not be deployed at all, but if it’s going to happen then let’s give Google’s users a choice to participate or not, right up front.

It’s the Googley thing to do.

–Lauren–

Google’s Horrible Plan to Flood Your Gmail with Political Garbage

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

UPDATE (11 August 2022): The Federal Election Commission has now officially approved this Google plan.

UPDATE (3 August 2022): How to Fix Google’s Gmail Political Spam Bypass Plan

UPDATE (3 August 2022): A Federal Election Commission Draft APPROVES this plan. See: https://www.fec.gov/files/legal/aos/2022-14/202214.pdf

UPDATE (19 July 2022): Public comments on this proposal can now be viewed on the Federal Election Commission site.

UPDATE (14 July 2022): The Federal Election Commission today extended the public comment period for this issue from a deadline of July 16 to a new ending date of August 5th. I have updated this post accordingly.

– – – – – –

Google is backed into a corner, and Google’s attempt to get out of this corner could be very bad for Gmail users. You have just a few weeks remaining to make your opinion known about this. Please read on.

While Google studiously avoids political bias, the GOP has been bitching for ages with the ludicrous claim that Google is purposely directing GOP political emails into Gmail users’ spam folders. The GOP asserts that Google directs more political emails from Republicans than from Democrats into the spam jail, and that this is because (the GOP claims) Google hates Republicans. 

Not true. The reason more GOP political emails end up in spam is that spam is exactly where most Gmail users want those emails to be.

While both Democrats and Republicans are guilty of sending unwanted, unsolicited political emails, the fact is that Republicans send more in quantity, and they tend to be more insidious, including traps like automatic recurring payments after supposedly one-time donations, and claims (like repeating Trump’s Big Lie about the 2020 election) that are misleading at best and often ludicrous and dangerous. This crap deserves to be in spam.

In an attempt to get out from under what are mostly GOP complaints, Google has asked the Federal Election Commission for approval for a plan to make emails from authorized candidate committees, political party committees and leadership political action committees registered with the FEC exempt from spam detection, as long as they abide by Gmail’s rules on phishing, malware and illegal content.

There’s stuff in there about notifying users the first time that they get one of these emails from a campaign so that they can (supposedly) opt-out and other details. It doesn’t matter. This plan will bury many Gmail users under a mountain of stinking swill. 

Google’s plan will never work, for a couple of reasons.

One is that campaign and other political mailings multiply and spread like a hideous plague. I’ve had the unpleasant experience of helping a Gmail user clean up the mess created when they subscribed to a single political website, in this case, yes, a Trump site that later was found to be soliciting funds for one purpose but actually using them for something else entirely. Big surprise, huh? 

In almost no time at all, this had metastasized into political mailings from affiliated groups spouting lies and begging for money, mixed in with all manner of political-appearing phishing attempts and other scams. These were showing up in his Gmail literally every few minutes. An utter nightmare. This doesn’t happen only with the GOP — though they’re the larger culprit in this saga.

The second reason that the Google plan will fail is that it will never satisfy the GOP. They’ve already proposed legislation that would make it illegal to send political email into spam. They want you to see all of it, every single word, whether you want to see it or not, whether you ever asked to see it or not.

The bottom line is that the Google plan will result in your Gmail inbox being flooded with unsolicited political garbage that you’ll need to sort through and try (good luck!) to unsubscribe from. Whether you’re a Democrat, a Republican, an Independent, or something else entirely, this probably isn’t how you really want to be spending your days.

Again, I realize that Google has been unfairly forced into this position, but that can’t and doesn’t give this plan a pass.

The Federal Election Commission is now allowing for public comments until August 5th regarding this terrible idea. You can email your comments to:

ao@fec.gov

Please note that such emails may become part of the publicly inspectable record related to this issue.

It’s been many years since I’ve seen a worse proposal related to email spam, and it’s very unfortunate that Google has been forced into this situation. But that’s where we are, so speak now or forever hold your peace.

–Lauren–

My Thoughts About Google’s New Blog Post Regarding Health-Related Data Privacy

In my very recent post:

“Internet Users’ Safety in a Post-Roe World”

I expressed concerns regarding how Internet and telecommunications firms would protect women’s and others’ data in a post-Roe v. Wade world of anti-abortion states’ health data demands.

Google has now briefly blogged about this, at:

“Protecting people’s privacy on health topics”

The most notable part of the Google post is the announcement of this important change:

“Location History is a Google account setting that is off by default, and for those that turn it on, we provide simple controls like auto-delete so users can easily delete parts, or all, of their data at any time. Some of the places people visit — including medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others — can be particularly personal. Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.”

I definitely endorse this change, which aligns with the suggestions in my above referenced blog post regarding handling of sensitive location data. Thank you Google for taking this crucial action. This is an excellent start.

However, not yet publicly addressed by Google are the issues I noted regarding how these sensitive topics in search histories (both as stored by Google itself and/or on browsers) could also be abused by anti-abortion states hell-bent on pursuing women and others as part of those states’ extremist agendas, including in many instances abortion bans without exceptions for rape and incest.

Again, I praise Google for their initial step regarding location data, but there’s much more work still to do!

–Lauren–

Social Media Sites Should Be Required to ID Many Users

Greetings. I write the following with no joy whatsoever.

I have reluctantly come to the conclusion that it may be necessary to legislate that any social media user who wishes to have their posts seen by more than a small handful of users will need to be authenticated by any (significantly-sized) sites, using government IDs.

This identification information would be retained by the firms so long as the users are active and for some specified period afterwards. Users would *not* be required to use their real names for posts, but the linkages to their actual IDs would be available to authorities in cases of abuse under appropriate, precisely defined circumstances, subject to court oversight. 

This would include situations where a post may be forwarded to larger audiences by others, which will be a technical challenge to implement.

The ability to reach large audiences on today’s Internet should be a privilege, no longer a right.

It is very sad that it has come to this.

–Lauren–

Internet Users’ Safety in a Post-Roe World

UPDATE (1 July 2022): My Thoughts About Google’s New Blog Post Regarding Health-Related Data Privacy

UPDATE (24 June 2022): As expected, the U.S. Supreme Court today overturned Roe v. Wade, bringing the issues discussed below into immediate focus.

TL;DR: By no later than early this July, it is highly probable that a nearly half-century nationwide precedent providing women with abortion-related protections will be partly or completely reversed by the current U.S. Supreme Court (SCOTUS). This sea change, especially impacting women’s rights but with even broader implications now and into the future, would immediately and dramatically affect many policy and operational aspects of numerous important Internet firms. Unless effective planning for this situation takes place imminently, the safety of women, the well-being of Internet users more generally, and crucial services of these firms themselves will in all likelihood be at risk in critical respects.

– – – – – –

Since the recent leak of a SCOTUS draft decision that would effectively eliminate the national protections of Roe v. Wade, and subsequent remarks by some of the associated justices, it is now widely assumed that within a matter of days or weeks a partial or total reversal of Roe will revert the vast majority of abortion-related matters back to the individual states. 

Many politicians and states have already indicated their plans to immediately ban most or even all abortions, including in some cases those related to rape and incest, and even those to preserve the health of the woman, with only narrow exceptions even to save mothers’ lives. Some of these laws may effectively criminalize miscarriages. Some may introduce both civil and criminal penalties related to abortion, possibly bringing homicide or murder charges against involved parties, potentially including the pregnant women. 

Various states plan to try extending their bans and civil/criminal penalties to include anyone who “participates” in making abortions possible, even if they are in other states, as when a woman travels to a different state for an abortion (the legality of one state attempting to impact actions in another state in this manner is unclear, but with today’s SCOTUS no possibilities can be safely ignored). Actions by some states to ban obtaining, ordering, or providing various abortion drugs are also already being enacted. Note that SCOTUS has to date permitted the Texas mechanism for suing abortion providers to continue, a mechanism which has largely blocked abortions in that state.

“Trigger laws” already in place in some states along with the statements of state legislators indicate that near total or total abortion bans will immediately become law in various states if the anticipated SCOTUS decision is announced. 

Anti-abortion and affiliated factions are already planning — using the reasoning of the expected SCOTUS decision as a foundation — for follow-up actions pushing for national abortion bans, limits on contraception, banning gay marriage, rolling back LGBTQ+ rights, and related activities. U.S. Senate Republican Leader Mitch McConnell has recently proclaimed that a nationwide abortion ban is possible if the GOP retakes the House, Senate, and presidency. 

These events are creating what could become an existential threat to many Internet users and to key aspects of many Internet firms’ policy and operational models.

Given the sweeping and unprecedented scope of the oppressive laws that would be unleashed on pregnant women and anyone else who becomes involved with their healthcare, especially given the civil and even criminal penalties being written into these laws, it seems inevitable that demands for access to data in the possession of many Internet and telecommunications firms relating to user activities will drastically increase.

Search histories (both server and browser) and potentially even stored email data could be sought looking for queries about abortion services, abortion drugs, and numerous other related topics. Location data (both targeting specific users, and data from broader geofence warrants associated with, for example, abortion providers) could be demanded. A range of other resulting data demands are also highly probable. It is also expected that there would be even more calls for government-mandated backdoors into end-to-end encrypted messaging systems.

Women may put their health and lives at risk by not seeking necessary health services, for fear of these abortion laws. Women’s partners, other family members, friends, associates, and healthcare providers may reasonably believe that their livelihoods or freedom may be compromised if they are found to be providing or aiding in any manner with abortion services.

Many users may cease using Internet and various telecommunications services in the manners that they previously would have, out of concerns that their related activities and other data could ultimately fall into the hands of state or other officials, and then be used to track and potentially prosecute them under these abortion-related laws.

This situation is a Trust & Safety emergency of the first order for all of these firms.

While some firms already provide users a range of search/location history control tools, I would assert that most users do not understand them and are frequently unaware of how they are actually configured.

I believe that the best mechanism at this time to help protect women and affiliated others who would be victimized by these state actions is to not save the associated data in the first place, unless a user decides that they desire to have that data saved.

One possibility would be for these firms to proactively offer users the option to not save (or alternatively, very quickly expunge) their search, location, and other user activity data associated with abortion and important related issues — both on company servers, and within browser histories if practicable. Users who wished to have any of these categories of data activity saved as before could choose not to exercise this option.

Unfortunately, a database of users who opt out of having this data saved may itself be an attractive data demand target by parties who may assume that it mainly represents individuals attempting to hide activities related to abortions. This possibility may argue for the preferred default behavior being to not save this data, and offering users the option of saving it if they so choose.

While these changes could be part of a desirable broader effort to give users more control over which specific aspects of their “personally sensitive” activity data are saved, this would of course be a significantly larger project, and time is of the essence given the imminent SCOTUS ruling. 

Obviously I am not here addressing the detailed legal considerations or potential technical implementation challenges of the proposals above, and there may exist other ways to quickly ameliorate the risks that I’ve described, though practical alternatives are not obvious to me at present.

However, I do feel strongly that the status quo regarding user activity data in a post-Roe environment could create a nightmarish situation for many women and other Internet users, and be extraordinarily challenging for firms from Trust & Safety and broader policy and operational aspects. 

I strongly recommend that actions be taken immediately to protect Internet users from the storm that will likely arrive very shortly indeed.

–Lauren–

Big Tech and the Internet Are Not Our Enemies

It seems like only a few years ago, the entire world was enamored of Big Tech and the Internet — and pretty much everyone was trying to emulate their most successful players. But now, to watch the news reports and listen to the politicians, the Internet and Big Tech are Our Enemies, responsible for everything from mass shootings to drug addiction, from depression to child abuse, and seemingly most other ills that any particular onlooker finds of concern in our modern world.

The truth is much more complex, and much more difficult to comfortably accept. For the fundamental problems we now face are not the fault of technology in any form, they are fully the responsibility of human beings. That is, as Pogo famously said, “We have met the enemy, and he is us.”

What’s more, most users of social media and other Internet services don’t realize how much they have to lose as a result of the often politically motivated faux “solutions” being proposed (and in some cases already passed into law) that could literally cripple many of the sites that billions of us have come to depend upon in our daily lives.

Hate speech, for example, was not invented by the Internet. While it can certainly be argued that social media increased its distribution, the intractable nature of the problem is clearly demonstrated by calls from the Right to leave most hate speech available as legal speech (at least in the U.S. — other countries have different legal standards regarding speech), while the Left (and many other countries) want hate speech removed even more rapidly. Both sides propose draconian penalties for failures to comply with their completely opposite demands.

In the U.S., some states have already passed laws explicitly prohibiting Big Tech from removing wide ranges of speech, much of which would be considered hateful and/or outright disinformation. These laws are currently unenforced due to court actions, though not yet on any permanent basis.

The utter chaos that would be triggered by enforcement of such laws and associated attempts to undermine crucial Communications Decency Act Section 230 is obvious. If firms are required by law not to remove speech that they consider to be dangerous misinformation or hate speech, they will almost certainly find themselves cut off from key service providers that they need to stay in operation, who won’t want to keep doing business with them. Perhaps laws would then be passed to try to require that those providers not cut off social media firms in such cases. But what of advertisers who do not wish to be associated with vile content? Laws to force them to continue advertising on particular sites are unlikely in the extreme.

Similar dilemmas apply to most other areas of Big Tech and the Internet that are now the subject of seemingly endless condemnation. There are calls for end-to-end encryption of chat systems and other direct messaging to protect private conversations from outside surveillance and tampering — but there are simultaneously demands that governments be able to see into these conversations to try to detect child abuse or possible mass shooter events before they occur. Another enormous category of conflicting demands will arise as the U.S. Supreme Court drastically scales back fundamental protections for women.

Even if encryption were banned (a ban that we know would never be anywhere near 100% effective), the sheer scale of the Internet in general, and of social media in particular, are such that no currently imaginable combination of human beings and artificial intelligence could usefully scan and differentiate false positives from genuine threats among the nearly inconceivably enormous volumes of data involved. False positives have real costs — they divert scarce resources from genuine threats where those resources are desperately needed.

Big Tech now finds itself firmly between the proverbial rock and the hard place. Governments, politicians, and others aren’t only demanding changes that in many cases are in 180 degree opposition (“Take down violating posts faster! No, leave them up — taking them down is censorship!”), but are also calling for technologically impractical approaches to monitoring social media (both public postings and private messages/chats) at scale. Many of these demands would lead inevitably to requiring virtually all social media posts to be pre-moderated and pre-approved before being permitted to be seen publicly. Every public post. Every private chat. Every live stream throughout the totality of its existence.

Only in such or similar ways could social media firms meet the demands being heaped upon them, even if the inherent conflicts in demands from different groups and political factions could somehow be harmonized, even leaving aside associated privacy concerns.

But this is actually entirely academic at the kinds of scales at which users currently post to social media. Such pre-moderation is not possible in any kind of effective way without drastically reducing the total volume of user content that is made available.

This would leave Big Tech with only one likely practical path forward. Firms would need to drastically and dramatically reduce the amount of UGC (User Generated Content) that is submitted and publicly posted. All manner of postings — written, video, audio, prerecorded content and live streams, virtually everything that any user might want other users to see, would need to be curtailed. A tiny percentage compared with what is seen today might continue to be publicly surfaced after the required pre-moderation, but this would be a desert ghost town compared to today’s social media landscape.

There are some observers who upon reading this might think to themselves, “So what? To hell with social media! The Internet and the world will be better without it.” But this is fundamentally wrong. The ability of ordinary people to communicate with many others — without having to channel through traditional mass media gatekeepers — has been one of the most essential liberating aspects of the Internet. The appropriate responses to the abusive ways that some persons have chosen to use these capabilities do not include permitting governments to decimate a crucial aspect of the Internet’s empowerment of individuals.

Ultimately might governments expand their monitoring edicts to include email? Will attempts to ban VPNs become mainstream around the planet? There’s no reason to assume that governments demanding mass data surveillance would ultimately hesitate in any of these respects.

Of course, if this is what voters really want, it’s what their politicians will likely provide them. Possible alternatives that might help to limit some abuses — one suggestion at least worth discussing is requiring social media firms to confirm the identities of users posting to large groups before such postings are visible — may not be seriously considered. We shall see.

Unfortunately, most users of the Internet and social media are ill-informed about the realities of these situations. Most of what they are seeing on these topics is political rhetoric devoid of crucial technological contexts. They are purposely kept uninformed regarding the ramifications of the false “remedies” that some politicians and haters of Big Tech are spewing forth daily.

We are on the cusp of having major parts of our daily lives seriously disrupted by political demands that would cause many of the services on the very sites that are so important to us all to wither away.

–Lauren–

How to Better Solve YouTube’s “Dislike Count” Problem

The controversy over the recently announced decision by YouTube to remove publicly viewable “Dislike” counts from all videos is continuing to grow. Many YT creators feel that the loss of a publicly viewable Like/Dislike ratio will be a serious detriment. I know that I consider that ratio useful.

There are some good arguments by Google/YouTube for this action, particularly relating to harassment campaigns targeting the Dislikes on specific videos. However, I believe that YouTube has gone too far in this instance, when a more nuanced approach would be preferable.

In particular, my view is that it is reasonable to remove the publicly viewable Dislike counts from videos by default, but that creators should be provided with an option to re-enable those counts on their specific videos (or on all of their videos) if they wish to do so.

With YouTube removing the counts by default, YouTube creators who are not aware of these issues will be automatically protected. But creators who feel that showing Dislike counts is good for them could opt to display them. Win-win!
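For what it’s worth, here’s a minimal sketch of how such a default-off, opt-in setting might be modeled. All of the names here are hypothetical; this is not YouTube’s actual data model or API:

```python
# A minimal sketch of the proposed default-off, creator opt-in setting.
# All names here are hypothetical -- this is not YouTube's actual API.
from dataclasses import dataclass, field

@dataclass
class VideoSettings:
    # Dislike counts hidden by default, protecting creators who never
    # visit these settings at all.
    show_dislike_count: bool = False

@dataclass
class CreatorSettings:
    # A channel-wide opt-in that creators can flip for all their videos...
    show_dislike_counts_everywhere: bool = False
    # ...plus per-video overrides for finer control.
    per_video: dict = field(default_factory=dict)  # video_id -> VideoSettings

def dislikes_visible(creator: CreatorSettings, video_id: str) -> bool:
    override = creator.per_video.get(video_id)
    if override is not None:
        return override.show_dislike_count
    return creator.show_dislike_counts_everywhere
```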

–Lauren–

Apple Backdoors Itself

UPDATE (September 3, 2021): Apple has now announced that “based on feedback” they are delaying the launch of this project to “collect input and make improvements” before release.

– – –

Apple’s newly revealed plan to scan users’ Apple devices for photos and messages related to child abuse is actually fairly easy to explain from a high-level technical standpoint.

Apple has abandoned their “end-to-end” encrypted messaging promises. They’re gone. Poof! Flushed down the john. Because a communication system that supposedly is end-to-end encrypted — but has a backdoor built into user devices — is like being sold a beautiful car and discovering after the fact that it doesn’t have any engine. It’s fraudulent.
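To illustrate the architectural point, here’s a deliberately simplified sketch; it is emphatically not Apple’s actual design, but it shows why any device-side scanning hook defeats the end-to-end guarantee:

```python
# Deliberately simplified sketch -- NOT Apple's actual design -- showing
# why a device-side scanning hook defeats the end-to-end guarantee: the
# plaintext is inspected, and matches reported, *before* encryption ever
# happens, so encryption no longer protects the user from the vendor.
import hashlib

# Stand-in for a vendor-supplied table of hashes of "targeted" content.
# Once such a hook exists, nothing technically limits what governments
# might compel the vendor to add to this table in the future.
VENDOR_HASH_LIST = {hashlib.sha256(b"example-targeted-content").hexdigest()}

def report_to_vendor(plaintext):
    print("match reported to vendor -- before any encryption occurred")

def send_message(plaintext, encrypt):
    if hashlib.sha256(plaintext).hexdigest() in VENDOR_HASH_LIST:
        report_to_vendor(plaintext)        # runs on the user's own device
    return encrypt(plaintext)              # "end-to-end" happens only after

# The "encryption" below is just a placeholder; the scan above is the point.
send_message(b"example-targeted-content", encrypt=lambda p: p[::-1])
```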

The depth of Apple’s betrayal of its users lies not specifically in the context of dealing with child abuse — which we all agree is a very important issue indeed — but in the fact that by building any kind of backdoor mechanism into their devices, they’ve opened the legal door for courts and other government entities around the world to make ever broader demands for secret, remote access to the data on your Apple phones and other devices. And even if you trust your government today with such power — imagine what a future government in which you have less faith may do.

In essence, Apple has given away the game. It’s as if you went into a hospital to have your appendix removed, and when you awoke you learned that they also removed one of your kidneys and an eye. Surprise!

There is no general requirement that Apple (or other firms) provide end-to-end crypto in their products. But Apple has routinely proclaimed itself to be a bastion of users’ privacy, while simultaneously being highly critical of various other major firms’ privacy practices. 

That’s all just history now, a popped balloon. Apple hasn’t only jumped the shark, they’ve fallen into the water and are sinking like a stone to the bottom.

–Lauren–

Keep Governments Away from Social Media “Misinformation Control”

As the COVID “Delta” variant continues its spread around the globe, the Biden administration has deployed something of a basketball-style full-court press against misinformation on social media sites. That its intentions are laudable is evident and not at issue. Misinformation on social media and in other venues (such as various cable “news” channels) definitely plays a major role in vaccine hesitancy — though it appears that political and peer allegiances play a significant role as well, even for persons who have accurate information about the available vaccines.

Yet good intentions by the administration do not always translate into optimal statements and actions, especially in an ecosystem as large and complex as social media. When President Biden recently asserted that Facebook is “killing people” (a statement that he later walked back), it raised many eyebrows both in the U.S. and internationally.

I implied above that the extent to which vaccine misinformation (as opposed to or in combination with other factors) is directly related to COVID infections and/or deaths is not a straightforward metric. But we can still certainly assert that Facebook has traditionally been an enormous — likely the largest — source of misinformation on social media. And it is also true, as Facebook strongly retorted in the wake of Biden’s original remark, that Facebook has been working to reduce COVID misinformation and increase the viewing of accurate disease and vaccine information on their platform. Other firms such as Twitter and Google have also been putting enormous resources toward misinformation control (and its subset of “disinformation” — which is misinformation being purposely disseminated with the knowledge that it is false).

But for those both inside and outside government who assert that these firms “aren’t doing enough” to control misinformation, there are technical realities that need to be fully understood. And key among these is this: There is no practical way to eliminate all misinformation from these platforms. It is fundamentally impossible without preventing ordinary users from posting content at all — at which point these platforms wouldn’t be social media any longer.

Even if it were possible for a human moderator (or humans in concert with automated scanning) to pre-moderate every single user posting before permitting it to be seen and/or shared publicly, differences in interpretation (“Is this statement in this post really misinformation?”), errors, and other factors would mean that some misinformation is bound to spread — and that can happen very quickly, and in ways that would not necessarily be easily detected either by human moderators or by automated content scanning systems. But this is academic. Without drastically curtailing the amount of User Generated Content (UGC) being submitted to these platforms, such pre-moderation models are impractical.
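A back-of-envelope calculation makes the point. Every figure below is an assumption chosen purely for illustration, not actual platform data:

```python
# Back-of-envelope pre-moderation arithmetic. Every figure here is an
# assumption chosen purely for illustration -- not actual platform data.
posts_per_day        = 1_000_000_000   # assumed daily user postings, platform-wide
seconds_per_review   = 30              # assumed time for one careful human review
work_seconds_per_day = 8 * 3600        # one moderator's full workday

review_seconds = posts_per_day * seconds_per_review
moderators_needed = review_seconds / work_seconds_per_day
print(f"{moderators_needed:,.0f} full-time moderators")  # ~1,041,667

# Even if automation pre-filtered 99% of posts, the remaining 1% would
# still demand over 10,000 full-time human moderators -- every single day.
```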

Some other statements from the administration also triggered concerns. The administration appeared to suggest that the same misinformation standards should be applied by all social media firms — a concept that would obviously eliminate the ability of the Trust & Safety teams at these firms to make independent decisions on these matters. And while the administration denied that it was dictating to firms what content should be removed as misinformation, it did say that it was in frequent contact with firms about perceived misinformation. Exactly what that means is uncertain. The administration also said that a short list of “influencers” was responsible for most misinformation on social media — though it wasn’t really apparent what the administration would want firms to do with that list. Disable all associated accounts? Watch those accounts more closely for disinformation? I certainly don’t know what was meant.

But the fundamental nature of the dilemma is even more basic. For governments to become involved at all in social media firms’ decisions about misinformation is a classic slippery slope, for multiple reasons.

Even if government entities are only providing social media firms with “suggestions” or “pointers” to what they believe to be misinformation, the outsized influence that these could have on firms’ decisions cannot be overestimated, especially when some of these same governments have been threatening these same firms with antitrust and other actions.

Perhaps of even more concern, government involvement in misinformation content decisions could potentially undermine the currently very strong argument that these firms are not state actors subject to First Amendment constraints, and so are able to make their own decisions about what content they will permit on their platforms. Loss of this crucial protection would be a big win for those politicians and groups who wish to prevent social media firms from removing hate speech and misinformation from their platforms. So ironically, government involvement in suggesting that particular content is misinformation could end up making it even more difficult for these firms to remove misinformation at all!

Even if you feel that the COVID crisis is reason enough to endorse government involvement in social media content takedowns, please consider for a moment the next steps. Today we’re talking about COVID misinformation. What sort of misinformation — there’s a lot out there! — will we be talking about tomorrow? Do we want the government urging content removal about various other kinds of misinformation? How do we even define misinformation in widely different subject areas?

And even if you agree with the current administration’s views on misinformation, how do you know that you will agree with the next administration’s views on these topics? If you want the current administration to have these powers, will you be agreeable to potentially a very different kind of administration having such powers in the future? The previous administration and the current one have vastly diverging views on a multitude of issues. We have every reason to expect at least some future administrations to follow this pattern.

The bottom line is clear. Even with the best of motives, governments should not be involved in content decisions involving misinformation on social media. Period.

–Lauren–

We Have Met the Ransomware Enemy, and It Is (Partly) Us!

Ransomware is currently a huge topic in the news. A crucial gasoline pipeline shuts down. A major meat processor is sidelined. It almost feels as if new ransomware attacks are announced every few days, and there are certainly many such attacks that are never made public.

We see commentators claiming that ransomware attacks are the software equivalent of 9/11, and that perpetrators should be treated as terrorists. Over on one popular right-wing news channel, a commentator gave a literal “thumbs up” to the idea that ransomware perpetrators might be assassinated.

The Biden administration and others are suggesting that if Russia’s Putin isn’t responsible for these attacks, he at least must be giving his tacit approval to the ones apparently originating there. For his part, Putin is laughing off such ideas.

There clearly is political hay to be made from linking ransomware attacks to state actors, but it is certainly true that ransomware attacks can potentially have much the same devastating impacts on crucial infrastructure and operations as more “traditional” cyberattacks.

And while it is definitely possible for a destruction-oriented cyberattack to masquerade as a ransomware attack, it is also true that the vast majority of ransomware attacks appear to be aimed not at actually causing damage, but at the rather more prosaic goal of extorting money from the targeted firms.

All this having been said, there is actually a much more alarming bottom line. The vast majority of these ransomware attacks are not terribly sophisticated in execution. They don’t need to depend on armies of top-tier black-hat hackers. They usually leverage well-known authentication weaknesses, such as corporate networks accessible without robust 2-factor authentication techniques, and/or firms’ reliance on outmoded firewall/VPN security models.

Too often, we see that a single compromised password gives attackers essentially unlimited access behind corporate firewalls, with predictably dire results.

The irony is that the means to avoid these kinds of attacks are already available — but too many firms just don’t want to make the effort to deploy them. In effect, their systems are left largely exposed — and then there’s professed surprise when the crooks simply saunter in! There are hobbyist forums on the Net that, having already implemented these security improvements, are now actually better protected than many major corporations!

I’ve discussed the specifics many times in the past. The use of 2-factor (aka 2-step) authentication can make compromised username/password combinations far less useful to attackers. When FIDO/U2F security keys are properly deployed to provide this authentication, successful fraudulent logins tend rapidly toward nil.
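For illustration, here’s a minimal sketch of how TOTP-style 2-step codes (RFC 6238) can be verified, assuming a server-stored shared secret. It’s illustrative only; properly deployed hardware security keys are stronger still, since they resist phishing by verifying the site’s origin:

```python
# A minimal sketch of verifying an RFC 6238 TOTP code -- one common form
# of 2-step authentication -- using only the Python standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, user_code, window=1, step=30):
    # Accept codes from adjacent time steps to tolerate modest clock skew.
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + i * step), user_code)
               for i in range(-window, window + 1))

# Example: a base32 shared secret, as found in provisioning QR codes.
print(verify("JBSWY3DPEHPK3PXP", totp("JBSWY3DPEHPK3PXP")))  # True
```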

Combining these security key models with “zero trust” authentication, such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), enhances security even further, since an attacker who simply penetrates a firewall or compromises a VPN no longer finds themselves with largely unfettered access to targeted internal corporate resources.
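As a simplified illustration of the zero-trust idea (this is not BeyondCorp’s actual implementation), note what the sketch below deliberately omits: any notion of trusting a request merely because it originated from inside the network perimeter:

```python
# Simplified zero-trust sketch -- not BeyondCorp's actual implementation.
# The point: no request is trusted merely for arriving from "inside" the
# network; every access is checked against user identity, device state,
# and the specific resource, on every request.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool   # e.g., managed, patched, attested hardware
    mfa_verified: bool     # e.g., FIDO/U2F security key touch
    resource: str

# Hypothetical per-resource access policy.
POLICY = {"payroll-db": {"alice"}, "wiki": {"alice", "bob"}}

def authorize(req: Request) -> bool:
    # Deliberately absent: any check of source IP address or
    # "inside the firewall" status.
    return (req.device_trusted
            and req.mfa_verified
            and req.user in POLICY.get(req.resource, set()))

# A stolen password alone gets an attacker nowhere:
print(authorize(Request("alice", device_trusted=False, mfa_verified=False,
                        resource="payroll-db")))   # False
```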

These kinds of security tools are available immediately. There is no need to wait for government actions or admissions from Putin! And sooner rather than later, firms and institutions that continue to stall on deploying these kinds of security methodologies will likely find themselves answering ever more pointed questions from their stockholders or other stakeholders, demanding to know why these security improvements weren’t already made *before* these organizations were targeted by new highly publicized ransomware attacks!

–Lauren–

DeJoy Is Hell-Bent on Wrecking the Postal Service — and Maybe Your Life

While we’re all still reeling from the recent horrific, tragic, and utterly preventable incidents of mass shooting murders, inside the D.C. beltway today events are taking place that could put innumerable medically challenged Americans at deep risk — and the culprit is Louis DeJoy, the Postal Service (USPS) Postmaster General and Trump megadonor.

His 10-year plan for destroying the USPS — by treating it like his former for-profit shipping logistics business rather than the SERVICE it was intended to be — was released today, along with a flurry of self-congratulatory official USPS tweets that immediately attracted massive negative replies, most of them demanding that DeJoy be removed from his position. Now. Right now!

I strongly concur with this sentiment.

Even as first class and other mail delays have already been terrifying postal customers dependent on the USPS for critical prescription medications and other crucial products, DeJoy’s plan envisions even longer mail delays — including additional days of delay for delivery of local first class mail, banning first class mail from air shipping, raising rates, cutting back on post office hours, and — well, you get the idea.

Fundamentally the plan is simple. Destroy the USPS via the “death by a thousand cuts” — leaving to slowly twist in the wind those businesses and individuals without the wherewithal to switch to much more expensive commercial carriers.

President Biden has taken some initial steps regarding the USPS, appointing several new members to the USPS board of governors (who need to be confirmed by the Senate), and this could ultimately enable the ousting of DeJoy (since only the board can fire him directly). But we do not have the time for this process to play out.

Biden has apparently been reluctant to take the “nuclear option” of firing DeJoy’s supporters on the board — they can be fired “for cause” — but many observers assert that their complicity in this DeJoy plan to wreck USPS services would be cause enough.

One thing is for sure. The kinds of changes that DeJoy is pushing through would be expensive and time consuming to unwind later on. And in the meantime, everybody — businesses and ordinary people alike — will suffer greatly at DeJoy’s hands. 

President Biden should act immediately to take any and all legal steps to get DeJoy out of the USPS before DeJoy can do even more damage to us all.

–Lauren–