Holding Social Media Responsible: Time To Change Section 230?

I have long held that efforts to tamper with Section 230 of the Communications Decency Act of 1996 are dangerously misguided. It is this section that immunizes online service providers from liability for third-party content that they carry. I have also argued that attempts to mandate “age verification” for social media will backfire spectacularly on social media users in general, and will not actually protect children — and I continue to believe that age verification systems cannot achieve their stated goals and will cause dramatic collateral damage.

One of my key concerns in both of these cases is that they would over time cause major social media platforms to drastically curtail the third-party content that they host, eliminating as much as possible that would be considered in any way controversial, in an effort to avoid liability.

I still believe that this is true, that this would be the likely outcome of Section 230 being altered in any significant ways and/or widespread implementation of the sorts of age verification systems under discussion.

But I’m now wondering whether this would necessarily be such a bad outcome, because the large social media platforms appear to have increasingly abandoned all pretense of social responsibility, making it likely that the damage they have done over the years through the spreading of misinformation, disinformation, racism, and all manner of other evils will only become much, much worse going forward.

Seeing billionaire Mark Zuckerberg today nonchalantly proclaiming that he’s making changes to Meta platforms (Facebook, Instagram, etc.) that will inevitably increase the level of harmful content — he essentially said that explicitly — is, I believe, a “jumping the shark” moment for all major social media.

I feel it is time to have a serious discussion regarding potential changes to Section 230 as it applies to large social media platforms, with an aim toward forcing them to take responsibility for the damage the content on their platforms causes to society, whether it is third-party content or their own.

I would also add — though this extends beyond the formal scope of Section 230 and social media — that firms who have deployed Generative AI systems (chatbots, AI Overviews, etc.) should be held responsible for damage done by misinformation and errors in the content that those systems generate and provide to users.

It is obvious that the major social media platforms are at best now providing only lip service to the concept of social responsibility, or are effectively abandoning it entirely, for their own political and financial expediency — and the situation is getting rapidly worse.

We must make it clear to these firms that they serve us, not the other way around. Changes to Section 230 as it applies to the large social media platforms may be the most practical method to convince the (usually billionaire) CEOs of these firms that our willingness to be victimized has come to an end.

–Lauren–

The Helpful Google Ombudsman (Who Doesn’t Exist)

I just had a good laugh. Someone asked me this morning how they could reach the “Google Ombudsman” for help with an account lockout issue. And I laughed not because their situation was funny, but because of the sad fact that I’ve been pushing for Google to establish an Ombudsman (or, these days, often called Ombudsperson) role for … well … decades. I’ve pushed from the outside. When I had the opportunity, I pushed from the inside. Obviously, I never had any luck with this.

But I did get curious again today. For years, my essays on this topic ranked very high on Google Search. What about now?

Another laugh! I searched for:

google ombudsman

and a blog post of mine on this topic from 2009 is still on the first page of search results — 16 years later!

This was actually superseded by my more recent posts about this, such as 2017’s “Brief Thoughts on a Google Ombudsman and User Trust”:

https://lauren.vortex.com/2017/06/12/brief-thoughts-on-a-google-ombudsman-and-user-trust

But the story is still exactly the same as it was originally — Google has never been willing to budge on this issue, even as the need for such a role (or roles) has dramatically increased over the years, not just for issues related to account lockouts and other traditional Google user problems that cry out for valid escalation paths, but of course now related to the rapidly rising range of AI-related controversies.

The more things change, the more they stay the same.

Very sad.

–Lauren–

More Bipartisan Madness: Commerce Department Proposes Yet Another Insane Chinese Drone Ban That Could Cost Lives

I’ll use very simple words for these government officials: You ban Chinese drones, you’re putting U.S. lives at risk.

Congress, with bipartisan support, very recently passed what is effectively a ban on the import of (and perhaps, though less likely, the use of existing) Chinese drones such as those from market leader DJI. It takes effect in a year unless DJI can convince a government agency to certify that its products are not a security risk — and of course, how DJI is supposed to accomplish this isn’t spelled out.

So now it gets even worse. The U.S. Commerce Department is considering its own Chinese drone bans, and has opened a public comment period through early March.

The absolute bull-headed STUPIDITY of these bans is beyond belief. There is no evidence extant that DJI drones present a security risk — only theoretical, politically motivated speculation from both political parties that makes virtually no sense at all.

The organizations and businesses that depend on these drones — law enforcement, search and rescue, agriculture, utilities, and a long list of others — have not found practical alternatives to DJI drones in the vast majority of cases. DJI dominates the market because they make the highest quality drones at prices these entities can afford, and provide world-class support for them.

The politics of this situation are beyond disgusting. Is it too much to hope that the Trump administration will be more reasonable about this? Yeah, probably not a good bet, but being more sensible than both parties in Congress — and the current administration — on this score is a very low bar at this point.

Here is the current official Commerce URL with their announcement. This was not easy for me to find — not a single media source I saw bothered to include this crucial information!

https://www.bis.gov/press-release/commerce-issues-advance-notice-proposed-rulemaking-secure-unmanned-aircraft-systems

Absolute insanity. -L

AI Is Dooming Google, but Not in the Way Its CEO Believes

According to reports, in a recent employee meeting, Google CEO Sundar Pichai said that (essentially) the entire focus of Google in 2025 will be AI and pushing it out to consumers in a “scrappy” way — with him referencing the early days of Google. This was when, I would note, their rush resulted in massive arrogance, and privacy problems at the firm were at a peak. Over the years both of these were reduced — especially the privacy issues, where Google has actually become world class in terms of protecting users’ privacy (and security). Both could return to their former terrible levels under Sundar’s deeply flawed AI approach.

With such a focus on AI, and so much money being poured into it by Google, it is inevitable that other core Google services will ultimately suffer. And given Google’s notoriously deficient “customer service” when things go wrong — from account lockouts to a wide range of other problems — the situation is only going to get worse. Word is that Google teams not mostly devoted to AI are already suffering cutbacks. How long before Google decides that Gmail, etc. just aren’t worth keeping around anymore? Couldn’t happen? Think again.

Google’s AI continues to be an endless source of mediocrity and wrong, confused, and even utterly inane and nonsensical answers and false statements, and Google refuses to take responsibility for these and how they could negatively (sometimes even dangerously) impact users. This renders Google’s incredibly reckless pushes to embed AI deeply into Google Search (thanks to AI, decreasingly trustworthy), and the introduction of their new “AI Agents” (taking over web browsers on behalf of users — an enormous target for hackers and phishing attacks), both horrific risks for consumers.

Sundar would (I suspect) say that unless Google moves in this direction, Google is doomed. I believe that he and his executive team have it exactly backwards. Consumers do not want AI. The more they learn about it, the less they trust it or care for what it does in their day to day lives. They don’t want to pay for it. They don’t want it popping up in their faces constantly and being shoved down their throats. They certainly DO want Google to take 100% responsibility for what it does.

Sundar wants Google to be scrappy. We might as well delete that leading “s” — unfortunately, that’s Google’s likely fate, because his AI path will lead to Google’s almost certain doom one way or another. The exact timing and form of that doom cannot be accurately predicted right now, but billions of Google’s users will suffer in the process.

And one more thought. It’s been reported that apparently Sundar has now joined the exalted ranks of the billionaires. How that may or may not be affecting his thinking in these matters I’ll leave as an exercise for the reader.

Dark times for Google’s users, indeed.

–Lauren–

[What say you, Spock?] My Proposed Terminology to Describe Bypassing Social Media Face ID Age Verification Systems

All the talk now is about using AI-based mechanisms to authenticate social media users as not being underage, through analysis of their faces on video feeds. The ways in which this could fail in both directions (declaring faces either older or younger than they really are, not to mention how you determine from a face whether someone is 15.5 or 16 years old when the minimum age required for access is 16) are far too many to even list here.

But given all of the attention, I feel that we need terminology to quickly describe the entire area of bypass techniques targeting these age verification/gating systems.

I propose the term:

BALOK

As in, “The 11-year-old easily baloked the system and gained quick access.”

or:

“The free software was capable of baloking the ID portal within seconds to bypass the age restrictions.”

BALOK is an acronym for:

Bypassing Age Locked Online Keys

Of course, fans of the original “Star Trek” already know what’s really going on.

Balok was an alien in the first season of “Star Trek”, in an episode called “The Corbomite Maneuver”. In appearance he was a very young, vulnerable child. But in his audio and video communications with the starship Enterprise, he employed an artificial booming voice and what turned out to be a menacing-looking puppet to fool the Enterprise crew into fearing him.

The parallels with the current face ID age verification systems are obvious.

Children will be baloking the social media age gating systems in myriad ways, while adults who were supposed to have access will be blocked due to both face analysis errors and technology access problems. Not everyone uses smartphones with cameras to access social media, and many people rightly fear sending video images of their faces to these or other firms, given the potential for abuse.

I anticipate both freeware baloking software and baloking as a (largely free) service. Kids will band together in groups to develop new baloking techniques. They are extremely resourceful when it comes to these areas, more so than the vast majority of adults.

Balok knew that it was easy to fool his potential adversaries with a faked persona. The ingenuity of kids today pretty much guarantees that their own efforts to balok the social media firms, and in essence the politicians who pushed age blocks in the first place, will be even more successful in the real world.

–Lauren–

Insanity: Drone Hysteria and Bans Put Lives at Risk

Is this happening around the world, or is it only here in the USA that everything appears to be going totally nutso? Seemingly all at once, politicians of both parties look and sound like they’ve given up all pretense of being educated human beings and are behaving like infantile idiots with political agendas. Oh boy, what a mix.

Logic? Forget about it! Pandering to fear and nonsense? That’s the way to win elections!

We don’t have much clearer examples of this than two simultaneous situations involving drones.

First, as you probably know by now, there has been a hysterical panic in New Jersey and surrounding areas about supposed swarms of mysterious “drones”. All evidence to date is that this is entirely nonsense, fed by clickbait social media, opportunistic mainstream media, and politicians in both parties out to seize an opportunity to score political points from people’s ignorance about technical realities.

So far, other than the legal hobby and commercial drones that are routinely in the air — there are over a million registered in the U.S. — people have been reporting as “mystery drones” various shaky, blurry images of stars, helicopters, and airplanes (maybe the green and red flashing lights and the white strobe lights give them away, huh?), plus all manner of other completely ordinary stuff that most people just never notice. And you have politicians like Democratic Senator Chuck Schumer irresponsibly trying to ram a new surveillance bill through the Senate to protect us from this nonexistent threat — Republican Senator Rand Paul blocked him. When we have to depend on Rand Paul to be the sensible one, we must be in The Twilight Zone.

Politicians in both parties including Trump have been making all manner of claims feeding the drone hysteria — based on nothing real, and calling for shooting down the supposed “drones” if they “can’t” be identified, putting the lives of pilots and passengers on ordinary plane flights at risk. People have been shining lasers at planes — a criminal offense — again risking pilots and passengers.

The whole thing is totally nuts. It’s reminiscent of a notorious panic in Bellingham, Washington in 1954, when people started noticing ordinary manufacturing defects in car windshields and mass hysteria broke out with people fearing it was nuclear radiation or some other kind of attack. I’m not kidding. Google it.

The drone panic wasn’t helped by the sluggish reaction of government agencies in speaking clearly to the issue, but the fact that there were no collisions between supposed drones and other air traffic spoke volumes about the ridiculous nature of the entire situation. The FAA has now issued some temporary drone flight restrictions in various areas of New Jersey to try to calm things down even further. But if agencies had gotten ahead of this issue early on, the information vacuum might not have been filled with so much ridiculous nonsense.

One of the best new videos I’ve seen explaining the current drone hysteria is:

https://www.youtube.com/watch?v=MAWCIfs0ER4

I strongly recommend that it be widely viewed.

Meanwhile, the political hysteria over Chinese drone maker DJI’s drones as a claimed national security risk — with absolutely no evidence of this being presented — has reached a bizarre and dangerous inflection point in Congress.

DJI holds a very large majority of the U.S. drone market not just for hobbyists but in the absolutely crucial areas of law enforcement, search and rescue, other public safety groups, agriculture, utilities, and many other areas of society. The reason is simple — these groups have not found practical competing products from other manufacturers that meet the quality, reliability, and service support levels that DJI routinely provides. DJI drones are used in myriad areas to directly support the protection of human lives and property, keeping critical infrastructure operating, and an almost endless list more.

Still, some politicians in both parties keep screaming at the top of their lungs that DJI’s drones must be banned, no matter how many lives are lost or hurt in the process. Again, there is zero evidence that has ever been presented that these drones are a security risk, and DJI has bent over backwards to demonstrate that they do not threaten security. But trying to logically argue with politicians who have their own agendas (e.g., by pointing out to them that a foreign power could just buy satellite surveillance photos — they don’t need to “spy” through commercial drones!) is like debating a moldy sponge. All you get for your efforts is a rotting odor.

It was thought that the current defense appropriation bill might push through a DJI ban. This was likely to include DJI drones, cameras, audio equipment, and other products — either import bans alone, or more likely import bans combined with telling the FCC to prohibit their use of U.S. radio frequencies, which could also in theory (but probably not in practice) block use of DJI products already purchased and in routine use in the USA.

Instead, with so many crucial public safety and other groups opposed to the ban, the final language puts off a ban for a year, and says to avoid the ban DJI must get an appropriate national security agency to certify that their products are not a security risk.

Proving a negative is always, uh, challenging. But worse — and this is something straight out of Putin’s Russia — the language does not say which national security agency should do this or require any of them to do it. Franz Kafka would love this. Putin would smile.

It’s possible that the next administration will be more receptive to logical arguments about why DJI products should not be banned, and if the ban moves forward DJI is virtually certain to litigate through the courts, as well they should.

But the sheer irresponsibility of politicians wanting to ban such crucial products based on zero evidence and a lot of wild-eyed political posturing is nothing short of disgusting.

So here we are. Blurry photos of stars and planes are being touted as terror drones, with politicians more than happy to latch onto the panic for their own purposes. Actual drones crucial to a vast array of industries and to saving lives are at risk of being banned by politicians who scream “national security” without evidence.

Yeah, I don’t know about the rest of the world, but here in the USA it sure looks like we’ve fallen off the deep end of sanity.

–Lauren–


Australia’s Under-16 Social Media Ban Is Doomed

Ah yes. Poodle skirts and bobby socks. Jimmie Rodgers and the Everly Brothers. Around the world, there seems to be a collective longing for a rose-tinted, 20/20 hindsight, fantasy view of “the good old days” of the 1950s, before those damned computers started infiltrating so much of our lives. And social media bans have become the means by which governments hope to force children off their phones and back to sometimes rather violent competitive sports and other ultraviolet-light-suffused outdoor activities.

It won’t work. The latest example of this yearning for the past is in Australia where, with very broad public support, the government just pushed through (in about a week!) a ban on children under 16 using social media. There are no exceptions for anyone with current accounts. There are no exceptions to allow parents to permit their children to use social media if the parents determine that’s best for their own children. The ban likely will include all of the major social media platforms except (for now at least) YouTube, which is widely used in schools.

Clearly, there have been enormously tragic incidents involving children who were, for example, bullied or otherwise abused over social media. But there are also many examples of the positive benefits of social media helping children who were being abused by family members, for whom access to assistance over social media was crucial. And many examples of isolated children for whom social media has been an important benefit to their mental health. And children who have created educational outreach and other extremely positive projects via social media.

I’m not a sociologist. I’ll leave it to the experts in that and related fields to explain the complex and sometimes competing aspects of social media and young persons.

But I am a technologist. And as such, my view is that Australia’s ban almost certainly won’t work, and will end up doing far more damage than the status quo before the law, as it creates a culture of false hopes, pushback, and circumvention.

Like all social media age gating laws, the Australian law would require ALL users of social media to be age verified. That’s how you (in theory) block the children. The law wisely does not penalize parents or children who circumvent the law, instead depending on financial fines against the social media firms. And at the very last minute, a provision was apparently added that prohibits requiring the use of government credentials for identification. This was a positive change, because as I’ve discussed many times, age verification based on government credentials for website access would lead almost inevitably to broad government tracking of Internet usage, in much the style that users in China are subjected to today.

So how would Australia do age verification for this law? The law is planned to take effect a year from now, and an age verification trial is supposed to take place before then. Most frequently discussed are AI-based (oh boy, here we go …) techniques to analyze users’ faces, online behavior patterns, types of content they access … and so on. 

It doesn’t take much imagination to create a long list of ways that such techniques produce errors in both directions (passing users who are too young, blocking users who are actually old enough) — even in the absence of circumvention techniques. E.g., how do you determine whether a child is 15 and a half years old or 16 years old from their face? Uh huh. Hell, I’ve known people who were 30 and had faces that looked like they were 15.

But even beyond the mumbo jumbo of supposed AI-based solutions, the list of relatively straightforward circumvention techniques seems almost endless. And anyone who thinks that children won’t figure this stuff out is in for a rude awakening.

One obvious problem for the law will be VPNs. Unless the Australian government plans to detect/ban VPN usage — which would have enormous negative consequences — simply creating accounts on these social media platforms that appear to be coming from countries other than Australia is an obvious circumvention methodology.

Attempting to ban children from social media won’t work. It will make a complicated situation even worse, and it technically is impractical without creating a hellscape of government-verified identity Internet usage tracking for all users of all ages — and even then circumvention techniques would still exist.

The desire to eliminate the negative consequences of social media is a laudable one. And there’s much that could be done by social media firms to better prevent abuse of their platforms, especially when children are targeted for such abuse. 

But age-based bans are a “feel good” effort that will create new harms and will fail. They should be firmly rejected.

–Lauren–

DOJ’s Proposed Antitrust “Remedies” Against Google Would Be a Disaster

Despite my continuing differences with various specific aspects of Google operations that I feel could be straightforwardly improved to the benefit of their users, I can’t emphasize enough what an utter disaster the DOJ’s proposed Google antitrust “remedies” would be for the privacy and security of Google’s users and consumers more broadly, and for the overall usability of these crucial services as well.

Google privacy and security standards and teams are world class, and I have enormous trust in them. Keeping email and the many other Google services that billions of people rely on in their everyday lives safe and secure is an enormously complex and continually evolving effort, and key to this — as well as making sure that users’ data entrusted to Google is not put at risk by firms with less stringent standards than Google — is the integrated nature of the Chrome browser, Android, and other aspects of Google services. Even with this integration, it’s a monumental task.

Breaking these aspects of Google apart in the name of supposed “competition” — which would actually only make most non-technical users’ interactions with tech more confusing and complicated, just what consumers clearly don’t want — would be a gargantuan mistake that consumers would unfortunately end up paying for in myriad ways for many years.

Google is far from perfect, but DOJ seems hell-bent on pushing an antitrust agenda in this case that would make consumers’ lives far worse instead of better. Whether that’s a result of DOJ ignoring the technical realities in play or simply not really understanding them, it’s the wrong path and would lead to a very bad place indeed for all of us.

–Lauren–

DOJ vs. Google: Users have the most to lose

Despite my ongoing concerns over the various directions in which current management has been taking Google over recent years, I must state that I agree with Google that the kinds of radical antitrust “remedies” — and “radical” is the appropriate word — apparently being contemplated by DOJ would almost certainly be a disaster for ordinary users’ privacy, security, and overall ability to interact with many aspects of related technologies that they depend on every day.

These systems are difficult enough to keep reasonably user friendly and secure as it is — and they certainly should continue to be improved in those areas. But what DOJ is reportedly considering would be an enormous step backwards and consumers would be the ultimate victims of such an approach.

–Lauren–

“I Am the Very Model of a Google AI Overview”

“I Am the Very Model of a Google AI Overview”
Lauren Weinstein

To the tune of “I Am the Very Model of a Modern Major-General” (with apologies to Gilbert & Sullivan)

– – –

I am the very model of a Google AI Overview.
I know what you’ll be searching for,
At least an hour ahead of you.

My answers aren’t always right,
In fact they’re often quite a brawl.
But hey we’re Google and you’re here,
So that’s the way the chips will fall.

We really don’t like those blue links,
They’re so old-fashioned we agree.
Why bother sites with viewers,
When users can just come here to me?

Of course some sites may suffer,
And yeah that’s a bit tragic to see.
But while we aren’t evil,
Face the facts it’s all about money!

Now if your Google search results,
No longer seem of quality,
It’s not our fault,
The problem is,
Your queries are just all lousy.

So welcome to my AI world,
An LLM can’t think things through,
I am the very model of a Google AI Overview.

– – –

–Lauren–

Generative AI Is Being Rammed Down Our Throats

The technical term for what’s happening now with Artificial Intelligence, especially generative AI, is NUTS. I mean it’s not just Google, but Microsoft too, with OpenAI’s ChatGPT. These firms are pouring out half-baked AI systems and trying to basically ram them down our throats whether we want them or not, by embedding them into everything they can, including in irresponsible or even potentially hazardous ways. And it’s all in search of profits at our expense.

I’ll talk specifically about Google Search shortly, but so much of this crazy stuff is being deployed. Microsoft wants to record everything you do on a PC through an AI system. Both Google and Microsoft want to listen in on your personal phone calls with AI. YouTube is absolutely flooded with low quality AI junk videos, making it ever harder to find accurate, useful videos.

Google is now pushing their AI “Help me write” feature, which feeds your text into their AI from all over the place, including many Chrome browser context menus, where in some cases they’ve replaced the standard text UNDO command with “Help me write”. And “Help me write” is so easy to trigger accidentally that you could end up feeding personal or business proprietary information not only into the AI, but also to the human AI trainers who Google notes can see this kind of data.

OK, now about Google Search. For quite some time now many people have been noticing a decline in the quality of Google search results — and keep in mind that Google handles the overwhelmingly vast percentage of searches by Internet users. Google has recently been rolling out into regular Google Search results what they call AI Overviews: AI-generated answers to what seem like most queries now, which can push the actual site links — the sites from which Google’s AI presumably pulled the data to formulate those answers — so far down the page that few users will ever see them. This potentially starves the sites that provided that data of the user views they need to stay up and running.

Some of the AI Overview answers have links, but often they’re dim and obscure and almost impossible to even see unless you have perfect 20/20 vision and very young eyes. On top of that, many of these AI Overview answers are just banal, stupid, and often confused or plain wrong, mixing up accurate and inaccurate information, sometimes in ways that could actually be unsafe — for example, when they’re wrong about health-related questions. This is all very different from the kinds of top-of-page answers that Google has provided for some time for straightforward search queries, such as math questions, definitions of words, or when a particular film was released.

These AI Overview answers are showing up all over the place and like I said, much of the time their quality is abysmal. Now of course if you’re not knowledgeable about a subject you’re asking about, you might assume a misleading or wrong AI Overview answer is correct, and since Google has now made it less likely that you’ll scroll down the page to find and visit sites that may have accurate information, it’s a real mess. There are some tricks with Google Search URLs that I’ve seen to bypass some of this for now, but Google could disable those at any time.
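As an aside for the technically inclined: one such URL trick that has been widely shared is the “udm=14” parameter, which (at the time of this writing) requests Google’s plain “Web” results view, omitting AI Overviews. A minimal sketch of building such a URL — assuming the parameter continues to behave this way, which Google could change at any time — might look like:

```python
from urllib.parse import urlencode

def plain_web_search_url(query: str) -> str:
    # Build a Google Search URL with the widely shared "udm=14"
    # parameter, which currently requests the plain "Web" results
    # view (no AI Overviews). Google could disable this at any time.
    params = urlencode({"q": query, "udm": "14"})
    return "https://www.google.com/search?" + params

print(plain_web_search_url("example health question"))
# e.g. https://www.google.com/search?q=example+health+question&udm=14
```

Some browsers also let you save such a URL pattern as a custom search engine, making the “Web” view the default — at least until Google decides otherwise.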

What’s really needed is a way for users to turn all of this generative AI content completely off until such time, if ever, as a given user decides they want to turn it on again. Or better yet, these AI features should be ENTIRELY opt-in; that is, turned off UNTIL you decide you want to use them in the first place.

So once again we see that fears of super intelligent AIs wiping out humanity are not what we should be worried about right now. What we need to be concerned about are the ways that Big Tech AI companies are hell-bent on forcing generative AI systems into all aspects of our private lives in ways that are often unwanted, confusing, irresponsible, or even worse. And the way things seem to be going right now, there’s no indication that these firms are interested in how we feel about all this.

And that’s not going to change so long as we’re willing to continue using their products without making it clear to them that we won’t indefinitely tolerate their push to stuff generative AI systems into our lives whether we want them there or not.

–Lauren–

Evil

Every day it becomes ever clearer. All the talk of super-intelligent AI destroying humanity was and is nonsense. It’s the CEOs running the Big Tech firms pushing AI into every aspect of our lives in increasingly irresponsible and dangerous ways who are the villains in this saga, not the machines themselves. The machines are just tools. Like hammers, they can be used to build a home or smash in a skull.

For evil, you have to look to humans and their corporate greed.

–Lauren–

The Nightmare of Google Account Recovery Failures

Let me be very clear about why I am, frankly, so angry with Google over their Account Recovery failures.

I have on numerous occasions directly proposed to Google a variety of significant improvements to their current Account Recovery processes.

While their existing procedures successfully recover many accounts daily, they tend to fail disproportionately for innocent non-techie users and other marginalized groups, such as seniors — users who are still dependent on Google for email and data storage in a world where other support options (like telephone support and non-email billing) are being rapidly eliminated by firms as cost-cutting measures.

These are often users who barely understand how to use these systems that they’ve in many cases essentially been coerced into using. When they’re locked out, they can lose everything — email, photos, and other personal data crucial to their lives.

I have on multiple occasions proposed specific improvements to Google’s procedures that could be invoked optionally by users who desperately needed access to accounts locked out without good cause, along with methods by which Google could recover the costs of the additional (and typically not extensive) support measures required to accomplish this.

My proposals have never received serious consideration by Google. I always receive the same responses. “We recover lots of accounts and that’s good enough.” “Nobody is forced to use Google.” “People who don’t properly maintain their recovery addresses and phone numbers have nobody but themselves to blame.”

Unspoken and unwritten but clearly part of the underlying message: “We just don’t care about those categories of users. Hopefully they’ll go away and never come back.”

It’s a travesty. I’ll keep trying, because hope springs eternal, and I’m too old now to give up on even apparently hopeless causes. Silly me, I guess. Take care.

–Lauren–

Google and Seniors

Google refuses to create a specific role for someone to oversee the issues of older users, who depend on Google for so many things but so often get the shaft and lose everything when something goes wrong with their accounts. Google should AT LEAST (I still think the dedicated role is crucial) be providing focused help resources and a recurring (at least monthly) blog for this class of users (“Google for Seniors”, “Google Seniors Blog”).

This would all be specifically oriented toward helping these users deal with the kinds of Google Account and other Google problems that so often disproportionately affect this group.

This would be good for these users (who Google unreasonably and devastatingly considers to be an unimportant segment of their user base) and frankly good for Google’s PR in a highly challenging and toxic political environment.

I’m so tired of having so many people in this category approach me for help with account and other Google issues because they never understood the existing Google resources that, frankly, are written for a different level of tech expertise and understanding.

I have more detailed thoughts on this if anyone cares. No, I’m not holding my breath on this one.

–Lauren–

About Google and Location Privacy

You may have seen a lot of press over the last few days about Google moving location data to be stored on-device (e.g., on your phone) by default rather than centrally (and encrypted if you do choose central storage), and how this will help prevent abuses of “geofence” warrants, which law enforcement uses to obtain data about devices present in a particular specified area.

These are all positive moves by Google, but keep in mind that Google has long provided users with control over their location history — how long it’s kept, the ability for users to delete it manually, whether it’s kept at all, etc.

But when is the last time your mobile carrier offered you any control over the detailed data they collect on your devices’ movements? If you’re like most people, the answer seems to be never. And while cellular tracking may not usually be as precise as GPS, these days it can be remarkably accurate.

One wonders why there’s all this talk about Google, when the mobile carriers are collecting so much location data that users seem to have no control over at all, data that, one might assume, is of similar interest to law enforcement for mass geofence warrants.

Think about it.

–Lauren–

Google’s Inactive Account Policy and Phishing Attacks Concerns

As you may know, Google has recently begun a protocol to delete inactive Google accounts, with email notices going out to the account and recovery addresses in advance as a warning.

Leaving aside for the moment the issue that so many people who have lost track of accounts probably have no recovery address specified (or an old one that no longer reaches them), there’s another serious problem.

A few days ago I received a legitimate Google email about an older Google account of mine that I haven’t used in some time. I was able to quickly reauthenticate it and bring it back to active status.

However, this may be the first situation (there may be earlier ones, but I can’t think of any offhand) where Google is actively soliciting people “out of the blue” to log into their accounts (and typically older accounts that, I suspect, are less likely to have 2-factor authentication enabled).

This is creating an ideal template for phishing attacks.

We’ve long strongly urged users not to respond to emailed efforts to get them to provide their login credentials when they have not taken any specific actions that would trigger a need to log in again — and of course this is a very common phishing technique (“You need to verify your account — click here.” “Your password is expiring — click here.” And so on.)

Unfortunately, this is essentially the form of the Google “reactivate your account” email notice. And ordinary busy users, confused to see one of these suddenly pop into their inbox, may either ignore it, thinking it’s a phishing attack (and so ultimately lose their account and data), or may fall victim to similar-appearing phishes leveraging the fact that Google is now sending these out.

I’ve already seen such a phish, claiming to be Google prompting with a link for a login to a supposedly inactive account. So this scenario is already occurring. The format looked good, and it was forged to appear to be from the same Google address as is used for the legitimate Google inactive account notification emails. Even the internal headers had been forged to make the message appear to come from Google. The IP address in the top-level “Received from” header line was wrong of course, but how many people would notice this, or even look at the headers in the first place?
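As a small illustration of the kind of header check that almost no ordinary user would ever perform, here’s a minimal Python sketch (using Python’s standard email module) that pulls the IP addresses out of a message’s Received headers, most recent hop first. The message below is entirely fabricated, using documentation-reserved example IP addresses:

```python
import email
import re

# An entirely fabricated example message (documentation-reserved IPs);
# real header chains are far longer and messier.
RAW_MESSAGE = """\
Received: from mail-fake.example.net (mail-fake.example.net [203.0.113.77])
        by mx.recipient.example.com; Mon, 1 Jan 2024 12:00:00 -0800
Received: from accounts.google.com (accounts.google.com [192.0.2.10])
        by mail-fake.example.net; Mon, 1 Jan 2024 11:59:58 -0800
From: no-reply@accounts.google.com
Subject: Your account is inactive

(message body)
"""

def received_ips(raw):
    """Return IP addresses found in Received headers, most recent hop first."""
    msg = email.message_from_string(raw)
    ips = []
    for header in msg.get_all("Received", []):
        match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header)
        if match:
            ips.append(match.group(1))
    return ips

print(received_ips(RAW_MESSAGE))  # ['203.0.113.77', '192.0.2.10']
```

In this fabricated example the top-most Received line shows the true last relay hop (203.0.113.77, not a Google address), which is exactly the kind of discrepancy a forged chain can reveal — though of course any Received lines below the top-most trusted hop can themselves be forged.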

I can think of some ways to help mitigate these risks, but as this stands right now I am definitely very concerned. 

–Lauren–

In Support of Google’s Progress On AI Content Choice and Control

Last February, in:

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

I suggested expansion of the existing Robots Exclusion Protocol (e.g. “robots.txt”) as a path toward helping provide websites and creators control over how their contents are used by AI systems.

Shortly thereafter, Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.

While it’s true that adherence to robots.txt (or to the related webpage Meta tags, also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if moves toward a regulatory approach were ultimately deemed genuinely necessary, a more formal mechanism would be a possible option.

This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we’re only at the beginning of a long road, and asking for a wide range of stakeholder inputs.

I believe of particular importance is Google’s desire for these content control systems to be as technologically straightforward as possible (so, building on the existing Robots Exclusion Protocol is clearly desirable rather than creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.

Also of note is Google’s endorsement of the excellent “AI taxonomy” concept for consideration in these regards. Essentially, the idea is that AI Web crawling exclusions could be specified by the type of use involved, rather than by which entity was doing the crawling. So a set of directives could be defined that would apply to all AI-related crawlers, irrespective of who was doing the crawling, permitting (for example) crawlers gathering content for public interest AI research to proceed, while directing that the content not be taken or used for commercial generative AI chatbot systems.
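To make the distinction concrete, here is a robots.txt sketch. The `Google-Extended` product token in the first rule is Google’s real, existing mechanism for opting out of its generative AI training uses; the use-based “taxonomy” directives that follow are purely my own illustrative invention (shown as comments), since no such standard exists yet:

```
# Real, existing syntax: disallow Google's generative AI training uses
User-agent: Google-Extended
Disallow: /

# Hypothetical use-based (taxonomy) directives -- illustration only,
# NOT an existing standard. The idea: rules keyed to the purpose of
# the crawl rather than to who operates the crawler.
#
#   AI-purpose: public-interest-research
#   Allow: /
#
#   AI-purpose: commercial-genai-training
#   Disallow: /
```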

Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google’s continuing progress in these regards.

–Lauren–

Radio Transcript: Google Passkeys and Google Account Recovery Concerns

As per requests, this is a transcript of my national network radio report earlier this week regarding Google passkeys and Google account recovery concerns.

 – – –

So there really isn’t enough time tonight to get into any real details on this but I think it’s important that folks at least know what’s going on if this pops up in front of them. Various firms now are moving to eliminate passwords on accounts by using a technology called “passkeys” which bind account authentication to specific devices rather than depending on passwords.

And theoretically passkeys aren’t a bad idea, most of us know the problems with passwords when they’re forgotten or stolen, used for account phishing — all sorts of problems. And I myself have called for moving away from passwords. But as we say so often, the devil is in the details, and I’m not happy with Google’s passkey implementation as it stands right now. Google is aggressively pushing their users currently, asking if they want to move to a passwordless experience. And I’m choosing not to accept that option right now, and while the choice is certainly up to each individual, I myself don’t recommend using it at this stage.

Without getting too technical, one of my concerns is that anyone who can authenticate a device that has Google passkeys enabled on it will have full access to those Google accounts without needing any additional information — not even an additional authentication step. And this means that if — as is incredibly common — someone with a weak PIN on their smartphone loses that device or has it stolen, and the PIN was eavesdropped or guessed, those passkeys could let a culprit gain full access to the associated Google accounts and lock out the rightful owner before they had a chance to take any actions to prevent it.

And I’ve been discussing my concerns about this with Google, and their view — to use my words — is that they consider this to be the greatest good for the greatest number of people — for whom it will be a security enhancement. The problem is that Google has a long history of mainly being concerned about the majority, and leaving behind vast numbers of users who may represent a small percentage but still number in the millions or more. And these often are the same people who through no fault of their own get locked out of their Google accounts, lose access to their email on Gmail, photos, other data, and frankly Google’s account recovery systems and lack of useful customer service in these regards have long been a serious problem.

So I really don’t want to see the same often nontechnical folks who may have had problems with Google accounts before, to be potentially subjected to a NEW way to lose access to their accounts. Again it’s absolutely an individual decision, but for now I’m going to skip using Google passkeys and that’s my current personal recommendation.

–Lauren–

Google is making their weak, flawed passkey system the default login method — I urge you NOT to use it!

Google continues to push ahead with its ill-advised scheme to force passkeys on users who do not understand their risks, and will try to push all users into this flawed system starting imminently.

In my discussions with Google on this matter (I have chatted multiple times with the Googler in charge of this), they have admitted that their implementation, by depending completely on device authentication security which for many users is extremely weak, will put many users at risk of their Google accounts being compromised. However, they feel that overall this will be an improvement for users who have strong authentication on their devices.

And as for ordinary people who already are left behind by Google when something goes wrong? They’ll get the shaft again. Google has ALWAYS operated on this basis — if you don’t fit into their majority silos, they just don’t care. Another way for Google users to get locked out of their accounts and lose all their data, with no useful help from Google.

With Google’s deficient passkey system implementation — they refuse to consider an additional authentication layer for protection — anyone who has authenticated access to your device (that includes the creep who watched you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same basis. And when you’re locked out, don’t complain to Google, because they’ll just say that you’re not the user that they’re interested in — if they respond to you at all, that is.

“Thank you for choosing Google.”

–Lauren–