About Google and Location Privacy

You may have seen a lot of press over the last few days about Google moving location data to be stored on-device (e.g., your phone) by default rather than centrally (and encrypted if you do choose central storage), and how this will help prevent abuses of the broad “geofence” warrants that law enforcement uses to obtain data about all devices in a specified area.

These are all positive moves by Google, but keep in mind that Google has long provided users with control over their location history — how long it’s kept, the ability for users to delete it manually, whether it’s kept at all, etc.

But when is the last time your mobile carrier offered you any control over the detailed data it collects on your devices’ movements? If you’re like most people, the answer is almost certainly never. And while cellular tracking may not usually be as precise as GPS, these days it can be remarkably accurate.

One wonders why there’s all this talk about Google when the mobile carriers are collecting so much location data over which users seem to have no control at all, data that one might assume is of similar interest to law enforcement for mass geofence warrants.

Think about it.

–Lauren–

Google’s Inactive Account Policy and Phishing Attack Concerns

As you may know, Google has recently begun a process of deleting inactive Google accounts, with email notices going out in advance to the account and recovery addresses as a warning.

Leaving aside for the moment the issue that so many people who have lost track of accounts probably have no recovery address specified (or an old one that no longer reaches them), there’s another serious problem.

A few days ago I received a legitimate Google email about an older Google account of mine that I haven’t used in some time. I was able to quickly reauthenticate it and bring it back to active status.

However, this may be the first situation (there may be earlier ones, but I can’t think of any offhand) where Google is actively, “out of the blue,” soliciting people to log into their accounts (and typically older accounts, which I suspect are less likely to have 2-factor authentication enabled).

This is creating an ideal template for phishing attacks.

We’ve long strongly urged users not to respond to emailed efforts to get them to provide their login credentials when they have not taken any specific action that would trigger the need to log in again — and of course this is a very common phishing technique (“You need to verify your account — click here.” “Your password is expiring — click here.” And so on.)

Unfortunately, this is essentially the form of the Google “reactivate your account” email notice. Ordinary busy users, confused to see one of these suddenly pop into their inboxes, may either ignore it as a presumed phishing attack (and so ultimately lose their accounts and data), or may fall victim to similar-appearing phishes leveraging the fact that Google is now sending these out.

I’ve already seen such a phish, claiming to be Google prompting a login to a supposedly inactive account; this scenario is already playing out. The format looked good, and the message was forged to appear to come from the same Google address used for the legitimate inactive account notification emails. Even the internal headers had been forged to make it appear to be from Google. The IP address in the top-level “Received” header line was wrong, of course, but how many people would notice that, or even look at the headers in the first place?
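To illustrate the kind of mismatch involved, here’s a purely hypothetical example (the bracketed IP is from a documentation-only address range, standing in for whatever address a real forger would connect from). The topmost “Received” line is added by your own mail provider and records where the message actually came from, no matter what the forged internal headers claim:

```
Received: from mail-sor-f41.google.com (unknown [203.0.113.57])
        by mx.example.net with ESMTPS; Mon, 4 Dec 2023 10:22:31 -0800
```

The hostname claims Google, but the connecting IP address recorded by the receiving server belongs to no Google network, and the “unknown” notation shows that the reverse DNS lookup didn’t match either.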

I can think of some ways to help mitigate these risks, but as this stands right now I am definitely very concerned. 

–Lauren–

In Support of Google’s Progress On AI Content Choice and Control

Last February, in:

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

I suggested expansion of the existing Robots Exclusion Protocol (e.g., “robots.txt”) as a path toward providing websites and creators with control over how their content is used by AI systems.

Shortly thereafter, Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.

While it’s true that adherence to robots.txt (or the related webpage meta tags — also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if a regulatory approach were ultimately deemed genuinely necessary, more formal enforcement mechanisms would remain an option.

This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we’re only at the beginning of a long road, and asking for a wide range of stakeholder inputs.

I believe of particular importance is Google’s desire for these content control systems to be as technologically straightforward as possible (so, building on the existing Robots Exclusion Protocol is clearly desirable rather than creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.

Also of note is Google’s endorsement of the excellent “AI taxonomy” concept for consideration in these regards. Essentially, the idea is that AI Web crawling exclusions could be specified by the type of use involved, rather than by which entity was doing the crawling. So a set of directives could be defined that would apply to all AI-related crawlers, irrespective of who was doing the crawling, permitting (for example) crawlers gathering content for public interest AI research to proceed, while directing that content not be taken or used for commercial generative AI chatbot systems.
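As a purely hypothetical sketch of how such a taxonomy might surface in robots.txt (no purpose-based directives like these have been standardized; the syntax is invented here for illustration):

```
# Hypothetical purpose-based robots.txt directives -- illustrative only
User-agent: *
Allow-purpose: ai-research        # public interest AI research crawling may proceed
Disallow-purpose: genai-training  # content may not be used to train commercial
                                  # generative AI chatbot systems
```

The key property is that the directives describe the use rather than the crawler, so they would apply to any entity engaged in that use.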

Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google’s continuing progress in these regards.

–Lauren–

Radio Transcript: Google Passkeys and Google Account Recovery Concerns

As per requests, this is a transcript of my national network radio report earlier this week regarding Google passkeys and Google account recovery concerns.

 – – –

So there really isn’t enough time tonight to get into any real details on this but I think it’s important that folks at least know what’s going on if this pops up in front of them. Various firms now are moving to eliminate passwords on accounts by using a technology called “passkeys” which bind account authentication to specific devices rather than depending on passwords.

And theoretically passkeys aren’t a bad idea, most of us know the problems with passwords when they’re forgotten or stolen, used for account phishing — all sorts of problems. And I myself have called for moving away from passwords. But as we say so often, the devil is in the details, and I’m not happy with Google’s passkey implementation as it stands right now. Google is aggressively pushing their users currently, asking if they want to move to a passwordless experience. And I’m choosing not to accept that option right now, and while the choice is certainly up to each individual, I myself don’t recommend using it at this stage.

Without getting too technical, one of my concerns is that anyone who can authenticate a device that has Google passkeys enabled on it, will have full access to those Google accounts without having to have any additional information — not even an additional authentication step. And this means that if — as is incredibly common — someone with a weak PIN for example on their smartphone, loses that device or it’s stolen, again, happens all the time, and the PIN was eavesdropped or guessed, those passkeys could let a culprit have full access to the associated Google accounts and lock out the rightful owner from those accounts before they had a chance to take any actions to prevent it.

And I’ve been discussing my concerns about this with Google, and their view — to use my words — is that they consider this to be the greatest good for the greatest number of people — for whom it will be a security enhancement. The problem is that Google has a long history of mainly being concerned about the majority, and leaving behind vast numbers of users who may represent a small percentage but still number in the millions or more. And these often are the same people who through no fault of their own get locked out of their Google accounts, lose access to their email on Gmail, photos, other data, and frankly Google’s account recovery systems and lack of useful customer service in these regards have long been a serious problem.

So I really don’t want to see the same often nontechnical folks who may have had problems with Google accounts before, to be potentially subjected to a NEW way to lose access to their accounts. Again it’s absolutely an individual decision, but for now I’m going to skip using Google passkeys and that’s my current personal recommendation.

–Lauren–

Google is making their weak, flawed passkey system the default login method — I urge you NOT to use it!

Google continues to push ahead with its ill-advised scheme to force passkeys on users who do not understand their risks, and will try to push all users into this flawed system starting imminently.

In my discussions with Google on this matter (I have chatted multiple times with the Googler in charge of this), they have admitted that their implementation, by depending completely on device authentication security which for many users is extremely weak, will put many users at risk of their Google accounts being compromised. However, they feel that overall this will be an improvement for users who have strong authentication on their devices.

And as for ordinary people who already are left behind by Google when something goes wrong? They’ll get the shaft again. Google has ALWAYS operated on this basis — if you don’t fit into their majority silos, they just don’t care. Another way for Google users to get locked out of their accounts and lose all their data, with no useful help from Google.

With Google’s deficient passkey system implementation — they refuse to consider an additional authentication layer for protection — anyone who has authenticated access to your device (that includes the creep that watched you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same basis. And when you’re locked out, don’t complain to Google, because they’ll just say that you’re not the user that they’re interested in — if they respond to you at all, that is.
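For the technically inclined, a minimal sketch of what a passkey login looks like at the browser’s WebAuthn API level helps show why this matters (TypeScript; in a real deployment the challenge is issued by the site’s server rather than generated locally as in this simplified sketch):

```typescript
// Minimal passkey (WebAuthn) sign-in sketch, browser context.
// Assumption: the challenge would normally come from the site's server.
async function signInWithPasskey(): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      // "required" user verification is satisfied by the device's own
      // unlock mechanism, typically the same PIN or biometric that
      // unlocks the phone. No separate account secret is involved.
      userVerification: "required",
      // No allowCredentials list: the device offers whatever resident
      // (discoverable) passkeys it already holds for this site.
    },
  });
}
```

Note what is absent: nothing beyond the device unlock stands between whoever is physically holding the device and the account.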

“Thank you for choosing Google.”

–Lauren–

UK Passage of Online Safety Bill to Create Chinese-Style Internet Tracking and Censorship — Coming Soon to U.S.?

In the 2005 film “V for Vendetta,” a fictional UK government has turned the country into a tightly censored, tracked, and controlled hellscape, with technology used to control citizens in every way possible. The UK has now taken a massive step toward making that horror a reality, with the passage of likely the most misguided legislation in the country since the Norman invasion of 1066.

I won’t detail their Online Safety Bill here — you can find endless references by searching yourself — but its vast, blurry, nebulous, misguided rules for “protecting children from ‘harmful’ content” (a slippery slope bad enough on its own) quickly expand into a Chinese-style virtual steel collar for every UK resident, chaining them to the government in every aspect of their online lives.

The mandated social media age verification requirements, which will ultimately require the showing of government IDs for access to sites, will by themselves create the opportunity for virtually every action of every Internet user in the UK to be tracked by the government and its minions, in ever expanding ways over time.

Be careful what sites you visit or what you ask or say on them. In China, you can simply vanish under such circumstances. And in the UK? Similar disappearances coming soon, perhaps, as every site you visit, no matter the topic related to business, medical concerns, or other aspects of your family’s private and personal life, will ultimately be linked to you in government databases.

VERY similar *bipartisan* legislative efforts are taking place here in the U.S., though the U.S. court system is creating additional hurdles for their perpetrators, at least for the moment.

While some activists and legislators spend their time ranting about Internet advertising, governments around the world are working to turn the Internet into a pervasive tool for tracking your every online move and thought, permanently linked to your government IDs.

We’ve seen it in Communist China. Now we see it in so-called democracies.

Open your eyes — while you still can. 

–Lauren–

The Potential Privacy Problems With YouTube’s Family Plan “Suggestion Leakage”

I love YouTube. I consider it to be a wonder of the world for an array of reasons. Its scale is — well, the technical term is “mindbogglingly enormous.” I subscribe to YouTube Premium (primarily to obliterate the ads — I don’t use ad blockers), and as far as I’m concerned it’s the best streaming service value on the planet. If I had to choose one streaming service only — it would be YouTube Premium, undoubtedly. I have something approaching 7000 favorited videos on YT, and I sometimes imagine that there’s a whole cluster in a dark corner of a Google data center singularly devoted to managing my giganormous watch history.

Does YT have problems? Yup. Some YT creators have to deal with inappropriate strikes and takedowns — I’ve tried to assist a bunch of these users with such disruptions over the years. Some people complain of bad video suggestions pushing them in dark directions — though this has never been an issue for me. The suggestions I get are generally great, though I do take time to train the algorithm as to what I do and don’t like. If you use YT without logging in and/or don’t train it, you’ll probably get less favorable results. Basically, that’s your choice.

Obviously, no technology is perfect, and at YT’s scale even if only a tiny fraction of suggestions are problematic, it can still be a large number in absolute terms. That’s life. I still love YouTube.

There’s an oddity though with YT that I think is worth mentioning. It’s not a big concern in the scheme of things, but it really shouldn’t be happening.

This relates to the YouTube Premium “Family Plan” that lets you bundle multiple separate Google accounts in a household together so that they all have the benefits of Premium, at a better price than each subscribing to Premium separately. Under FP, each of the associated accounts is free of ads, etc., but is still separate — with its own YT play history and so on — and can view different content simultaneously (normally, a Premium account can only view content on one device at a time).

But a strange thing can happen with Family Plan. The videos being watched by one account on the plan can affect the suggestions on other accounts on the plan, even though they should be entirely separate in this particular respect.

This is most often noticed when topics start to pop up in the suggestions for one FP member that are totally odd for them — for example, a subject they never view videos about. And it turns out, if the members of the plan compare notes, that some other member was watching videos on that topic, and the YT videos/channels being watched by FP member A are showing up in the suggestions for FP member B. And so on.

Most of the time this isn’t a serious concern, and can even be interesting in terms of surfacing new topics. But of course there are intrinsic privacy considerations as well. It isn’t good policy for the YT viewing habits of different family members to be intermingled in that way, without their specifically asking for such sharing. The potential family problems that could occur as a result in some cases are fairly obvious.

This has been going on with Family Plan for years, and I’ve brought this up with Google/YT myself in the past. And the responses I’ve always gotten back have either been that “it can’t happen” or “it shouldn’t happen” and … that’s pretty much where it’s been left hanging each time.

But it does still happen (I have a new report just this morning) and yeah, it really shouldn’t.

Again, not an enormous problem in the scheme of things, but not trivial either, and it’s something that definitely should be fixed.

–Lauren–

Artificial Intelligence at the Crossroads

Suddenly there seems to be an enormous amount of political, regulatory, and legal activity regarding AI, especially generative AI. Much of this is uncharacteristically bipartisan in nature.

The reasons are clear. The big AI firms are largely depending on their traditional access to public website data as the justification for their use of such data for their AI training and generative AI systems.

There is a strong possibility that this argument will ultimately fail miserably, if not under current laws then under new laws and regulations likely to be pushed through around the world, quite likely in a rushed manner that will have an array of negative collateral effects that could actually end up hurting many ordinary people.

Google for example notes that they have long had access to public website data for Search.

Absolutely true. The problem is that generative AI is wholly different in terms of its data usage than anything that has ever come before.

For example, ordinary Search provides direct value back to sites through search results page links — something that the current Google CEO has said Google wants to de-emphasize (colloquially, “the ten blue links”) in favor of providing “answers”.

Since the dawn of Internet search sites many years ago, search results links have represented a usually reasonable and fair exchange for public websites, with robots.txt (the Robots Exclusion Protocol) available for relatively fine-grained access control that can be specified by the websites themselves, and which at least the major search firms have generally honored.
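For reference, a conventional robots.txt file is quite simple; a typical one looks something like this (ExampleBot is a placeholder crawler name):

```
# Conventional Robots Exclusion Protocol directives
User-agent: *
Disallow: /private/

User-agent: ExampleBot
Disallow: /
```

Directives are keyed to which crawler is asking, not to what the crawled data will be used for, which is precisely the gap that the AI-related extensions discussed below are meant to address.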

But generative AI answers eliminate the need for links or other “easy to see” references. Even if “Google it!” or other forms of “more information” links are available related to generative AI answers at any AI firm’s site, few users will bother to view them.

The result is that by and large, today’s generative AI systems by their very nature return essentially nothing of value to the sites that provide the raw knowledge, data, and other information that powers AI language/learning models. 

And typically, generative AI answers (leaving aside rampant inaccuracy problems for now) are like high school term papers that haven’t included sufficient (if any) inline footnotes or comprehensive bibliographies with links.

A very quick “F” grade at many schools.

I have proposed extending robots.txt to help deal with some of these AI issues — and Google also very recently proposed discussions around this area.

Giving Creators and Websites Control Over Generative AI:
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

But ultimately, the “take — and give back virtually nothing in return” modality of many AI systems inevitably leads toward enormous pushback. And I do not sense that the firms involved fully understand the cliff that they’re running towards in a competitive rush to push out AI systems long before they or the world at large are ready for them.

These firms can either grasp the nettle themselves and rethink the problematic aspects of their current AI methodologies, or continue their current course and face the high probability that governmental and public concerns will result in major restrictions to their AI projects — restrictions that may seriously negatively impact their operations and hobble positive AI applications for users around the world long into the future.

–Lauren–

Thoughts on AI Regulation

Greetings. The excellent essay:

https://circleid.com/posts/20230628-the-eu-ai-act-a-critical-assessment

(by Anthony Rutkowski) serves to crystallize many of my concerns about the current rush toward specific approaches to AI regulation before the issues are even minimally understood, and why I am so concerned about negative collateral damage in these kinds of regulatory efforts.

There is widespread agreement that regulation of AI is necessary, both from within and outside the industry itself, but as you’ve probably grown tired of seeing me write, “the devil is in the details”. Poorly drafted and rushed AI regulation could easily do damage above and beyond the realistic concerns (that is, the genuine, non-sci-fi concerns) about AI itself.

It’s understandable that the very rapid deployments of AI systems — particularly generative AI — are creating escalating anxiety regarding an array of related real world controversies, an emotion that in many cases I obviously share.

However, as so often happens when governments and technologies intersect, the potential for rushed and poorly coordinated actions severely risks making these situations much worse rather than better; given what’s at stake, that’s an outcome to be avoided at all costs.

I don’t have any magic wands of course, but in future posts I will discuss aspects of what I hope are practical paths forward in these matters. I realize that there is a great deal of concern (and hype) about these issues, and I welcome your questions. I will endeavor to answer them as best I can. 

–Lauren–

A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

This post could get very long very quickly, so instead I’m going to endeavor to keep this introductory discussion brief, with an array of crucial details to come later. 

In my recent posts:

An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

https://lauren.vortex.com/2023/05/17/google-account-recovery-failure-sad

and:

Potentially Serious Issues with Google’s Announced Inactive Accounts Deletion Policy

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

(and frankly, in many related postings over many years in this blog and other venues), I discussed the continuing problems of honest Google users being locked out of their Google accounts, often with a total and permanent loss of all their data (Gmail, photos, Drive files, etc.) that they entrusted to Google.

These lockouts can occur for an array of reasons — problems with login credentials, third-party hacking of accounts including (but not limited to) malware, Google believing that violations of its Terms of Service have occurred, and many other events.

Each of these is an entire complex topic area that I won’t detail in this post.

But the bottom line is that many Google users who feel that they have done nothing wrong find themselves locked out of their accounts — and crucially — their data at Google, and are unable to successfully navigate the existing largely automated account recovery procedures that Google currently provides.

Generally speaking, once a user who has been locked out of a Google account reaches this point, they are, to use the vernacular, SOL — there’s no way to proceed. Usually their data, no matter how important and precious to their lives, is lost to them forever.

To be sure, sometimes the failure to recover a Google account is rooted in the failure of users to provide or keep up to date the recovery information that Google requests for the very purpose of easing account recovery paths.

But the reality is that many users forget about keeping these current, or are reluctant to provide phone numbers and/or alternative email addresses (if they even have them) in the first place. That’s just the way it is.

And ultimately, even at Google’s enormous scale of users who use its services for free, there is something inherently wrong about honest users losing so much of their lives — data that Google has encouraged them to entrust to Google — when an unrecovered account lockout occurs.

Over and over again — in a manner reminiscent of the film “Groundhog Day” — desperate Google users who have been locked out have asked me if there was someone they could pay to help them. Isn’t there some way, they ask, for Google to do a deeper dive into the circumstances of their lockouts, using their official government IDs for proof, and other methods to authenticate them back into their Google accounts — as can be done at virtually all financial institutions and most other firms?

Right now the answer is no.

But the answer should be and could be yes, if Google made the decision — by no means a trivial one! — to provide the means for such “enhanced recovery services” for Google Accounts, which in some cases (e.g., when a user is indeed at fault as the root cause of the lockout) could be chargeable (that is, paid) services as a means to help defray the additional costs involved.

This is a very complicated area with an array of trade-offs and nuances. It’s likely to be highly controversial. 

But as far as I’m concerned, the status quo of how Google account recoveries work (or fail) is no longer acceptable, especially in the current regulatory and political environment.

In future discussions, I will detail my thinking of how “enhanced recovery” for Google accounts could be accomplished in practice, and how it would benefit Google’s users, Google itself, and the wider global community that depends upon Google.

Take care, all.

–Lauren–

An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

UPDATE: 24 May 2023: A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

– – –

All, I am doing something in this post that I’ve never done before over these many years. I’m going to share with you an example of what Google account recovery failure means to the people involved, and this is by no means the worst such case I’ve seen — not even close, unfortunately.

I mentioned yesterday in my other venues how (for many years) I’ve routinely tried to informally help people with Google account recovery issues, because the process can be so difficult for many persons to navigate, and frequently fails. The announcement yesterday of Google’s inactive account deletion policy that I blogged about then:

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

triggered an onslaught of concerns that for a time made my blog inaccessible and even delayed inbound and outbound email processing.

I’m going to include below most of the text from messages I received today from one of my readers about a specific Google account recovery failure — and how that’s affecting a nearly 90-year-old woman. I’ll be anonymizing the message texts, and I’ve of course received permission from the sender to show you this.

Unfortunately, this example is all too familiar for me. It is very much typical of the Google account recovery problems that Google users, so dependent on Google in their daily lives, bring to my attention in the hope that I might be able to help.

I’ve been discussing these issues with Google for many years. I’ve suggested “ombudspeople”, account escalation and appeal procedures that ordinary people could understand, and many other concepts. They’ve all basically hit the brick wall of Google suggesting that at their scale, nothing can be done about such “edge” cases. I disagree. In today’s regulatory and political environment, these edge cases matter more than ever. And I will continue to do what I can, as ineffective as these efforts often turn out to be. -L

 – – – Message Text Begins – – –

Hi Lauren, I tried to help a lovely neighbor (the quintessential “little old lady”) recently with her attempt to recover her legacy gmail account. We ultimately gave up and she created a second, new account instead. She had been using the original account forever (15+ years) and it was created so long ago that she didn’t need to provide any “recovery” contacts at that time (or she may have used a landline phone number that’s long been cancelled now). For at least the last decade, she was just using the stored password to login and check her email. When her ancient iPad finally died, she tried to add the gmail account to her new replacement iPad. However, she couldn’t remember the password in order to login. Because the old device had changed and she couldn’t remember the password and there was no back channel recovery method for her account, there was no way to login. I don’t know if you’ve ever attempted to contact a human being at google tech support, but it’s pretty much impossible. They also don’t seem to have an exception mechanism for cases like this. So she had to abandon hopes of viewing the google photos of her (now deceased) beloved pet, her contacts, her email subscriptions, reminders, calendar entries, etc.

I understand the desire to keep accounts secure and the need to reduce customer support expenses for a free service with millions of users. But it’s also frustrating for end users when there’s no way to appeal/review/reconsider the automated lockout. She’s nearly 90 years old, so I find it remarkable that she’s able to use the iPad. But it’s difficult to know what to say to someone like this when she asks “what can we do now” and there are no options…

I recognize that there are many different kinds of google users. Some folks (like journalists, dissidents, whistleblowers, political candidates, human rights workers, etc.) need maximum security for their communications (and their contacts). In these cases, it makes sense to employ multifactor authentication, end-to-end encryption, one time passwords, and other exceptional privacy and security features. However, there are a great many average users who find these additional steps difficult, frustrating and (esp. in the case of elderly people who aren’t necessarily very technology savvy), sometimes bewildering. It’s tough to explain that your treasured photos can’t be retrieved because you’re not the sort of user that google had in mind. Not everyone is a millennial digital native who finds this all obvious.

 – – – Message Text Ends – – –

–Lauren–

Potentially Serious Issues with Google’s Announced Inactive Accounts Deletion Policy

UPDATE: 24 May 2023: A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

UPDATE (17 May 2023): An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

– – –

Google has announced that inactive personal Google accounts will be removed and all of their data deleted after two years, after a number of emailed reminders:

https://blog.google/technology/safety-security/updating-our-inactive-account-policies/

Right now I’m only going to thumbnail some potentially serious issues with this policy. They deserve a much more detailed examination that I will address when I can, but there are many associated concerns that Google did not address publicly, and these matter enormously because Google is so much a part of so many people’s lives around the planet.

– Will account names become available for reissuing after an account is deleted? Google policy historically has been that used account names are permanently barred from reissuing. I am assuming that this is still the case, but I’d appreciate confirmation. This would be the best policy from a security standpoint, of course.

UPDATE (17 May 2023): I’ve now received confirmation from Google that account names will not be reissued after these account deletions. Good.

– Given the many ways that users can lose access to their Google accounts, including password and other authentication confusion, lockouts in error due to location login issues, and many other possibilities related to authentication and account recovery complexities, I am not convinced that deleting user data after two years of inactivity is a wise policy. While keeping the data around forever is impractical, two years seems very short from a legal standpoint in an array of ways, even if routine user access is blocked after two years of inactivity. While many users locked out of their accounts simply create new accounts, many still have crucial data in those “trapped” accounts, and most users unfortunately do not use the “Takeout” facilities Google provides to download data while accounts are still active.

– The impact on user photos and public YouTube videos is of special concern. Many popular and important YouTube videos are associated with very old accounts that are likely effectively abandoned. The loss of these public videos from YouTube could be devastating.

UPDATE (17 May 2023): While their original announcement yesterday said that YouTube videos would be deleted when accounts were deleted under this policy, Google has responded to concerns about YouTube videos and has now made a statement that “At this time, we do not plan to delete accounts with YouTube videos.” Obviously this leaves some related open questions for the future, but is still great news.

– Many people use Google accounts for logging in to non-Google sites via federated login (“Login with Google”) mechanisms. While Google says these logins will continue to constitute activity, many of these accounts are likely fairly old, and their associated users may not have used them for anything directly on Google for years (including reading emails). If they also have not been logging on to those third-party sites for extended periods, when they do try again they’re likely to be quite upset to find that the Google accounts necessary for access have been deleted.

I could go on but for now I just wanted to point out a few of the complex negative ramifications of Google’s policy in this regard, irrespective of their assertion that they’re meeting “industry standards” related to account retention and deletion. 

As it stands, I predict that a great many people are going to lose an enormous amount of data due to this Google policy — data that in many cases is very important to them, and in the case of YouTube, often important to the entire world.

–Lauren–

How Google Broke Chrome Bookmarks Sync

UPDATE (15 May 2023): And … about 48 hours after this original post, bookmarks started successfully syncing in full to my tablet, after months of failing totally (despite my many best efforts and every sync trick I know). Coincidence? Could be. But I’ll say “Thanks Google!” anyway.

– – – – – –

Greetings. Recently I asked around for suggestions to help figure out why (after trying all the obvious techniques) I could no longer get my Chrome bookmarks to sync to my primary Android 13 tablet.

Now, courtesy of a gracious #Mastodon user who pointed me at the recent article below, I have the answer as to why. But there’s no apparent fix. Bookmark sync is now broken for power users in significant ways:

https://www.androidpolice.com/google-chrome-bookmark-sync-limit/

In brief, Google appears to have imposed (purposefully or not) an undocumented limit on the number of bookmarks permitted to sync between devices. If you exceed that limit, usually NO bookmarks sync at all — you can end up with no bookmarks whatsoever on most affected devices.

In my case, my Android 13 phone is still syncing all bookmarks correctly, while my tablet has no bookmarks, and shows the “count limit exceeded” error in chrome://sync-internals that the above article notes.

The article suggests that the new undocumented limit is 100K for desktops and 20K for mobile devices. It turns out that I have just over 57K bookmarks currently, so why the limit is exceeded on the tablet and not on the phone is a mystery. But having ZERO synced bookmarks on the tablet is a real problem.
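If you want to check your own count against these reported limits, Chrome keeps bookmarks in a JSON file in its profile directory. Here’s a quick sketch (Node/TypeScript; the path shown is the Linux default-profile location, an assumption that will differ on macOS and Windows):

```typescript
// Count bookmarks in Chrome's "Bookmarks" JSON profile file.
// Assumption: Linux path and default profile; adjust for your platform.
import { readFileSync } from "fs";

const path = `${process.env.HOME}/.config/google-chrome/Default/Bookmarks`;

interface BookmarkNode {
  type: string;              // "url" or "folder"
  children?: BookmarkNode[];
}

// Recursively count "url" entries under a node.
function countUrls(node: BookmarkNode): number {
  if (node.type === "url") return 1;
  return (node.children ?? []).reduce((sum, c) => sum + countUrls(c), 0);
}

const roots = JSON.parse(readFileSync(path, "utf8")).roots as
  Record<string, BookmarkNode>;
const total = Object.values(roots).reduce((sum, r) => sum + countUrls(r), 0);
console.log(`Total bookmarks: ${total}`);
```

Compare the result against the reported 100K desktop and 20K mobile thresholds.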

Yeah, there are third party bookmark managers and ways to create bookmark files that could be viewed statically, but the whole point of Chrome bookmark sync is keeping things up to date across all devices. This needs to work!

And if you feel that 57K bookmarks is a lot of bookmarks — you’re right. But I’ve been using Chrome since the first day of public availability, and my bookmarks are the road maps to my use of the Net. For them to just suddenly stop working this way on a key device is a significant problem.

I’d appreciate some official word from Google regarding what’s going on about this. Have they established new “secret” limits? Is this some sort of bug? (The error message suggests not.) Please let me know, Google. You know how to reach me. Thanks. 

–Lauren–

Big Tech Needs to Vastly Improve Their Public Communications — or Potentially Face a Political Train Wreck Over AI (and More)

In several of my recent posts:

The “AI Crisis”: Who Is Responsible?
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare
https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

and others in various venues, I have expressed concerns over the “perfect storm” that is now circling “Big Tech” from both sides of the political spectrum, with both Republicans and Democrats proposing (sometimes jointly, sometimes in completely opposing respects) “solutions” to various Internet-related issues — with some of these issues being real, and others being unrealistically hyped.

The latest flash point is AI — Artificial Intelligence — especially what’s called generative AI — publicly seen mainly as so-called AI chatbots.

I’m not going to repeat the specifics of my discussions on these various topics here, except in one respect.

For many (!) years I have asserted that these Big Tech firms (notably Google, but the others as well to one degree or another) have been negligently deficient in their public communications, failing to adequately ensure that ordinary non-technical people — and the politicians that they elect — understand the true nature of these technologies.

This means both the positive and negative aspects of tech. But the important point is that the public needs to understand the reality of these systems, and not be misled by misinformation and often politically-biased disinformation that fill the information vacuum these firms have left, often out of a misguided and self-destructive fear of so-called “Streisand Effects” that the firms worry will occur if they discuss these issues in any depth.

It is clear that such fears have done continuing damage to these firms over the years, while robust public communications and public education — not looking down at people, but helping them to understand! — could have instead done enormous good.

I’ve long called for the hiring of “ombudspersons” or liaisons — or whatever you want to call them — to fill these particular communications roles, and they need to be dedicated roles for this purpose.

The situation has become so acute that it may now be necessary to create roles specific to AI-related public communications, to help avoid the worst of the looming public relations and political catastrophes that could decimate the positive aspects of these systems and, over time, seriously damage the firms themselves.

But far more importantly, it’s society at large that will inevitably suffer when politics and fear win out over a true understanding of these technologies — how they actually impact our world in a range of ways — again, both positive and negative, both now and into the future.

The firms need to do this now. Right now. All of the greatest engineering in the world will not save them (and us!) if their abject public communications failures continue as they have to date.

–Lauren–

The “AI Crisis”: Who Is Responsible?

There is a sense of gathering crisis revolving around Artificial Intelligence today — not just AI itself but also the public’s and governments’ reactions to AI — particularly generative AI.

Personally, I find little blame (not zero, but relatively little) with the software engineers and associated persons who are actually theorizing, building, and training these systems.

I find much more blame — and the related central problem of the moment — with some non-engineers (e.g., some executives at key levels of firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing a seemingly suddenly highly competitive race, in some cases apparently deemphasizing crucial ethical and real world impact considerations.

While this view is understandable in terms of human nature, that does not justify such actions, and I fear that governments’ reactions are heading toward a perfect storm of legislation and regulations that may be even more problematic than the premature release of these AI systems has been for these firms and the public. This could set back for years critical work in AI that has the potential to bring great benefits (and yes, risks as well — these both come together with any new technology) to the world.

By and large the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job at communicating the importance — and limitations! — of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation to gladden the hearts of political opportunists (both on the Right and the Left) around the planet.

If this doesn’t start changing for the better immediately, today’s controversies about AI are likely to look like firecrackers compared with nuclear bombs in the future. 

–Lauren–

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare

The new Utah Internet ID age laws signed today — and what other states and the feds are moving toward in the same realm — will destroy social media and much else of the Internet as we know it.

Vast numbers of people will refuse to participate in any government ID-based scheme for age verification, no matter how secure and compartmented it is claimed to be (e.g. through third-party verifiers).

Many persons, rightly concerned about basic privacy rights, already use different names and specify different birthdays on different sites, to avoid being subjected to horrific problems in the case of data breaches, and to avoid being tracked across sites discussing unrelated topics.

These government moves are clear steps on the way toward creating a Chinese-style Internet where every individual’s Internet usage is tracked and monitored by the government, creating a vast and continuous climate of fear, oppression, and government control.

–Lauren–

Giving Creators and Websites Control Over Generative AI

Seemingly overnight, the Internet is awash with controversies over Generative Artificial Intelligence (GAI) systems, and their potential positive and negative impacts on the Net and the world at large.

It also seems very clear that unless we (for once!) get ahead of the potential problems with this new technology that seem to be rushing toward us like a freight train, there could be some very tough times ahead for creators, websites, and ordinary Internet users around the world.

I’m not writing a tutorial here on GAI, but very briefly: it’s not the kind of “backend” AI system with which most of us are more familiar, used for research and modeling, sorting the order of search results and suggestions, and even the kinds of generally useful very brief “answers” we see as (for example) Google Knowledge Panels, featured snippets, or short Google Assistant answers (and the similar features of other firms’ products).

GAI is very different, because it creates (and this is a greatly simplified explanation) what appears to be (at least in theory) completely *new* content, based on its algorithms and the data on which it has been trained.

GAI can be applied to text, audio, imagery, video — pretty much everything we’ve come to associate with the Net. And already, serious problems are emerging — not necessarily unexpected at this early stage, but ones that we must start dealing with now or risk a maelstrom later.

GAI chatbots have been found to spew racist and other hateful garbage. The long-form answers and essays that are the stock-in-trade of many GAI systems can be beautifully written, appear knowledgeable and authoritative — but still be riddled with utterly incorrect information. This is a genuine hassle even with purely technical articles that have had to be withdrawn as a result, but it can get downright scary when, as in one recent case, the article involved men’s health issues.

There are more problems. GAI can easily create “fake” pornography targeting individuals. It can be used to simulate people’s voices for a range of nefarious purposes — or even potentially just to simulate the voices of professional voice actors without their permission.

Eventually, the kind of scenario imagined in the 1981 film “Looker” — where actors once scanned could be completely emulated by (what we’d now call) GAI systems — could actually come to pass. We’re getting quite close to this already in the film industry and the world of so-called deepfakes — the latter potentially carrying enormous risks for disinformation and political abuse.

All of this tends to point us mainly in one direction: How GAI is trained.

In many cases, the answer is that websites are crawled and their data used for GAI purposes, without the explicit permission of the creators of that data or the sites hosting it.

Since the beginning of Search on the Internet, there has been something of a largely unwritten agreement. To wit: Search engines spider and index sites to provide lists of search results to users, and in return those search engines refer users back to those original sites where they can get more information and find other associated content of interest.

GAI in Search runs the risk of disrupting this model in major ways. Because by presenting what appear to be largely original long-form essays and detailed answers to user search queries, the probability of users ever visiting those sites that (often unknowingly) provided the GAI training data, even when links are present, is likely to drop precipitously. Even with links back provided by the GAI answers, why are users going to bother visiting those sites that provided the data to the GAIs, if the GAIs have already completely answered those users’ questions?

Complicating this even further is that the outputs of some GAI systems appear to frequently include largely or even completely intact (or slightly reworded) stretches of text, elements of imagery, and other data that the GAI presents as if they were wholly original.

Creators and websites should be able to choose if and how they wish their data to be incorporated into GAI systems. 

Accomplishing this will be a complex undertaking, likely involving both technical and legislative aspects in order to be even reasonably effective, and will almost certainly always be a moving target as GAI systems advance.

But a logical starting point could be expansion of the existing Internet Robots Exclusion Protocol (REP — e.g. robots.txt, meta tags, etc.) currently used to express website preferences regarding search indexing and associated functions. While the REP is not universally adhered to today, major sites usually do follow these directives.

Indeed, even defining GAI-related directives for REP will be enormously challenging, but this could get the ball rolling at least.
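As a purely hypothetical illustration of where such definitions might start (none of these directives exists today; the tokens are invented for this sketch):

```
# Hypothetical robots.txt lines opting a site out of GAI training
User-agent: *
Disallow-gai-training: /

# A page-level equivalent might use the existing robots meta tag, e.g.:
#   <meta name="robots" content="nogai">
```

Real directives would of course require community agreement on names and semantics, which is exactly the standards conversation that needs to begin now.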

We need to immediately start the process of formulating the control methodologies for what training data Generative Artificial Intelligence systems are permitted to use, and the manners in which they do so. Failure to begin considering these issues risks enormous backlash against these systems going forward, which could render many of their potential benefits moot, to the detriment of everyone.

–Lauren–

2023 and Social Media’s Winds of Change

Greetings. The last hours and minutes of 2022 are ticking off, and we’re all being drawn inexorably into the new year and even deeper into the 21st century.

In my previous post of early October — Social Media Is Probably Doomed — I discussed various issues that call into question the ability of social media as we’ve known it to continue for much longer. Since then we’ve seen the massive chaos at Twitter when Musk took over, the rapid rise of distributed social media ecosystem Mastodon, and an array of other new confounding factors that make this analysis notably more complex and less deterministic. 

It’s perhaps interesting to note that only a year ago, pretty much nobody had predicted that Elon Musk would — voluntarily, single-mindedly, and over such a short period of time — have reinvented himself as a pariah to a large segment of his customers and the public at large, and be in a position to remake Twitter in the image of the very worst that social media can offer.

The lessons that we can draw from this are many, beyond the obvious ones such as that dramatic, abrupt changes in the tech world — and broader society — should be considered more the norm than the exception, especially in our current toxic political environment.

And it’s important to note that no technology — nor the persons who develop, deploy, operate, or use it — is immune from such disruptions.

This includes Mastodon of course. And while the distributed nature of this ecosystem perhaps provides some additional buffering against the sudden changes to which more centralized services are exposed, that does not suggest invulnerability to many of the same kinds of problems plaguing other social media, despite best intentions.

And this is definitely not to assert that blindly attempting to resist changes is the proper course. In fact, *not* being willing to appropriately evolve with a massive growth in the quantity of users — especially as increasingly more nontechnically-oriented persons arrive — is likely lethal to a social media ecosystem in the long run.

As we stand on the cusp of 2023, there is immense potential in Mastodon and other distributed social media models. But there are also enormous risks — fear of change being among the most prominent and potentially negatively impactful of these.

Given all that’s happening, I suspect that this coming year will be a crucial turning point for social media in many ways — both technical and nontechnical in scope.

We can try to hold back the winds of change in these regards, or we can endeavor to harness them for the good of all. That, my friends, is not the choice of technology itself, it is solely up to us.

All the best to you and yours for a great 2023. Happy New Year!

–Lauren–

Social Media Is Probably Doomed

UPDATE (31 December 2022): 2023 and Social Media’s Winds of Change

– – – – – –

Social media as we’ve known it is probably doomed. Whether such a decline would on balance be good or bad for society I’ll leave to another discussion, but the handwriting is on the wall for a major contraction of social media overall.

As with most predictions, the timing and other details will surface in coming months and years, but the overall shape of things to come is not terribly difficult to visualize.

The fundamental problem is also clear enough. A vast range of entities at state, federal, and international levels are in the process of enacting, invoking, or otherwise planning a range of regulatory and other legal mandates that would apply to social media firms — with many of these requirements being in direct and total opposition to each other.

The most likely outcome from putting these firms “between a rock and hard place” will be a drastic reduction of social media services provided, resulting in a massive decrease in ordinary persons’ ability to communicate publicly, rather than the increase that various social media critics have been anticipating.

Let’s very briefly review just some of the factors in the mix:

The political Right in the U.S. generally wants public postings to stay up, even if they contain racist or other hate speech or misinformation/disinformation. This is the outline of the push from states like Texas and Florida. Meanwhile, the Left and other states like California want more of the same sort of postings taken down even faster than they are now. Unless you can somehow provide different feeds on a posting by posting basis to users in different states (and what of VPN usage from other areas?), this creates an impossible situation.

Both the Left and Right hate Section 230, but for opposite reasons, relating to my point just above. Even the Biden White House has this wrong, arguing that cutting back 230 protections would force social media firms to more tightly moderate content, when in reality tampering with 230 would make hosting most UGC (User Generated Content) far too risky.

Elon Musk has proposed that Twitter carry any postings that aren’t explicitly illegal or condoning violence. This suggests an increase in the kind of hate speech and disinformation that not only drives away many users, but also tends to cause enormous problems for potential advertisers and network infrastructure providers, who usually do not want to be associated with such materials. And then of course there’s the EU — which has its own requirements (much more robust than in the U.S.) for dealing with hate speech and misinformation/disinformation.

There are calls to strip Internet users of all anonymity, to require use of real names (tied to official IDs, perhaps through some third party mechanisms) based on the theory that this would reduce hate speech and other attack speech. Yet studies have shown that such abhorrent speech continues to flower even when real names are used, while forcing real names causes already marginalized persons and groups to be even further disadvantaged, often in dangerous ways. Is there a middle ground on this? Perhaps requiring IDs be known to a third party (in case of abuse) before posting to large numbers of persons is permitted, but still permitting the use of pseudonyms for those postings? Maybe, but it seems like a long shot. 

Concerns over posting of terrorist content, live streaming of shootings, and other nightmarish postings have increased calls for pre-moderation of content before it goes public. But at the massive scale of the large social media firms, it’s impossible to see how this could be practical, for a whole range of reasons, unless the amount of content permitted from the public were drastically reduced.

And this is just a partial list. 

For social media to have any real value and practicality, it can’t operate under a different and conflicting set of rules demanded by every state and every country. While there are certainly some politicians and leaders who do understand these issues in considerable depth, many others don’t worry about whether their technical demands are practical or what the collateral damage would be, only whether they’re good for votes come the next election.

And now we reach that part of this little essay where I’m expected to announce my preferred solution to this set of problems. Well dear readers, I’ve got nothing for you. I don’t see any practical solutions for these dilemmas. The issues are in direct conflict and opposition, and there is no obvious route toward their reconciliation or harmonization. 

So I can do little more here than push the needle into the red zone, sound the storm warnings, and try to point out that the paths we’re taking — absent some almost unimaginable changes in the current patterns — are rocketing us rapidly toward a world of social media that will likely briefly flare brightly and then go dark, like an incandescent light bulb at the end of its life, turned on just one too many times.

This analogy isn’t perfect of course, and there will continue to be some forms of social media under any circumstances. But the expected experience seems most likely to become increasingly constrained over time, along with all other aspects of publicly accessible user-provided materials — the incredible shrinking content.

As I said earlier, nobody knows how long this process will take. It won’t happen overnight. But we’ll have taken the path into this wilderness of our own free will, eyes wide open.

Please don’t forget to turn off the lights on your way out.

–Lauren–

How to Fix Google’s Gmail Political Spam Bypass Plan

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

– – – – – –

Recently in Google’s Horrible Plan to Flood Your Gmail with Political Garbage I discussed Google’s plan to permit “official” political emails to bypass Gmail spam filters, with users able to opt-out from this bypass only on a sender-by-sender basis as political emails arrive. So as new “official” political senders proliferate, this will be a continuing unwanted exercise for most Gmail users.

The Federal Election Commission has now posted a draft decision that effectively gives Google a go ahead for this plan (UPDATE: 11 August 2022: The FEC has now officially approved the plan). The large number of comments received by the FEC regarding this proposal were overwhelmingly negative (it was difficult to find any positive comments at all), but the FEC is only ruling on the technical question of whether such a plan would represent prohibited in-kind political contributions.

My view is that Gmail users should be able to opt-out of this entire political spam bypass plan if that is their choice. Political emails would in that case continue going into those individual users’ spam folders to the same extent that they do now.

My specific recommendation:

The first time that a political email arrives for a Gmail user that would bypass spam filtering under the Google plan, the Gmail user would be presented with a modal query with words to this effect (and yes, wording this properly will be nontrivial):

Do you want official political emails to arrive in your Gmail inbox rather than any of them going to your spam folder, unless you indicate otherwise regarding specific political email senders? You can change this choice at any time in Gmail Settings.
(TELL ME MORE)
YES
NO

There is no “default” answer to this query. Users must choose either YES or NO to proceed (with the TELL ME MORE choice branching off to an explanatory help page).

This is a matter of showing respect to Gmail users. The political parties do not own Gmail users’ inboxes, but users who are concerned about missing political emails that might otherwise go to the spam folder would be able to participate in this program, while other users would not be forced into participation against their wills.

Of course this will not satisfy some politicians who incorrectly assume that so much political email ends up in spam due to a claimed political bias against them by Google. In fact, Google applies no political bias at all to Gmail — so much political email ends up in spam precisely because that’s where most Gmail users want it to be.

Google is between the proverbial rock and a hard place on this matter, but I’m asking Google to side with their users. I’d prefer that the Gmail political spam bypass plan not be deployed at all, but if it’s going to happen then let’s give Google’s users a choice to participate or not, right up front.

It’s the Googley thing to do.

–Lauren–