Insanity: Drone Hysteria and Bans Put Lives at Risk

Is this happening around the world, or is it only here in the USA that everything appears to be going totally nutso? Seemingly all at once, politicians of both parties look and sound like they’ve given up all pretense of being educated human beings and are behaving like infantile idiots with political agendas. Oh boy, what a mix.

Logic? Forget about it! Pandering to fear and nonsense? That’s the way to win elections!

There could hardly be clearer examples of this than two simultaneous situations involving drones.

First, as you probably know by now, there has been a hysterical panic in New Jersey and surrounding areas about supposed swarms of mysterious “drones”. All evidence to date is that this is entirely nonsense, fed by clickbait social media, opportunistic mainstream media, and politicians in both parties out to seize an opportunity to score political points from people’s ignorance about technical realities.

So far, other than legal hobby and commercial drones that are routinely in the air — there are something over a million licensed in the U.S. — people have been reporting as “mystery drones” various shaky, blurry images of stars, helicopters, and airplanes (maybe the green and red flashing lights and the white strobe lights give them away, huh?), plus all manner of other completely ordinary stuff that most people just never notice most of the time. And you have politicians like Democratic Senator Chuck Schumer irresponsibly trying to ram a new surveillance bill through the Senate to protect us from this nonexistent threat — Republican Senator Rand Paul blocked him. When we have to depend on Rand Paul to be the sensible one, we must be in The Twilight Zone.

Politicians in both parties including Trump have been making all manner of claims feeding the drone hysteria — based on nothing real, and calling for shooting down the supposed “drones” if they “can’t” be identified, putting the lives of pilots and passengers on ordinary plane flights at risk. People have been shining lasers at planes — a criminal offense — again risking pilots and passengers.

The whole thing is totally nuts. It’s reminiscent of a notorious panic in Bellingham, Washington in 1954, when people started noticing ordinary manufacturing defects in car windshields and mass hysteria broke out with people fearing it was nuclear radiation or some other kind of attack. I’m not kidding. Google it.

The drone panic wasn’t helped by the sluggishness of government agencies in speaking clearly to the issue, but the fact that there were no collisions between supposed drones and other air traffic spoke volumes about the ridiculous nature of the entire situation. The FAA has now issued some temporary drone flight restrictions in various areas of New Jersey, to try to calm things down even further. But if agencies had gotten ahead of this issue early on, the information vacuum might not have been filled with so much ridiculous nonsense.

One of the best new videos I’ve seen explaining the current drone hysteria is:

https://www.youtube.com/watch?v=MAWCIfs0ER4

I strongly recommend that it be widely viewed.

Meanwhile, the political hysteria over Chinese drone maker DJI’s drones as a claimed national security risk — with absolutely no evidence of this being presented — has reached a bizarre and dangerous inflection point in Congress.

DJI holds a very large majority of the U.S. drone market not just for hobbyists but in the absolutely crucial areas of law enforcement, search and rescue, other public safety groups, agriculture, utilities, and many other areas of society. The reason is simple — these groups have not found practical competing products from other manufacturers that meet the quality, reliability, and service support levels that DJI routinely provides. DJI drones are used in myriad areas to directly support the protection of human lives and property, keeping critical infrastructure operating, and an almost endless list of other applications.

Still, some politicians in both parties keep screaming at the top of their lungs that DJI’s drones must be banned, no matter how many lives are lost or people hurt in the process. Again, there is zero evidence that has ever been presented that these drones are a security risk, and DJI has bent over backwards to demonstrate that they do not threaten security. But trying to logically argue with politicians who have their own agendas (e.g., by pointing out to them that a foreign power could just buy satellite surveillance photos — they don’t need to “spy” through commercial drones!) is like debating a moldy sponge. All you get for your efforts is a rotting odor.

It was thought that the current defense appropriation bill might push through a DJI ban. This was likely to include DJI drones, cameras, audio equipment, and other products — either import bans alone, or more likely import bans combined with telling the FCC to prohibit their use of U.S. radio frequencies, which could also in theory — but probably not in practice — block use of these DJI products already received and in routine use in the USA.

Instead, with so many crucial public safety and other groups opposed to the ban, the final language puts off a ban for a year, and says that to avoid the ban, DJI must get an appropriate national security agency to certify that their products are not a security risk.

Proving a negative is always, uh, challenging. But worse — and this is something straight out of Putin’s Russia — the language does not say which national security agency should do this or require any of them to do it. Franz Kafka would love this. Putin would smile.

It’s possible that the next administration will be more receptive to logical arguments about why DJI products should not be banned, and if the ban moves forward DJI is virtually certain to litigate through the courts, as well they should.

But the sheer irresponsibility of politicians wanting to ban such crucial products based on zero evidence and a lot of wild-eyed political posturing is nothing short of disgusting.

So here we are. Blurry photos of stars and planes are being touted as terror drones, with politicians more than happy to latch onto the panic for their own purposes. Actual drones crucial to a vast array of industries and to saving lives are at risk of being banned by politicians who scream “national security” without evidence.

Yeah, I don’t know about the rest of the world, but here in the USA it sure looks like we’ve fallen off the deep end of sanity.

–Lauren–

 

Australia’s Under-16 Social Media Ban Is Doomed

Ah yes. Poodle skirts and bobby socks. Jimmie Rodgers and the Everly Brothers. Around the world, there seems to be a collective longing for a rose-tinted, 20/20 hindsight, fantasy view of “the good old days” of the 1950s, before those damned computers started infiltrating so much of our lives. And social media bans have become the means by which governments hope to force children off their phones and back to sometimes rather violent competitive sports and other ultraviolet-light-suffused outdoor activities.

It won’t work. The latest example of this yearning for the past is in Australia where, with very broad public support, the government just pushed through (in about a week!) a ban on children under 16 using social media. There are no exceptions for anyone with current accounts. There are no exceptions to allow parents to permit their children to use social media if the parents determine that’s best for their own children. The ban likely will include all of the major social media platforms except (for now at least) YouTube, which is widely used in schools.

Clearly, there have been enormously tragic incidents involving children who were, for example, bullied or otherwise abused over social media. But there are also many examples of the positive benefits of social media helping children who were being abused by family members, for whom access to assistance over social media was crucial. And many examples of isolated children for whom social media has been an important benefit to their mental health. And children who have created educational outreach and other extremely positive projects via social media.

I’m not a sociologist. I’ll leave it to the experts in that and related fields to explain the complex and sometimes competing aspects of social media and young persons.

But I am a technologist. And as such, my view is that Australia’s ban almost certainly won’t work and will end up doing far more damage than the status quo before the law, as it creates a culture of false hopes, push back, and circumvention.

Like all social media age gating laws, the Australian law would require ALL users of social media to be age verified. That’s how you (in theory) block the children. The law wisely does not penalize parents or children who circumvent the law, instead depending on financial fines against the social media firms. And at the very last minute, a provision was apparently added that prohibits requiring use of government credentials for identification. This was a positive change, because as I’ve discussed many times, age verification based on government credentials for website access would lead almost inevitably to broad tracking of Internet usage by the government in much the style that users in China are subjected to today.

So how would Australia do age verification for this law? The law is planned to take effect a year from now, and an age verification trial is supposed to take place before then. Most frequently discussed are AI-based (oh boy, here we go …) techniques to analyze users’ faces, online behavior patterns, types of content they access … and so on. 

It doesn’t take much imagination to create a long list of ways that such techniques will produce errors in both directions (passing users who are too young, blocking users who are actually old enough) — even in the absence of circumvention techniques. E.g., how do you determine whether a child is 15 and a half years old or 16 years old from their face? Uh huh. Hell, I’ve known people who were 30 and had faces that looked like they were 15.

But even beyond the mumbo jumbo of supposed AI-based solutions, the list of relatively straightforward circumvention techniques seems almost endless. And anyone who thinks that children won’t figure this stuff out is in for a rude awakening.

One obvious problem for the law will be VPNs. Unless the Australian government plans to detect/ban VPN usage — which would have enormous negative consequences — simply creating accounts on these social media platforms that appear to be coming from countries other than Australia is an obvious circumvention methodology.

Attempting to ban children from social media won’t work. It will make a complicated situation even worse, and it is technically impractical without creating a hellscape of government-verified-identity Internet usage tracking for all users of all ages — and even then circumvention techniques would still exist.

The desire to eliminate the negative consequences of social media is a laudable one. And there’s much that could be done by social media firms to better prevent abuse of their platforms, especially when children are targeted for such abuse. 

But age-based bans are a “feel good” effort that will create new harms and will fail. They should be firmly rejected.

–Lauren–

DOJ’s Proposed Antitrust “Remedies” Against Google Would Be a Disaster

Despite my continuing differences with various specific aspects of Google operations that I feel could be straightforwardly improved to the benefit of their users, I can’t emphasize enough what an utter disaster DOJ’s proposed Google antitrust “remedies” would be for the privacy and security of Google’s users and consumers more broadly, and for the overall usability of these crucial services as well.

Google privacy and security standards and teams are world class, and I have enormous trust in them. Keeping email and the many other Google services that billions of people rely on in their everyday lives safe and secure is an enormously complex and continually evolving effort, and key to this — as well as making sure that users’ data entrusted to Google is not put at risk by firms with less stringent standards than Google — is the integrated nature of the Chrome browser, Android, and other aspects of Google services. Even with this integration, it’s a monumental task.

Breaking these aspects of Google apart in the name of supposed “competition” — which would actually only make most non-technical users’ interactions with tech more confusing and complicated, just what consumers clearly don’t want — would be a gargantuan mistake that consumers would unfortunately end up paying for in myriad ways for many years.

Google is far from perfect, but DOJ seems hell-bent on pushing an antitrust agenda in this case that would make consumers’ lives far worse instead of better. Whether that’s a result of DOJ ignoring the technical realities in play or simply not really understanding them, it’s the wrong path and would lead to a very bad place indeed for all of us.

–Lauren–

DOJ vs. Google: Users have the most to lose

Despite my ongoing concerns over various directions that current management has been taking Google over recent years, I must state that I agree with Google that the kinds of radical antitrust “remedies” — and “radical” is the appropriate word — apparently being contemplated by DOJ would almost certainly be a disaster for ordinary users’ privacy, security, and overall ability to interact with many aspects of related technologies that they depend on every day.

These systems are difficult enough to keep reasonably user friendly and secure as it is — and they certainly should continue to be improved in those areas. But what DOJ is reportedly considering would be an enormous step backwards and consumers would be the ultimate victims of such an approach.

–Lauren–

“I Am the Very Model of a Google AI Overview”

“I Am the Very Model of a Google AI Overview”
Lauren Weinstein

To the tune of “I Am the Very Model of a Modern Major-General” (with apologies to Gilbert & Sullivan)

– – –

I am the very model of a Google AI Overview.
I know what you’ll be searching for,
At least an hour ahead of you.

My answers aren’t always right,
In fact they’re often quite a brawl.
But hey we’re Google and you’re here,
So that’s the way the chips will fall.

We really don’t like those blue links,
They’re so old-fashioned we agree.
Why bother sites with viewers,
When users can just come here to me?

Of course some sites may suffer,
And yeah that’s a bit tragic to see.
But while we aren’t evil,
Face the facts it’s all about money!

Now if your Google search results,
No longer seem of quality,
It’s not our fault,
The problem is,
Your queries are just all lousy.

So welcome to my AI world,
An LLM can’t think things through,
I am the very model of a Google AI Overview.

– – –

–Lauren–

Generative AI Is Being Rammed Down Our Throats

The technical term for what’s happening now with Artificial Intelligence, especially generative AI, is NUTS. I mean it’s not just Google, but Microsoft too, with OpenAI’s ChatGPT. The firms are just pouring out half-baked AI systems and trying to basically ram them down our throats whether we want them or not, by embedding them into everything they can, including in irresponsible or even potentially hazardous ways. And it’s all in search of profits at our expense.

I’ll talk specifically about Google Search shortly, but so much of this crazy stuff is being deployed. Microsoft wants to record everything you do on a PC through an AI system. Both Google and Microsoft want to listen in on your personal phone calls with AI. YouTube is absolutely flooded with low quality AI junk videos, making it ever harder to find accurate, useful videos.

Google is now pushing their AI “Help me write” feature, which feeds your text into their AI from all over the place, including from many Chrome browser context menus, where in some cases they’ve replaced the standard text UNDO command with “Help me write”. And Help me write is so easy to trigger accidentally that you could end up feeding personal or business proprietary information not only into the AI, but also to the human AI trainers who Google notes can see this kind of data.

OK, now about Google Search. For quite some time now many people have been noticing a decline in the quality of Google search results — and keep in mind that Google handles the overwhelmingly vast percentage of searches by Internet users. So Google has recently been rolling out to regular Google Search results what they call AI Overviews. These are AI-generated answers to what now seem like most queries, and they can push all the actual site links — the sites from which Google AI presumably pulled the data to formulate those answers — so far down the page that few users will ever see them, potentially starving the sites that provided that data of the user views they need to stay up and running.

Some of the AI Overview answers have links, but often they’re dim and obscure and almost impossible to even see unless you have perfect 20/20 vision and very young eyes. On top of that, many of these AI Overview answers are just banal, stupid, and often confused or plain wrong, mixing up accurate and inaccurate information, sometimes in ways that could actually be unsafe — for example, when they’re wrong about health-related questions. This is all very different from the kinds of top-of-page answers that Google has provided for some time now for straightforward search queries like math questions, definitions of words, or when a particular film was released.

These AI Overview answers are showing up all over the place and like I said, much of the time their quality is abysmal. Now of course if you’re not knowledgeable about a subject you’re asking about, you might assume a misleading or wrong AI Overview answer is correct, and since Google has now made it less likely that you’ll scroll down the page to find and visit sites that may have accurate information, it’s a real mess. There are some tricks with Google Search URLs that I’ve seen to bypass some of this for now, but Google could disable those at any time.

What’s really needed is a way for users to turn all of this generative AI content completely off until such a time, if ever, that a given user decides they want to turn it on again. Or better yet, these AI features should be ENTIRELY opt-in, that is, turned off UNTIL you decide you want to use them in the first place.

So once again we see that fears of super intelligent AIs wiping out humanity are not what we should be worried about right now. What we need to be concerned about are the ways that Big Tech AI companies are hell-bent on forcing generative AI systems into all aspects of our private lives in ways that are often unwanted, confusing, irresponsible, or even worse. And the way things seem to be going right now, there’s no indication that these firms are interested in how we feel about all this.

And that’s not going to change so long as we’re willing to continue using their products without making it clear to them that we won’t indefinitely tolerate their push to stuff generative AI systems into our lives whether we want them there or not.

–Lauren–

Evil

Every day it becomes ever clearer. All the talk of super-intelligent AI destroying humanity was and is nonsense. It’s the CEOs running the Big Tech firms pushing AI into every aspect of our lives in increasingly irresponsible and dangerous ways who are the villains in this saga, not the machines themselves. The machines are just tools. Like hammers, they can be used to build a home or smash in a skull.

For evil, you have to look to humans and their corporate greed.

–Lauren–

The Nightmare of Google Account Recovery Failures

Let me be very clear about why I am, frankly, so angry with Google over their Account Recovery failures.

I have on numerous occasions directly proposed to Google a variety of significant improvements to their current Account Recovery processes.

While their existing procedures successfully recover many accounts daily, they tend to fail disproportionately for innocent non-techie users and other marginalized groups like seniors — users who are still dependent on Google for email and data storage in a world where other support options (like telephone and non-email billing and support) are being rapidly marginalized by firms as cost-cutting measures.

These are often users who barely understand how to use these systems that they’ve in many cases essentially been coerced into using. When they’re locked out, they can lose everything — email, photos, and other personal data crucial to their lives.

I have on multiple occasions proposed specific improvements to Google’s procedures that could be invoked optionally by users who desperately needed access to accounts that were locked out without good cause, along with methods by which Google would have the means for cost recovery of the additional (and typically not extensive) support measures required to accomplish this.

My proposals have never received serious consideration by Google. I always receive the same responses. “We recover lots of accounts and that’s good enough.” “Nobody is forced to use Google.” “People who don’t properly maintain their recovery addresses and phone numbers have nobody but themselves to blame.”

Unspoken and unwritten but clearly part of the underlying message: “We just don’t care about those categories of users. Hopefully they’ll go away and never come back.”

It’s a travesty. I’ll keep trying, because hope springs eternal, and I’m too old now to give up on even apparently hopeless causes. Silly me, I guess. Take care.

–Lauren–

Google and Seniors

Google refuses to create a specific role for someone to oversee the issues of older users, who depend on Google for so many things but so often get the shaft and lose everything when something goes wrong with their accounts. Google should AT LEAST (I still think the role is crucial) be providing focused help resources and a recurring (at least monthly) blog to help this class of users (“Google for Seniors”, “Google Seniors Blog”).

This would all be specifically oriented toward helping these users deal with the kinds of Google Account and other Google problems that so often disproportionately affect this group.

This would be good for these users (who Google unreasonably and devastatingly considers to be an unimportant segment of their user base) and frankly good for Google’s PR in a highly challenging and toxic political environment.

I’m so tired of having so many people in this category approach me for help with account and other Google issues because they never understood the existing Google resources that, frankly, are written for a different level of tech expertise and understanding.

I have more detailed thoughts on this if anyone cares. No, I’m not holding my breath on this one.

–Lauren–

About Google and Location Privacy

You may have seen a lot of press over the last few days about Google moving location data by default to be on-device (e.g., your phone) rather than stored centrally (and encrypted if you do choose to store it centrally), and how this will help prevent abuses of “geofence” warrants, which law enforcement uses to get broad data about devices in a particular specified area.

These are all positive moves by Google, but keep in mind that Google has long provided users with control over their location history — how long it’s kept, the ability for users to delete it manually, whether it’s kept at all, etc.

But when is the last time your mobile carrier offered you any control over the detailed data they collect on your devices’ movements? If you’re like most people, the answer seems to be never. And while cellular tracking may not usually be as precise as GPS, these days it can be remarkably accurate.

One wonders why there’s all this talk about Google, when the mobile carriers are collecting so much location data that users seem to have no control over at all, data that is of similar interest to law enforcement for mass geofence warrants, one might assume.

Think about it.

–Lauren–

Google’s Inactive Account Policy and Phishing Attacks Concerns

As you may know, Google has recently begun a protocol to delete inactive Google accounts, with email notices going out to the account and recovery addresses in advance as a warning.

Leaving aside for the moment the issue that so many people who have lost track of accounts probably have no recovery address specified (or an old one that no longer reaches them), there’s another serious problem.

A few days ago I received a legitimate Google email about an older Google account of mine that I haven’t used in some time. I was able to quickly reauthenticate it and bring it back to active status.

However, this may be the first situation (there may be earlier ones, but I can’t think of any offhand) where Google is actively “out of the blue” soliciting people to log into their accounts (and typically, older accounts that I suspect are more likely not to have 2-factor authentication enabled, for example).

This is creating an ideal template for phishing attacks.

We’ve long strongly urged users not to respond to emailed efforts to get them to provide their login credentials when they have not taken any specific actions that would trigger the need for logging in again — and of course this is a very common phishing technique (“You need to verify your account — click here.” “Your password is expiring — click here.”, etc.)

Unfortunately, this is essentially the form of the Google “reactivate your account” email notice. Ordinary busy users who are confused to see one of these suddenly pop into their inbox may either ignore it, thinking it’s a phishing attack (and so ultimately lose their account and data), or may fall victim to similar-appearing phishes leveraging the fact that Google is now sending these out.

I’ve already seen such a phish, claiming to be Google prompting with a link for a login to a supposedly inactive account. So this scenario is already occurring. The format looked good, and it was forged to appear to be from the same Google address used for the legitimate Google inactive account notification emails. Even the internal headers had been forged to make it appear to be from Google. The top-level “Received from” header line IP address was wrong, of course, but how many people would notice this, or even look at the headers in the first place?

I can think of some ways to help mitigate these risks, but as this stands right now I am definitely very concerned. 

–Lauren–

In Support of Google’s Progress On AI Content Choice and Control

Last February, in:

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

I suggested expansion of the existing Robots Exclusion Protocol (e.g. “robots.txt”) as a path toward helping provide websites and creators control over how their contents are used by AI systems.

Shortly thereafter, Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.

While it’s true that adherence to robots.txt (or the related webpage Meta tags — also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if a regulatory approach were ultimately deemed genuinely necessary, a more formal mechanism would be a possible option.

This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we’re only at the beginning of a long road, and asking for a wide range of stakeholder inputs.

I believe of particular importance is Google’s desire for these content control systems to be as technologically straightforward as possible (so, building on the existing Robots Exclusion Protocol is clearly desirable rather than creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.

Also of note is Google’s endorsement of the excellent “AI taxonomy” concept for consideration in these regards. Essentially, the idea is that AI Web crawling exclusions could be specified by the type of use involved, rather than by which entity is doing the crawling. So a set of directives could be defined that would apply to all AI-related crawlers, irrespective of who was doing the crawling, permitting (for example) crawlers looking for content related to public interest AI research to proceed, while directing that content not be taken or used for commercial Generative AI chatbot systems.
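A purpose-based taxonomy might look something like the following hypothetical robots.txt fragment. To be clear, these token names are invented purely for illustration — no purpose-based tokens of this kind have been standardized:

```text
# Hypothetical purpose-based directives (invented names, not part of
# any current standard): exclusions keyed to the USE of the content,
# not to which company's crawler happens to be fetching it.

User-agent: ai-research-use
Allow: /

User-agent: genai-training-use
Disallow: /

User-agent: *
Allow: /
```

Under such a scheme, any crawler gathering content for public interest AI research would match the first group regardless of operator, while any crawler feeding commercial generative AI training would match the second.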

Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google’s continuing progress in these regards.

–Lauren–

Radio Transcript: Google Passkeys and Google Account Recovery Concerns

As per requests, this is a transcript of my national network radio report earlier this week regarding Google passkeys and Google account recovery concerns.

 – – –

So there really isn’t enough time tonight to get into any real details on this but I think it’s important that folks at least know what’s going on if this pops up in front of them. Various firms now are moving to eliminate passwords on accounts by using a technology called “passkeys” which bind account authentication to specific devices rather than depending on passwords.

And theoretically passkeys aren’t a bad idea, most of us know the problems with passwords when they’re forgotten or stolen, used for account phishing — all sorts of problems. And I myself have called for moving away from passwords. But as we say so often, the devil is in the details, and I’m not happy with Google’s passkey implementation as it stands right now. Google is aggressively pushing their users currently, asking if they want to move to a passwordless experience. And I’m choosing not to accept that option right now, and while the choice is certainly up to each individual, I myself don’t recommend using it at this stage.

Without getting too technical, one of my concerns is that anyone who can authenticate a device that has Google passkeys enabled on it will have full access to those Google accounts without needing any additional information — not even an additional authentication step. And this means that if — as is incredibly common — someone with a weak PIN on their smartphone, for example, loses that device or has it stolen (again, happens all the time), and the PIN was eavesdropped or guessed, those passkeys could let a culprit have full access to the associated Google accounts and lock out the rightful owner before they had a chance to take any actions to prevent it.

And I’ve been discussing my concerns about this with Google, and their view — to use my words — is that they consider this to be the greatest good for the greatest number of people — for whom it will be a security enhancement. The problem is that Google has a long history of mainly being concerned about the majority, and leaving behind vast numbers of users who may represent a small percentage but still number in the millions or more. And these often are the same people who through no fault of their own get locked out of their Google accounts, lose access to their email on Gmail, photos, other data, and frankly Google’s account recovery systems and lack of useful customer service in these regards have long been a serious problem.

So I really don’t want to see the same often nontechnical folks who may have had problems with Google accounts before, to be potentially subjected to a NEW way to lose access to their accounts. Again it’s absolutely an individual decision, but for now I’m going to skip using Google passkeys and that’s my current personal recommendation.

–Lauren–

Google is making their weak, flawed passkey system the default login method — I urge you NOT to use it!

Google continues to push ahead with its ill-advised scheme to force passkeys on users who do not understand the risks, and will try to push all users into this flawed system starting imminently.

In my discussions with Google on this matter (I have chatted multiple times with the Googler in charge of this), they have admitted that their implementation, by depending completely on device authentication security (which for many users is extremely weak), will put many users at risk of having their Google accounts compromised. However, they feel that overall this will be an improvement for users who have strong authentication on their devices.

And as for ordinary people who already are left behind by Google when something goes wrong? They’ll get the shaft again. Google has ALWAYS operated on this basis — if you don’t fit into their majority silos, they just don’t care. Another way for Google users to get locked out of their accounts and lose all their data, with no useful help from Google.

With Google’s deficient passkey system implementation — they refuse to consider an additional authentication layer for protection — anyone who has authenticated access to your device (that includes the creep that watched you access your phone in that bar before he stole it) will have full and unrestricted access to your Google passkeys and accounts on the same basis. And when you’re locked out, don’t complain to Google, because they’ll just say that you’re not the user that they’re interested in — if they respond to you at all, that is.

“Thank you for choosing Google.”

–Lauren–

UK Passage of Online Safety Bill to Create Chinese-Style Internet Tracking and Censorship — Coming Soon to U.S.?

In the 2005 film “V for Vendetta” a fictional UK government has turned the country into a tightly censored, tracked, and controlled hellscape, with technology used to control citizens in every way possible. The UK has now taken a massive step toward making that horror a reality, with the passage of what is likely the most misguided legislation in the country since the Norman invasion of 1066.

I won’t detail their Online Safety Bill here — you can find endless references by searching yourself — but its vast, blurry, nebulous, misguided rules for “protecting children from ‘harmful’ content” (a slippery slope bad enough on its own) quickly expand into a Chinese Internet style virtual steel collar for every UK resident, chaining them to the government in every aspect of their online lives.

The mandated age verification requirements for social media platforms, which will ultimately require the showing of government IDs for access to sites, will by themselves create the opportunity for virtually every action of every Internet user in the UK to be tracked by the government and its minions in ever expanding ways over time.

Be careful what sites you visit and what you ask or say on them. In China, you can simply vanish under such circumstances. And in the UK? Similar disappearances coming soon, perhaps, as every site you visit, whether related to business, medical concerns, or other aspects of your family’s private and personal life, will ultimately be linked to you in government databases.

VERY similar *bipartisan* legislative efforts are taking place here in the U.S., though the U.S. court system is creating additional hurdles for their perpetrators, at least for the moment.

While some activists and legislators spend their time ranting about Internet advertising, governments around the world are working to turn the Internet into a pervasive tool for tracking your every online move and thought, permanently linked to your government IDs.

We’ve seen it in Communist China. Now we see it in so-called democracies.

Open your eyes — while you still can. 

–Lauren–

The Potential Privacy Problems With YouTube’s Family Plan “Suggestion Leakage”

I love YouTube. I consider it to be a wonder of the world for an array of reasons. Its scale is — well, the technical term is “mindbogglingly enormous.” I subscribe to YouTube Premium (primarily to obliterate the ads — I don’t use ad blockers), and as far as I’m concerned it’s the best streaming service value on the planet. If I had to choose one streaming service only — it would be YouTube Premium, undoubtedly. I have something approaching 7000 favorited videos on YT, and I sometimes imagine that there’s a whole cluster in a dark corner of a Google data center singularly devoted to managing my giganormous watch history.

Does YT have problems? Yup. Some YT creators have to deal with inappropriate strikes and takedowns — I’ve tried to assist a number of these creators with such disruptions over the years. Some people complain of bad video suggestions pushing them in dark directions — though this has never been an issue for me. The suggestions I get are generally great, though I do take time to train the algorithm as to what I do and don’t like. If you just use YT not logged in and/or don’t train it, you’ll probably get less favorable results. Basically, that’s your choice.

Obviously, no technology is perfect, and at YT’s scale even if only a tiny fraction of suggestions are problematic, it can still be a large number in absolute terms. That’s life. I still love YouTube.

There’s an oddity though with YT that I think is worth mentioning. It’s not a big concern in the scheme of things, but it really shouldn’t be happening.

This relates to the YouTube Premium “Family Plan” that lets you bundle multiple separate Google accounts in a household together so that they all have the benefits of Premium, at a better price than each subscribing to Premium separately. Under FP, each of the associated accounts is free of ads, etc., but remains separate — with its own YT play history and so on — and can view different content simultaneously (normally, a Premium account can only view content on one device at a time).

But a strange thing can happen with Family Plan. The videos being watched by one account on the plan can affect the suggestions on other accounts on the plan, even though they should be entirely separate in this particular respect.

This is most often noticed when a topic starts to pop up in the suggestions for one FP member that is totally odd for them — for example, a subject that they never view videos about. And it turns out — if the members of the FP compare notes — that some other member of the plan was watching videos on that topic, and the YT videos/channels being watched by FP member A are showing up in the suggestions for FP member B. And so on.

Most of the time this isn’t a serious concern, and can even be interesting in terms of surfacing new topics. But of course there are intrinsic privacy considerations as well. It isn’t good policy for the YT viewing habits of different family members to be intermingled in that way, without their specifically asking for such sharing. The potential family problems that could occur as a result in some cases are fairly obvious.

This has been going on with Family Plan for years, and I’ve brought this up with Google/YT myself in the past. And the responses I’ve always gotten back have either been that “it can’t happen” or “it shouldn’t happen” and … that’s pretty much where it’s been left hanging each time.

But it does still happen (I have a new report just this morning) and yeah, it really shouldn’t.

Again, not an enormous problem in the scheme of things, but not trivial either, and it’s something that definitely should be fixed.

–Lauren–

Artificial Intelligence at the Crossroads

Suddenly there seems to be an enormous amount of political, regulatory, and legal activity regarding AI, especially generative AI. Much of this is uncharacteristically bipartisan in nature.

The reasons are clear. The big AI firms are largely depending on their traditional access to public website data as the justification for their use of such data for their AI training and generative AI systems.

There is a strong possibility that this argument will ultimately fail miserably, if not under current laws then under new laws and regulations likely to be pushed through around the world, quite likely in a rushed manner that will have an array of negative collateral effects that could actually end up hurting many ordinary people.

Google for example notes that they have long had access to public website data for Search.

Absolutely true. The problem is that generative AI is wholly different in terms of its data usage than anything that has ever come before.

For example, ordinary Search provides a direct value back to sites through search results pages links — something that the current Google CEO has said Google wants to de-emphasize (colloquially, “the ten blue links”) in favor of providing “answers”.

Since the dawn of Internet search sites many years ago, search results links have long represented a usually reasonable fair exchange for public websites, with robots.txt (Robots Exclusion Protocol) available for relatively fine-grained access control that can be specified by the websites themselves, and which at least the major search firms generally have honored.

But generative AI answers eliminate the need for links or other “easy to see” references. Even if “Google it!” or other forms of “more information” links are available related to generative AI answers at any AI firm’s site, few users will bother to view them.

The result is that by and large, today’s generative AI systems by their very nature return essentially nothing of value to the sites that provide the raw knowledge, data, and other information that powers AI language/learning models. 

And typically, generative AI answers (leaving aside rampant inaccuracy problems for now) are like high school term papers that haven’t even included sufficient (if any) inline footnotes and comprehensive bibliographies with links.

A very quick “F” grade at many schools.

I have proposed extending robots.txt to help deal with some of these AI issues — and Google also very recently proposed discussions around this area.

Giving Creators and Websites Control Over Generative AI:
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai
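As a very rough sketch of the kind of control I have in mind (and to be clear, the AI-specific user-agent token below is purely hypothetical, not any adopted standard), a site might express something like:

```text
# Hypothetical robots.txt sketch. "ExampleAI-Training" is an
# illustrative token only; no such standard name has been adopted.

# Conventional search crawling continues as usual:
User-agent: Googlebot
Allow: /

# But the site opts out of generative AI training crawls entirely:
User-agent: ExampleAI-Training
Disallow: /
```

The appeal of this approach is that it reuses a mechanism sites already deploy and major crawlers already honor, rather than requiring new infrastructure.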

But ultimately, the “take — and give back virtually nothing in return” modality of many AI systems inevitably leads toward enormous pushback. And I do not sense that the firms involved fully understand the cliff that they’re running towards in a competitive rush to push out AI systems long before they or the world at large are ready for them.

These firms can either grasp the nettle themselves and rethink the problematic aspects of their current AI methodologies, or continue their current course and face the high probability that governmental and public concerns will result in major restrictions to their AI projects — restrictions that may seriously negatively impact their operations and hobble positive AI applications for users around the world long into the future.

–Lauren–

Thoughts on AI Regulation

Greetings. The excellent essay:

https://circleid.com/posts/20230628-the-eu-ai-act-a-critical-assessment

(by Anthony Rutkowski) serves to crystallize many of my concerns about the current rush toward specific approaches to AI regulation before the issues are even minimally understood, and why I am so concerned about negative collateral damage in these kinds of regulatory efforts.

There is widespread agreement that regulation of AI is necessary, both from within and outside the industry itself, but as you’ve probably grown tired of seeing me write, “the devil is in the details”. Poorly drafted and rushed AI regulation could easily do damage above and beyond the realistic concerns (that is, the genuine, non-sci-fi concerns) about AI itself.

It’s understandable that the very rapid deployments of AI systems — particularly generative AI — are creating escalating anxiety regarding an array of related real world controversies, an emotion that in many cases I obviously share.

However, as so often happens when governments and technologies intersect, the potential for rushed and poorly coordinated actions severely risks making these situations much worse rather than better. Given what’s at stake, that’s an outcome to be avoided at all costs.

I don’t have any magic wands of course, but in future posts I will discuss aspects of what I hope are practical paths forward in these matters. I realize that there is a great deal of concern (and hype) about these issues, and I welcome your questions. I will endeavor to answer them as best I can. 

–Lauren–

A Proposal for “Enhanced Recovery Services” for Locked Out Google Accounts

This post could get very long very quickly, so instead I’m going to endeavor to keep this introductory discussion brief, with an array of crucial details to come later. 

In my recent posts:

An Example of a Very Sad Google Account Recovery Failure — and How It Affects Real People

https://lauren.vortex.com/2023/05/17/google-account-recovery-failure-sad

and:

Potentially Serious Issues with Google’s Announced Inactive Accounts Deletion Policy

https://lauren.vortex.com/2023/05/16/google-inactive-accounts-deletion

(and frankly, in many related postings over many years in this blog and other venues), I discussed the continuing problems of honest Google users being locked out of their Google accounts, often with a total and permanent loss of all their data (Gmail, photos, Drive files, etc.) that they entrusted to Google.

These lockouts can occur for an array of reasons — problems with login credentials, third-party hacking of accounts including (but not limited to) malware, Google believing that violations of its Terms of Service have occurred, and many other events.

Each of these is an entire complex topic area that I won’t detail in this post.

But the bottom line is that many Google users who feel that they have done nothing wrong find themselves locked out of their accounts — and crucially — their data at Google, and are unable to successfully navigate the existing largely automated account recovery procedures that Google currently provides.

Generally speaking, once a user who has been locked out of a Google account reaches this point, they are, to use the vernacular, SOL — there’s no way to proceed. Usually their data, no matter how important and precious to their lives, is lost to them forever.

To be sure, sometimes the failure to recover a Google account is rooted in the failure of users to provide or keep up to date the recovery information that Google requests for the very purpose of easing account recovery paths.

But the reality is that many users forget about keeping these current, or are reluctant to provide phone numbers and/or alternative email addresses (if they even have them) in the first place. That’s just the way it is.

And ultimately, even at Google’s enormous scale of users who use its services for free, there is something inherently wrong when honest users lose so much of the data from their lives — data that Google has encouraged them to entrust to Google — because of an unrecovered account lockout.

Over and over again — in a manner reminiscent of the film “Groundhog Day” — desperate Google users who have been locked out have asked me if there is someone they could pay to help them. Isn’t there some way, they ask, for Google to do a deeper dive into the circumstances of their lockouts, accept users’ official government IDs as proof, and use other methods to authenticate them back into their Google accounts — as can be done at virtually all financial institutions and most other firms?

Right now the answer is no.

But the answer should be and could be yes, if Google made the decision — by no means a trivial one! — to provide the means for such “enhanced recovery services” for Google Accounts, which in some cases (e.g., when a user is indeed at fault as the root cause of the lockout) could be chargeable (that is, paid) services as a means to help defray the additional costs involved.

This is a very complicated area with an array of trade-offs and nuances. It’s likely to be highly controversial. 

But as far as I’m concerned, the status quo of how Google account recoveries work (or fail) is no longer acceptable, especially in the current regulatory and political environment.

In future discussions, I will detail my thinking of how “enhanced recovery” for Google accounts could be accomplished in practice, and how it would benefit Google’s users, Google itself, and the wider global community that depends upon Google.

Take care, all.

–Lauren–