Generative AI Is Being Rammed Down Our Throats

The technical term for what's happening right now with Artificial Intelligence, especially generative AI, is NUTS. And I mean it's not just Google, but Microsoft too, with OpenAI's ChatGPT. These firms are pouring out half-baked AI systems and basically trying to ram them down our throats whether we want them or not, by embedding them into everything they can, including in irresponsible or even potentially hazardous ways. And it's all in search of profits, at our expense.

I'll talk specifically about Google Search shortly, but so much of this crazy stuff is being deployed. Microsoft wants to use an AI system to record everything you do on a PC. Both Google and Microsoft want AI to listen in on your personal phone calls. YouTube is absolutely flooded with low-quality AI junk videos, making it ever harder to find accurate, useful ones.

Google is now pushing its AI "Help me write" feature, which feeds your text into its AI, from all over the place, including in many Chrome browser context menus, where in some cases it has replaced the standard text UNDO command with "Help me write". And "Help me write" is so easy to trigger accidentally that you could end up feeding personal or business-proprietary information not only into the AI, but also to the human AI trainers who, Google notes, can also see this kind of data.

OK, now about Google Search. For quite some time many people have been noticing a decline in the quality of Google search results, and keep in mind that Google handles the overwhelming majority of searches by Internet users. Google has recently been rolling out what it calls AI Overviews into regular Google Search results. These are AI-generated answers to what now seem like most queries, and they can push the actual site links (the sites from which Google's AI presumably pulled the data to formulate those answers) so far down the page that few users will ever see them. That potentially starves the very sites that provided the data of the user views they need to stay up and running.

Some AI Overview answers do include links, but they're often so dim and obscure that they're almost impossible to see unless you have perfect 20/20 vision and very young eyes. On top of that, many of these AI Overview answers are banal, stupid, and often confused or plain wrong, mixing accurate and inaccurate information, sometimes in ways that could actually be unsafe, for example when they're wrong about health-related questions. This is all very different from the kind of top-of-page answers that Google has provided for some time for straightforward search queries, like math questions, definitions of words, or when a particular film was released.

These AI Overview answers are showing up all over the place and, like I said, much of the time their quality is abysmal. Now of course, if you're not knowledgeable about a subject you're asking about, you might assume a misleading or wrong AI Overview answer is correct, and since Google has now made it less likely that you'll scroll down the page to find and visit sites that may have accurate information, it's a real mess. There are some tricks with Google Search URLs that I've seen to bypass some of this for now, but Google could disable them at any time.
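As one illustration of the kind of URL trick I mean: it has been widely reported that adding the undocumented `udm=14` parameter to a Google Search URL requests the plain "Web" results view, which currently omits AI Overviews. To be clear, this is an unofficial, unsupported behavior that Google could change or remove at any time, and the sketch below is just a minimal way to construct such a URL:

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google Search URL using the widely reported (but
    unofficial and unsupported) udm=14 parameter, which currently
    selects the plain "Web" results view without AI Overviews.
    Google could disable this behavior at any time."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("example query"))
```

Some browsers also let you save this URL pattern as a custom search engine, so every address-bar search uses the web-only view, at least for as long as the parameter keeps working.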

What’s really needed is a way for users to turn all of this generative AI content completely off until such a time, if ever, that a given user decides they want to turn it on again. Or better yet, these AI features should be ENTIRELY opt-in, that is, turned off UNTIL you decide you want to use them in the first place.

So once again we see that fears of superintelligent AIs wiping out humanity are not what we should be worried about right now. What we need to be concerned about are the ways that Big Tech AI companies are hell-bent on forcing generative AI systems into all aspects of our private lives in ways that are often unwanted, confusing, irresponsible, or even worse. And the way things seem to be going right now, there's no indication that these firms are interested in how we feel about any of this.

And that’s not going to change so long as we’re willing to continue using their products without making it clear to them that we won’t indefinitely tolerate their push to stuff generative AI systems into our lives whether we want them there or not.