Big Tech Needs to Vastly Improve Its Public Communications, or Potentially Face a Political Train Wreck Over AI (and More)

In several of my recent posts:

The “AI Crisis”: Who Is Responsible?
https://lauren.vortex.com/2023/04/09/the-ai-crisis-who-is-responsible

State and Federal Internet ID Age Requirements Are Hell-Bent on Turning the Internet Into a Chinese-Style Internet Nightmare
https://lauren.vortex.com/2023/03/23/government-internet-id-nightmare

Giving Creators and Websites Control Over Generative AI
https://lauren.vortex.com/2023/02/14/giving-creators-and-websites-control-over-generative-ai

and others in various venues, I have expressed concern over the “perfect storm” now circling “Big Tech” from both sides of the political spectrum, with Republicans and Democrats proposing (sometimes jointly, sometimes in completely opposing ways) “solutions” to various Internet-related issues, some of which are real and others unrealistically hyped.

The latest flash point is AI (Artificial Intelligence), and especially what’s called generative AI, seen by the public mainly in the form of so-called AI chatbots.

I’m not going to repeat the specifics of my discussions on these various topics here, except in one respect.

For many (!) years I have asserted that these Big Tech firms (notably Google, but the others as well to one degree or another) have been negligently deficient in their public communications, failing to adequately ensure that ordinary non-technical people, and the politicians they elect, understand the true nature of these technologies.

This means both the positive and negative aspects of these technologies. The important point is that the public needs to understand the reality of these systems, rather than being misled by the misinformation, and often politically-biased disinformation, that fills the information vacuum these firms have left. That vacuum exists largely because of a misguided and self-destructive fear of so-called “Streisand Effects”, which the firms worry would be triggered if they discussed these issues in any depth.

It is clear that such fears have done continuing damage to these firms over the years, while robust public communications and public education (not talking down to people, but helping them to understand!) could instead have done enormous good.

I’ve long called for the hiring of “ombudspersons” or liaisons, or whatever you want to call them, dedicated specifically to filling these crucial communications roles.

The situation has become so acute that it may now be necessary to create roles specific to AI-related public communications, to help avoid the worst of the looming public relations and political catastrophes that could decimate the positive aspects of these systems and, over time, seriously damage the firms themselves.

But far more importantly, it is society at large that will inevitably suffer when politics and fear win out over a true understanding of these technologies and how they actually impact our world, in ways both positive and negative, both now and into the future.

The firms need to do this now. Right now. All the greatest engineering in the world will not save them (and us!) if their abject public communications failures continue as they have to date.

–Lauren–

The “AI Crisis”: Who Is Responsible?

There is a sense of gathering crisis around Artificial Intelligence today, involving not just AI itself but also the public’s and governments’ reactions to it, particularly generative AI.

Personally, I place relatively little blame (not zero, but relatively little) on the software engineers and associated persons who are actually theorizing about, building, and training these systems.

Much more blame, and the related central problem of the moment, lies with some non-engineers (e.g., some executives at key levels of these firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing what suddenly seems to be a highly competitive race, in some cases apparently deemphasizing crucial ethical and real-world impact considerations.

While this behavior is understandable in terms of human nature, that does not justify it, and I fear that governments’ reactions are heading toward a perfect storm of legislation and regulation that may prove even more problematic for these firms and the public than the premature release of these AI systems has been. Such a storm could set back by years critical work in AI that could bring great benefits (and yes, risks as well; the two always arrive together with any new technology) to the world.

By and large, the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job of communicating the importance (and limitations!) of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation that gladdens the hearts of political opportunists (on both the Right and the Left) around the planet.

If this doesn’t start changing for the better immediately, today’s controversies about AI are likely to look like firecrackers compared with the nuclear bombs to come.

–Lauren–