There is a sense of gathering crisis around Artificial Intelligence today: not just around AI itself, but also around the public's and governments' reactions to AI, particularly generative AI.
Personally, I place little blame (not zero, but relatively little) on the software engineers and associated persons who are actually theorizing, building, and training these systems.
I place much more blame, and locate the central problem of the moment, with some non-engineers (e.g., some executives at key levels of firms) who appear to be pushing AI projects into public view and use prematurely, out of fear of losing a suddenly and intensely competitive race, in some cases apparently deemphasizing crucial ethical and real-world impact considerations.
While this behavior is understandable in terms of human nature, that does not justify it, and I fear that governments' reactions are heading toward a perfect storm of legislation and regulation that may prove even more problematic than the premature release of these AI systems has been for these firms and the public. This could set back for years critical work in AI that has the potential to bring great benefits (and yes, risks as well; the two always come together with any new technology) to the world.
By and large, the Big Tech firms working on AI are doing a negligent and ultimately self-destructive job of communicating the importance (and the limitations!) of these systems to the public, leaving a vacuum to be filled with misinformation and disinformation that gladdens the hearts of political opportunists (on both the Right and the Left) around the planet.
If this doesn't start changing for the better immediately, today's controversies over AI are likely to look like firecrackers compared with the nuclear bombs to come.
–Lauren–