UPDATE (April 4, 2019): Google has announced that due to the furor over ATEAC (their newly announced external advisory panel dealing with AI issues), they have dissolved the panel entirely. As I discuss in the original post below, AI is too important for our typical political games — and closed-minded unwillingness to even listen to other points of view — to hold sway, and such panels are potentially an important part of the solution to that problem. As I noted, I disagree strenuously with the views of the panel member (and her organization) who was the focus of the intense criticism that apparently pressured Google into this decision, but I fear that an unwillingness to permit such organizations even to be heard in such venues will come back to haunt us mightily in our toxic political environment.
– – –
Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).
Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).
A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels.
Google announced its own such panel, the "Advanced Technology External Advisory Council" (ATEAC), last week.
Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of Kay Cole James, president of the prominent right-wing think tank the Heritage Foundation. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now withdrawn from ATEAC, apparently due to James' presence on the panel and the resulting protests.
This is all extraordinarily worrisome.
While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of "yes men" in agreement with more left-wing (and, admittedly, my own) philosophies regarding social issues strikes me as vastly more dangerous.
Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.
“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.
Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of "stacking the deck" to favor preconceived outcomes, but would in fact be doing exactly that, opening the firms involved to attacks from haters and pandering politicians who would love to impose draconian regulatory regimes for their own benefit.
The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.
I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.
We can certainly argue about the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice.
AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.