A Major New Privacy-Positive Move by Google

Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:

The Google Page That Google Haters Don’t Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google’s “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Today Google announced a new feature that I’ve long been hoping for — the option to automatically delete these kinds of data after specified periods of time have elapsed (3-month and 18-month options). And of course, you still have the ability to use the longstanding manual features for controlling and deleting such data whenever you desire, as described at the links mentioned above.

The new auto-delete feature will be rolled out over the coming weeks, first to Location History and to Web & App Activity.

This is really quite excellent. It means that you can take advantage of the customization and other capabilities made possible by leaving data collection enabled, but if you’re concerned about longer-term storage of that data, you can activate auto-delete and get the best of both worlds, without needing to manually delete data yourself at intervals.
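To illustrate the general shape of such a mechanism, here is a toy sketch of a retention-based purge in Python. To be clear, this is not Google’s actual implementation; the record structure, field names, and day counts are invented for illustration:

```python
# A toy sketch of a retention-based purge -- NOT Google's implementation.
# Records older than the user's chosen window are dropped on a schedule;
# the record structure and field names here are hypothetical.
from datetime import datetime, timedelta, timezone

RETENTION_OPTIONS = {
    "3 months": timedelta(days=91),   # approximating the announced options
    "18 months": timedelta(days=548),
}

def purge_expired(records, retention, now=None):
    """Keep only activity records newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - retention
    return [r for r in records if r["timestamp"] >= cutoff]

# Example: two hypothetical activity records, purged with the 3-month option.
activity = [
    {"id": 1, "timestamp": datetime(2019, 1, 2, tzinfo=timezone.utc)},
    {"id": 2, "timestamp": datetime(2019, 4, 20, tzinfo=timezone.utc)},
]
recent = purge_expired(activity, RETENTION_OPTIONS["3 months"],
                       now=datetime(2019, 5, 1, tzinfo=timezone.utc))
print([r["id"] for r in recent])  # -> [2]
```

The point of the sketch is simply that once the user picks a window, deletion becomes automatic and recurring, rather than something the user must remember to do.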

Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow. 

My kudos to the Google teams involved!

–Lauren–

Could AI Help Prevent Mass Shootings?

Could machine learning/AI techniques help to prevent mass shootings or other kinds of terrorist attacks? That’s the question. I don’t profess to know the answer — but it’s a question that we as a society must seriously consider.

A notable and relatively recent attribute of many mass attacks is that the criminal perpetrators don’t only want to kill; they want as large an audience as possible for their murderous activities, frequently planning their attacks openly on the Internet, even announcing the start of their killing sprees online, and providing live video streams as well. Sometimes they use private forums for this purpose, but public forums seem to be even more popular in this context, given their potential for capturing larger audiences.

It’s particularly noteworthy that in some of these cases, members of the public were indeed aware of such attack planning and announcements due to those public postings, but chose not to report them. There can be several reasons for this lack of reporting. Users may be unsure whether or not the posts are serious, and don’t want to report someone over a fake attack scenario. Other users may want to report but not know where to report such a situation. And still other users may actually be urging the perpetrator onward to the maximum possible violence.

“Freedom of speech” and some privacy protections are generally viewed as ending where credible threats begin. Particularly in the context of public postings, this suggests that detecting these kinds of attacks before they actually occur may be viewed as a kind of “big data” problem.

Some of the factors that would need to be considered are relatively easy to list.

What level of resources would be required to keep an “automated” watch on at least the public postings and sites most likely to harbor the kinds of discussions and “attack manifestos” of concern? Could tools be developed to help separate faked, forged, or other “fantasy” attack postings (the false positives) from genuine ones? How would such monitoring be extended over time to cover other sites involved in these operations, and how would it resist “gaming” by those attempting to divert these tools away from genuine attack planning?

Obviously — as in many AI-related areas — automated systems would not be adequate by themselves to trigger full-scale alarms. These systems would primarily act as big filters, passing their perceived alerts along to human teams — with those teams making the final determinations as to dispositions and possible referrals to law enforcement for investigatory or immediate preventative actions.
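As a purely conceptual sketch of that filter-plus-human-review architecture, consider the following Python fragment. This is an assumption-laden illustration: the scoring function is a toy stand-in for a trained and audited classifier, and every name and threshold in it is hypothetical.

```python
# A conceptual sketch of the "big filter" pipeline described above. The
# scoring function is a toy stand-in for a trained (and audited) ML
# classifier; the threshold and all names here are hypothetical.
from dataclasses import dataclass, field
from queue import PriorityQueue

REVIEW_THRESHOLD = 0.8  # a policy decision, not a technical constant

@dataclass(order=True)
class Alert:
    sort_key: float  # negated score, so highest-risk posts are reviewed first
    post_id: str = field(compare=False)
    text: str = field(compare=False)

def threat_score(text: str) -> float:
    """Toy stand-in for a classifier estimating P(credible threat)."""
    signal_terms = {"manifesto", "livestream", "attack"}  # illustrative only
    hits = sum(term in text.lower() for term in signal_terms)
    return hits / len(signal_terms)

def triage(posts, review_queue: PriorityQueue) -> None:
    """Filter posts; queue only high-scoring ones for human review."""
    for post_id, text in posts:
        score = threat_score(text)
        if score >= REVIEW_THRESHOLD:
            # The filter never acts on its own: human analysts make the
            # final determination and any referral to law enforcement.
            review_queue.put(Alert(-score, post_id, text))
```

The essential design point is that the automated stage only prioritizes; the authority to act remains entirely with the human teams.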

It can be reasonably argued that anyone publicly posting the kinds of specific planning materials that have been discovered in the wake of recent attacks has effectively surrendered various “rights” to privacy that might ordinarily be in force.

The fact that we keep discovering these kinds of directly related discussions and threats publicly online in the wake of these terrorist attacks suggests that we are not effectively using the public information already available to stop these attacks before they actually occur.

To the extent that AI/machine learning technologies — in concert with human analysis and decision-making — may provide a means to improve this situation, we should certainly at least be exploring the practical possibilities and associated issues.

–Lauren–

Pressuring Google’s AI Advisory Panel to Wear a Halo Is Very Dangerous

UPDATE (April 4, 2019): Google has announced that due to the furor over ATEAC (their newly announced external advisory panel dealing with AI issues), they have dissolved the panel entirely. As I discuss in the original post below, AI is too important for our typical political games — and closed-minded unwillingness to even listen to other points of view — to hold sway, and such panels are potentially an important part of the solution to that problem. As I noted, I disagree strenuously with the views of the panel member (and her organization) who was the focus of the intense criticism that apparently pressured Google into this decision, but I fear that an unwillingness to permit such organizations to even be heard in such venues will come back to haunt us mightily in our toxic political environment.

 – – –

Despite my very long history of enjoying “apocalyptic” and “technology run amok” sci-fi films, I’ve been forthright in my personal belief that AI and associated machine learning systems hold enormous promise for the betterment of our lives and our planet (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

Of course there are definitely ways that we could screw this up. So deep discussion from a wide variety of viewpoints is critical to “accentuate the positive — eliminate the negative” (as the old Bing Crosby song lyrics suggest).

A time-tested model for firms needing to deal with these kinds of complex situations is the appointment of external interdisciplinary advisory panels. 

Google announced its own such panel, the “Advanced Technology External Advisory Council” (ATEAC), last week.

Controversy immediately erupted both inside and outside of Google, particularly relating to the presence of Kay Cole James, president of the prominent right-wing think tank the Heritage Foundation. Another invited member — behavioral economist and privacy researcher Alessandro Acquisti — has now pulled out of ATEAC, apparently due to James’ presence on the panel and the resulting protests.

This is all extraordinarily worrisome. 

While I abhor the sentiments of the Heritage Foundation, an AI advisory panel composed only of “yes men” in agreement with more left-wing (admittedly, my own) philosophies regarding social issues strikes me as vastly more dangerous.

Keeping in mind that advisory panels typically do not make policy — they only make recommendations — it is critical to have a wide range of input to these panels, including views with which we may personally strongly disagree, but that — like it or not — significant numbers of politicians and voters do enthusiastically agree with. The man sitting in the Oval Office right now is demonstrable proof that such views — however much we may despise them personally — are most definitely in the equation.

“Filter bubbles” are extraordinarily dangerous on both the right and left. One of the reasons why I so frequently speak on national talk radio — whose audiences are typically very much skewed to the right — is that I view this as an opportunity to speak truth (as I see it) regarding technology issues to listeners who are not often exposed to views like mine from the other commentators that they typically see and hear. And frequently, I afterwards receive emails saying “Thanks for explaining this like you did — I never heard it explained that way before” — making it all worthwhile as far as I’m concerned.

Not attempting to include a wide variety of viewpoints on a panel dealing with a subject as important as AI would not only give the appearance of “stacking the deck” to favor preconceived outcomes, but would in fact be doing exactly that, opening up the firms involved to attacks by haters and pandering politicians who would just love to impose draconian regulatory regimes for their own benefits. 

The presence on an advisory panel of someone with whom other members may dramatically disagree does not imply endorsement of that individual.

I want to know what people who disagree with me are thinking. I want to hear from them. There’s an old saying: “Keep your friends close and your enemies closer.” Ignoring that adage is beyond foolish.

We can certainly argue regarding the specific current appointments to ATEAC, but viewing an advisory panel like this as some sort of rubber stamp for our preexisting opinions would be nothing less than mental malpractice. 

AI is far too crucial to all of our futures for us to fall into that sort of intellectual trap.

–Lauren–

Don’t Blame YouTube and Facebook for Hate Speech Horrors

Within hours of the recent horrific mass shooting in New Zealand, know-nothing commentators and pandering politicians were already on the job, blaming Facebook, Google’s YouTube, and other large social media platforms for the spread of the live attack video and the shooter’s ranting and sickening written manifesto. 

While there was widespread agreement that such materials should be redistributed as little as possible (except by Trump adviser Kellyanne Conway, who has bizarrely recommended everyone read the latter, thus playing into the shooter’s hands!), the political focus quickly concentrated on blaming Facebook and YouTube for the sharing of the video, in its live form and in later recorded formats.

Let’s be very clear about this. While it can be argued that the very large platforms such as YouTube and Facebook were initially slow to fully recognize the extent to which purveyors of hate speech and lying propaganda were leveraging their platforms, they have of late taken major steps to deal with these problems — especially in the wake of breaking news like the NZ shooting — including specific actions regarding takedowns, video suggestions, and other related issues, as publicly recommended by various observers including myself.

Of course this does not mean that such steps can be 100% effective at very large scales. No matter how many copies of such materials these firms successfully block, the ignorant refrains of “They should be able to stop them all!” continue.

In fact, even with significant resources to work with, this is an extremely difficult technical problem. Videos can be sourced and altered in myriad ways to try to bypass automated scanning systems, and while advanced AI techniques combined with human assets will continually improve these detection systems, absolute perfection is likely not in the cards for the foreseeable future, or more likely ever.
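To make the difficulty concrete, here is a minimal sketch of one relevant family of techniques, perceptual hashing, which can match near-duplicate frames even after re-encoding. This is not how YouTube or Facebook actually do it; it simply illustrates why naive exact-match checks fail and why determined adversaries keep probing for transformations that slip past any given detector.

```python
# Illustrative only: a tiny "difference hash" (dHash) over single frames,
# one of the simpler perceptual-hashing techniques used to match
# near-duplicate images. Real video-matching systems are vastly more
# sophisticated; this just shows why exact checksums aren't enough.
# Assumes the Pillow library; the filenames below are hypothetical.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash of an image."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Frames differing only by re-encoding or slight cropping usually land
# within a few bits of each other; a mirrored, cropped, or overlaid copy
# may not -- which is exactly the cat-and-mouse problem described above.
if hamming(dhash("original_frame.png"), dhash("suspect_frame.png")) <= 10:
    print("likely near-duplicate; queue for human review")
```

Every transformation an uploader applies (mirroring, speed changes, borders, overlays) pushes the hash further from the reference copy, which is why detection can improve steadily yet never reach 100%.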

Meanwhile, other demands being bandied about are equally specious.

Calls to include significant time delays in live streams ignore the fact that these would destroy educational live streams and other legitimate programming of all sorts where creators are interacting in real time with their viewers, via chat or other means. Legitimate live news streams of events critical to the public interest could be decimated.

Demands that all uploaded videos be fully reviewed by humans before becoming publicly available are equally impractical. Even with unlimited resources you couldn’t hire enough people to completely preview the enormous number of videos being uploaded every minute (by recent accounts, on the order of hundreds of hours of new video per minute, implying tens of thousands of reviewers watching continuously just to keep pace). And full previews really would be required — since a prohibited clip could be spliced into otherwise permitted footage — yet there would still be misidentifications.

Even if you limited such extensive preview procedures to “new” users of the platforms, there’s nothing to stop determined evil from “playing nice” long enough for restrictions to be lifted, and then orchestrating their attacks.

Again, machine learning in concert with human oversight will continue to improve the systems used by the major platforms to deal with this set of serious issues.

But frankly, those major platforms — who are putting enormous resources into these efforts and trying to remove as much hate speech and associated violent content as possible — are not the real problem. 

Don’t be fooled by the politicians and “deep pockets”-seeking regulators who claim that through legislation and massive fines they can fix all this.

In fact, many of these are the same entities who would impose global Internet censorship to further their own ends. Others are the same right-wing politicians who have falsely accused Google of political bias due to Google’s efforts to remove from their systems the worst kinds of hate speech (of which much more spews forth from the right than the left).

The real question is: Where is all of this horrific hate speech originating in the first place? Who is creating these materials? Who is uploading and re-uploading them?

The problem isn’t the mainstream sites working to limit these horrors. By and large it’s the smaller sites and their supportive ISPs and domain registrars who make no serious efforts to limit these monstrous materials at all. In some cases these are sites that give the Nazis and their ilk a nod and a wink and proclaim “free speech for all!” — often arguing that unless the government steps in, they won’t take any steps of their own to control the cancer that metastasizes on their sites. 

They know that at least in the U.S., the First Amendment protects most of this speech from government actions. And it’s on these kinds of sites that the violent racists, antisemites, and other hateful horrors congregate, encouraged by the tacit approval of a racist, white nationalist president.

You may have heard the phrase “free speech but not free reach.” What this means is that in the U.S. you have a right to speak freely, even hatefully, so long as specific laws are not broken in the process — but this does not mean that non-governmental firms, organizations, or individuals are required to help you amplify your hate by permitting you the “reach” of their platforms and venues.

The major firms like Google, Facebook, and others who are making serious efforts to solve these problems and limit the spread of hate speech are our allies in this war. Our enemies are the firms that either blatantly or slyly encourage, support, or tolerate the purveyors of hate speech and the violence that so often results from such speech.

The battle lines are drawn. 

–Lauren–

As Google’s YouTube Battles Evil, YouTube Creators Are at a Crossroads

UPDATE (February 28, 2019): See YouTube’s follow-up announcement: “More updates on our actions related to the safety of minors on YouTube”

 – – –

For vast numbers of persons around the globe, YouTube represents one of the three foundational “must have” aspects of a core Google services triad, with the other two being Google Search and Gmail. There are many other Google services of course, but these three are central to most of our lives, and I’d bet that for many users of these services the loss of YouTube would be felt even more deeply than the loss of either or both of the other two!

The assertion that a video service would mean so much to so many people might seem odd in some respects, but on reflection it’s notable that YouTube very much represents the Internet — and our lives — in a kind of microcosm.

YouTube is search, it’s entertainment, it’s education. YouTube is emotion, nostalgia, and music. YouTube is news, and community, and … well the list is almost literally endless.

And the operations of YouTube encompass a long list of complicated and controversial issues also affecting the rest of the Internet — decisions regarding content, copyright, fair use, monetization and ads, access and appeals, and … yet another very long list.

YouTube’s scope in terms of numbers of videos and amounts of Internet traffic is vast beyond the imagination of mere mortals, with the exception of Googlers like the YouTube SREs themselves, who keep the wheels spinning for the entire massive mechanism.

In the process of growing from a single short video about elephants at the zoo (more about that 2005 video in a moment) into a service that I personally can’t imagine living without, YouTube has increasingly intersected with the entire array of human social issues, from the most beatific, wondrous, and sublime — to the most crass, horrific, and evil.

I’ve discussed all of these aspects of YouTube — and my both positive and negative critiques regarding how Google has dealt with them over time — in numerous past posts over the years. I won’t even bother listing them here — they’re easy to find via search.

I will note again though that — especially of late — Google has become very serious about dealing with inappropriate content on YouTube, including taking some steps that I and others have long been calling for, such as removal of dangerous “prank and dare” videos, demonetization and general de-recommendation of false “conspiracy” videos, and — just announced — demonetization and other utterly appropriate actions against dangerous “anti-vaccine” (aka “anti-vaxx”) videos.

This must be an even more intense time than usual for the YouTube policy folks up at YouTube HQ in San Bruno, because over the last few days yet another massive controversy has erupted — one that has been bubbling under the surface for a long time and has now burst forth dramatically and rather confusingly: the “hijacking” of the comment sections of innocent YouTube videos by pedophiles.

YouTube comments are a fascinating example of often stark contrasts in action. Many YouTube viewers just watch the videos and ignore comments completely. Other viewers consider the comments to be at least as important as the videos themselves. And many YouTube uploaders — I’ll refer to them as creators for the rest of this post — are effectively oblivious to comments even on their own videos, which, given that the default setting for YouTube videos is to permit comments without any moderation, has become an increasingly problematic issue.

My own policy (adopted as soon as the functionality became available) has always been to set my own YouTube videos to “moderated” mode — I must approve individual comments before they can appear publicly. But that takes considerable work, even with relatively low-viewership videos like mine. Most YouTube creators likely never change the default comments setting, so comments of all sorts can appear and accumulate largely unnoticed by most creators.
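For creators comfortable with scripting, parts of this workflow can in principle be automated through the public YouTube Data API v3. The sketch below is a rough illustration, not Google’s tooling: it assumes the google-api-python-client package and OAuth credentials with the appropriate scope, and omits pagination and error handling.

```python
# A rough sketch (not Google's own tooling) of scripting the "moderated
# comments" workflow via the public YouTube Data API v3: list comments
# held for review on a video, then publish or reject each one.
# Assumes the google-api-python-client package and OAuth credentials
# with the youtube.force-ssl scope; pagination and error handling omitted.
from googleapiclient.discovery import build

def review_held_comments(credentials, video_id: str) -> None:
    youtube = build("youtube", "v3", credentials=credentials)
    response = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        moderationStatus="heldForReview",
        textFormat="plainText",
        maxResults=100,
    ).execute()
    for thread in response.get("items", []):
        top = thread["snippet"]["topLevelComment"]
        text = top["snippet"]["textDisplay"]
        choice = input(f"Approve? [y/N] {text[:80]!r} ").strip().lower()
        youtube.comments().setModerationStatus(
            id=top["id"],
            moderationStatus="published" if choice == "y" else "rejected",
        ).execute()
```

Even with such scripting aids, the underlying labor problem remains: someone still has to read each held comment, which is exactly why most creators never leave the defaults.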

In fact, a few minutes ago, when I took another look at that first YouTube video (“Me at the zoo”) to make sure I had the date correct, I noticed that it now has (as I type this) about 1.64 million comments. Every 5 or 10 seconds a new comment pops up there, virtually all of them either requests for viewers to subscribe to other YouTube channels, or various kinds of more traditional spam and scams.

Obviously, nobody is curating the comments on this historic video. And this is the same kind of situation that has led to the new controversy about pedophiles establishing a virtual “comments network” of innocent videos involving children. It’s safe to assume that the creators of those videos haven’t been paying attention to the evil comments accumulating on those videos, or might not even know how to remove or otherwise control them.

There have already been a number of rather wild claims made about this situation. Some have argued that YouTube’s suggestion engine is at fault, for recommending more similar videos that have then in turn had their own comments subverted. I disagree. The suggestion algorithm is merely recommending more innocent videos of the same type. These videos are not themselves at fault; the commenters are the problem. In fact, if YouTube videos didn’t have comments at all, evil persons could simply create comments on other (non-Google) sites that provided links to specific YouTube videos.

It’s easy for some to suggest simply banning or massively restricting the use of comments on YouTube videos as a “quick fix” for this dilemma. But that would drastically curtail the usefulness of many righteous videos.

I’ve seen YouTube entertainment videos with fascinating comment threads from persons who worked on historic movies and television programs or were related to such persons. For “how-to” videos on YouTube — one of the most important and valuable categories of videos as far as I’m concerned — the comment threads often add enormous value to the videos themselves, as viewers interact about the videos and describe their own related ideas and experiences. The same can be said for many other categories of YouTube videos as well — comments can be part and parcel of what makes YouTube wonderful.

To deal with the current, highly publicized crisis involving comment abuse — which has seen some major advertisers pulling their ads from YouTube as a result — Google has been disabling comments on large numbers of videos, and is warning that if comments are turned back on by these video creators and comment abuse occurs again, demonetization and perhaps other actions against those videos may occur.

The result is an enormously complex situation, given that in this context we are talking almost entirely about innocent videos where the creators are themselves the victims of comment abuse, not the perpetrators of abuse.

While I’d anticipate that Google is working on methods to better filter comments algorithmically at scale, to try to help avoid these comment abuses going forward, this still likely creates a situation where comment abuse could in many cases be “weaponized” to target innocent individual YouTube creators and videos, to try to trigger YouTube enforcement actions against those innocent parties.

This could easily create a terrible kind of Hobson’s choice. For safety’s sake, these innocent creators may be forced to disable comments completely, in the process eliminating much of the value of their videos to their viewers. On the other hand, many creators of high viewership videos simply don’t have the time or other resources to individually moderate every comment before it appears.

A significant restructuring of the YouTube comments ecosystem may be in order, to permit the valuable aspects of comments to continue on legitimate videos, while still reducing the probabilities of comment abuse as much as possible. 

It might be necessary to consider permanently changing the default comments setting away from “allowed” — to either “not allowed” or “moderated” — for new uploads (at least for certain categories of videos), especially for new YouTube creators. But given that so many creators never change the defaults, the ultimate ramifications and possible unintended negative consequences of such a significant policy change are difficult to predict.

Improved tools to aid creators in moderating comments on high-viewership videos would also be worth pursuing — perhaps by leveraging third-party services or trusted viewer communities.

There are a variety of other possible approaches as well.

It appears certain that both YouTube itself and YouTube creators have reached a critical crossroads, a junction whose successful navigation will likely require some significant changes going forward, if the greatness of YouTube and its vast positive possibilities for creators are to be maintained and grow.

–Lauren–