Understanding Google’s New Advanced Protection Program for Google Accounts


I’ve written many times about the importance of enabling 2-factor authentication on your Google accounts (and other accounts, where available) as a basic security measure, e.g. in “Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good” — https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi — and in other posts too numerous to list here.  

Given this history, I’ve now begun getting queries from readers regarding Google’s newly announced and very important “Advanced Protection Program” (APP) for Google accounts — most queries being variations on “Should I sign up for it?”

The APP description and “getting started” page is at:

https://landing.google.com/advancedprotection/

It’s a well designed page (except for the now usual atrocious low contrast Google text font) with lots of good information about this program. It really is a significant increase in security that ordinary users can choose to activate, and yes, it’s free (except for the cost of purchasing the required physical security keys, which are available from a variety of vendors).

But back to that question. Should you actually sign up for APP?

That depends.

For the vast majority of Google users, the answer is likely no, you probably don’t actually need it, given the additional operational restrictions that it imposes.

However, especially for high-profile users who are most likely to be subjected to specifically targeted account attacks, APP is pretty much exactly what you need, and will provide you with a level of account security unavailable to users at most (if any) other commercial sites.

Essentially, APP takes Google’s existing 2-factor paradigm and restricts it to only its highest-security components. Under conventional 2-factor use on Google accounts, USB/Bluetooth security keys are the most secure option, but other 2-factor options like SMS text messages (to name just one) remain available as well. This provides maximum flexibility for most users, and minimizes the chances of their accidentally locking themselves out of their Google accounts.

APP requires the use of these security keys — the other options are no longer available. If you lose the keys, or can’t use them for some reason, you’ll need to use a special Google account recovery procedure that could take up to several days to complete — a rigorous process to assure that it’s really you trying to regain access to the account.
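The reason security keys sit at the top of the 2-factor hierarchy is worth spelling out: unlike an SMS or app-generated code, a key signs a challenge that is cryptographically bound to the site's origin, so credentials phished on a look-alike site are useless to the attacker. Here's a toy Python sketch of that origin-binding idea — note that it uses a shared-secret HMAC purely for illustration, whereas real FIDO/U2F keys use public-key signatures and the server holds only the public key:

```python
import hmac
import hashlib
import os

# Toy model of a U2F/FIDO-style security key. A real key uses
# public-key signatures; HMAC stands in here purely for illustration.
class ToySecurityKey:
    def __init__(self):
        self._secret = os.urandom(32)  # never leaves the device

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The response covers BOTH the challenge and the origin the
        # browser says it is talking to -- this is the phishing defense.
        return hmac.new(self._secret, origin.encode() + challenge,
                        hashlib.sha256).digest()

def server_verify(key: ToySecurityKey, expected_origin: str,
                  challenge: bytes, response: bytes) -> bool:
    # Toy shared-secret check; a real server verifies a signature
    # against the registered public key instead.
    expected = key.sign(expected_origin, challenge)
    return hmac.compare_digest(expected, response)

key = ToySecurityKey()
challenge = os.urandom(16)

# Legitimate login: the browser reports the real origin.
good = key.sign("https://accounts.google.com", challenge)

# Phishing: same key, same challenge, but the browser was actually on a
# look-alike site, so the signed origin differs and verification fails.
phished = key.sign("https://accounts.goog1e.example", challenge)

print(server_verify(key, "https://accounts.google.com", challenge, good))     # True
print(server_verify(key, "https://accounts.google.com", challenge, phished))  # False
```

An SMS or app code, by contrast, carries no notion of where it is being typed — which is exactly why it can be phished and replayed.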

There are other security-conscious restrictions to your account as well if you enable APP. For example, third-party apps’ access to your account will be significantly restricted, preventing a range of situations where users might otherwise accidentally grant overly broad permissions from outside apps to Google accounts.

It’s important to remember that there do exist situations where you are likely to not be able to use security keys. Public computers (and ironically, computers in high security environments) often have unusable USB ports and have Bluetooth locked in a disabled mode. These can be important considerations for some users.

Cutting to the chase, Google’s standard 2-factor systems are usually going to be quite good enough for most users and offer maximum flexibility — of course only if you enable them — which, yeah, you really should have done by now!

But in special cases for particularly high-profile or otherwise vulnerable Google users, the Advanced Protection Program could be the proverbial godsend that’s exactly what you’ve been hoping for.

As always, feel free to contact me if you have any additional questions about this.

Be seeing you.

–Lauren–

Explaining the Chromebook Security Scare in Plain English: Don’t Panic!

Yesterday I pushed out to various of my venues a Google notice regarding a security vulnerability relating to a long list of Chrome OS based devices (that is, “CrOS” on Chromebooks and Chromeboxes). That notice (which is titled more like a firmware upgrade advisory than a security warning per se) is at:

https://sites.google.com/a/chromium.org/dev/chromium-os/tpm_firmware_update

While that page is generally very well written, it is still quite technical in its language. Unfortunately, while I thought it was important yesterday to disseminate it as quickly as possible, I was not in a position to write any significant additional commentary to accompany those postings at that time. 

Today my inbox is filled with concerned queries from Chromebook and Chromebox users regarding this issue, who found that Google page to be relatively opaque.

Does this bug apply to us? Should we rush to upgrade? What happens if something goes wrong? Should our school be concerned — we’ve got lots of students using Chromebooks, what should we do? Help!

Here’s the executive summary — perhaps the way that Google should have said it: DON’T PANIC! — especially if you have strong passwords. Most of you don’t really have to worry much about this one. But please do keep reading, especially and definitely if you’re a corporate user or someone else in a particularly high security environment.

This is not a large-scale attack vulnerability, where millions of devices can be easily compromised. In fact, even in worst case scenarios, the attack is computationally “expensive” — meaning that much more “targeted” attacks, e.g., against perceived “high-value” individuals, would be the focus.

Google has already taken steps in their routine Chrome OS updates to mitigate some aspects of this problem and to make it an even less practical attack from the standpoint of most individual users, though the vulnerability cannot be completely closed via this approach for everyone.

The underlying problem is a flaw in the firmware (the programming) of a specific chip in these devices, called a TPM. Google didn’t expand that acronym in their notice, so I will — it stands for Trusted Platform Module.

The TPM is a crucial part of the cryptographic system that protects the data on Chrome OS devices. It’s sort of the “roach motel” of security chips — certain important crypto key data gets in there but can’t get out (yet can still be utilized appropriately by the system).

The TPM firmware flaw in question makes the possibility of “brute force” guessing of internal crypto keys more practical in a targeted sense, but again, not at large scale. And in fact, if you have a weak password, that’s a far greater vulnerability for most users than this TPM bug ever would have been. Google’s mitigations of this problem already provide good protection for most individual users with strong passwords.
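The "targeted, not large-scale" point follows from simple arithmetic: when each key must be attacked independently at significant compute cost, total cost scales linearly with the number of victims. The dollar figure below is a hypothetical placeholder, not a published estimate — the point is the scaling, not the number:

```python
# Back-of-the-envelope illustration of why a computationally "expensive"
# per-key attack rules out mass exploitation. COST_PER_KEY_USD is a
# HYPOTHETICAL placeholder, not anyone's published estimate.
COST_PER_KEY_USD = 20_000  # assumed cloud-compute cost to break ONE key

def attack_cost(num_targets: int) -> int:
    # Each weak key must be brute-forced independently, so the
    # attacker's cost scales linearly with the number of targets.
    return COST_PER_KEY_USD * num_targets

print(f"One high-value target: ${attack_cost(1):,}")          # $20,000
print(f"A million devices:     ${attack_cost(1_000_000):,}")  # $20,000,000,000
```

At that kind of per-victim price, only carefully chosen high-value individuals are plausible targets — which is precisely why strong passwords remain the bigger issue for ordinary users.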

C’mon, switch to a strong password already! You’ll sleep better.

It’s really in high security corporate environments and similar situations where the TPM flaw is of more concern, particularly where individual users may be reasonably expected to be targets of security attacks.

Where firms or other organizations are using their own crypto certificates via the TPM to allow corporate or other access (or use “Verified Access” for enterprise-managed authentication), the TPM bug definitely warrants quite serious consideration.

Ordinary users can upgrade their TPM firmware if they wish (in enterprise-managed environments, you will likely need administrative permission to perform this). The procedure uses the “powerwash” function of the devices, as explained on the Google page.

But as also noted there, this is not a risk-free procedure. Powerwash wipes all user data from the device, and devices can fail to boot if things go wrong during the process. There are usually ways to recover even from that eventuality, but you probably don’t want to be in that position if you can reasonably avoid it.

For the record, I am personally not upgrading the TPM firmware on the Chrome OS devices that I use or manage at this time. They all have decent passwords, and especially for remote users I won’t risk the powerwash sequence for now.

I am of course monitoring the situation and will re-evaluate as necessary. Google is working on a way to update the TPM firmware without a powerwash — if that comes to pass it will significantly change the equation. And of course if I had to use any of these devices in an environment where TPM-based crypto certificates were required, I’d consider a powerwash for TPM firmware upgrade to be a mandatory prerequisite.

In the meantime, be aware of the situation, think about it, but once again, don’t panic!

–Lauren–

Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems


I’m really not in a good mood right now and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussions of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!”, my caller reported as seeing highlighted as a discussion point for fabricating lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms that were surreptitiously financed by Putin’s minions, but abuse of “microtargeting” ad systems by USA-based operations — are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, physical mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to ever more precisely target individuals, to sell merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Since the level of interaction granularity is so deep on major search and social media sites, the precision ad targeting opportunities become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the ever-increasingly complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, they tended to be (but certainly were not always) comparatively transparent in terms of who was paying for those ads, and the ads themselves were often aimed at explicit campaign fundraising or pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but rather to literally disseminate ideas — where no actual conversion — no actual purchase per se — was involved at all. Merely showing ads to as many carefully targeted users as possible is the usual goal, though just blasting out an ad willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — contents to differently targeted users, who each only see what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, these firms’ social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and left have engaged in this reprehensible behavior, but the right to a much greater and more effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

Like various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).

Specifically in the context of ads, an obvious necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should have maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate the same ads) can better judge their origins and the veracity of those ads’ contents.

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas — potentially including utterly false ones. This is where the vast majority of fake news, false propaganda, and outright lies have appeared in this context — a category that Russian government trolls apparently learned how to play like a concert violin.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities — for example, to learn that the fictional group in Fresno actually runs out of the Kremlin, or is really some shady racist, alt-right group.

On a parallel track, many of these ads should be blocked before they reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that they will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory, and we’ll be interested to see how well this works in practice.

Major firms’ online ad platforms will undoubtedly need significant and in some cases fairly major changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–

Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem


In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute mean that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is trying again to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.

In the months since last March when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly in respect to monetization and search visibility of such videos. 

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) when major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics that are the most likely to suffer from relatively routine “contamination” of trending lists and results from fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing human review on a frequent basis (appropriate review intervals to be determined). 

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.

By restricting this kind of human intervention to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems must already notify specified Googlers when hardware or software need immediate attention.
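The freeze-and-review policy proposed above can be sketched in a few lines: when a major event fires, the trending list is snapshotted and frozen, any subsequent algorithmic change is rejected unless a human approves it, and the list can be rolled back to its pre-event state. The class and method names here are invented for illustration — this is not any real YouTube interface:

```python
# Sketch of the proposed freeze-and-review policy for trending lists.
# All names are hypothetical, purely to illustrate the mechanism.
class TrendingList:
    def __init__(self, videos):
        self.videos = list(videos)
        self._frozen = False
        self._snapshot = None

    def major_event(self):
        # Freeze immediately and remember the pre-event state so that
        # fake videos that "snuck in" can later be rolled back.
        self._snapshot = list(self.videos)
        self._frozen = True

    def rollback(self):
        # Restore the list as it stood when the event was declared.
        if self._snapshot is not None:
            self.videos = list(self._snapshot)

    def update(self, new_videos, human_approved=False):
        # While frozen, purely algorithmic changes are rejected;
        # only human-authorized updates go through.
        if self._frozen and not human_approved:
            return False
        self.videos = list(new_videos)
        return True

trending = TrendingList(["cat-compilation", "news-recap"])
trending.major_event()
print(trending.update(["fake-conspiracy-clip"]))                    # False: blocked
print(trending.update(["verified-coverage"], human_approved=True))  # True: allowed
```

The key design point is that the expensive resource — human reviewers — is only consumed while a freeze is active, which is what keeps the scheme scalable.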

Much the same kind of priority-based paradigm should apply to quickly bring humans into the loop when major news events otherwise could trigger rapid degeneration of trending lists and results.

–Lauren–

How to Fake a Sleep Timer on Google Home


UPDATE (October 17, 2017): Google Home, nearly a year after its initial release, finally has a real sleep timer! Some readers have speculated that this popular post that you’re viewing right here somehow “shamed” Google into final action on this. I wouldn’t go that far. But I’ll admit that it’s somewhat difficult to stop chuckling a bit right now. In any case, thanks to the Home team!

– – –

I’ve long been bitching about Google Home’s lack of a basic function that clock radios have had since at least the middle of the last century — the classic “sleep timer” for playing music until a specified time or until a specific interval has passed. I suspect my rants about this have become something of a chuckling point around Google by now.

Originally, sleep timer type commands weren’t recognized at all by GH, but eventually it started admitting that the concept at least exists.

A somewhat inconvenient but seemingly serviceable way to fake a sleep timer is now possible with Google Home. I plead guilty, it’s a hack. But here we go.

Officially, GH still responds with “Sleep timer is not yet supported” when you give commands like “Stop playing in an hour.”

BUT, a new “Night Mode” has appeared in GH firmware, at least since revision 99351 (I’m in the preview program, you may or may not have that revision yet, or it may have appeared earlier in some cases).

This new mode — in the device settings reachable through the Home app — permits you to specify a maximum volume level during specified days and hours. While the description doesn’t say this explicitly, it turns out that this affects music streams as well as announcements (except for alarms and timers). And, you can set the maximum volume for this mode to zero (or turn on the Night Mode “Do Not Disturb” setting, which appears to set the volume directly to zero).

This means that you can specify a Night Mode activation time — with volume set to minimum — when you want your fake “sleep timer” to shut down the audio. The stream will keep playing — using data of course — until the set Night Mode termination time or until you manually (e.g., by voice command) set a higher volume level (for example, in the morning). Then you can manually stop the stream if it’s still playing at that point.
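The effective behavior of this hack reduces to a simple rule: during the Night Mode window, output volume is capped at the configured maximum (zero, for our purposes), while the stream itself keeps playing. A toy sketch of that logic, with illustrative times and names (not any actual Google Home internals):

```python
from datetime import time

# Toy model of the Night Mode "fake sleep timer" described above: the
# stream keeps playing, but output volume is capped (here, to zero)
# during the configured window. Times and names are illustrative only.
NIGHT_START, NIGHT_END = time(23, 0), time(7, 0)
NIGHT_MAX_VOLUME = 0  # volume cap that silences the still-running stream

def in_night_window(now: time) -> bool:
    if NIGHT_START <= NIGHT_END:
        return NIGHT_START <= now < NIGHT_END
    # Window crosses midnight (e.g., 23:00 -> 07:00).
    return now >= NIGHT_START or now < NIGHT_END

def effective_volume(user_volume: int, now: time) -> int:
    if in_night_window(now):
        return min(user_volume, NIGHT_MAX_VOLUME)
    return user_volume

print(effective_volume(6, time(23, 30)))  # 0: audio silenced, stream still playing
print(effective_volume(6, time(8, 0)))    # 6: normal daytime volume
```

This also makes the hack's main cost visible: since only the volume is capped, the stream keeps consuming data all night until you stop it or the window ends.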

Yep, a hack, but it works. And it’s the closest we’ve gotten to a real sleep timer on Google Home so far.

Feel free to contact me if you need more information about this.

–Lauren–

Major Porn Site’s Accessibility Efforts Put Google to Shame

You just can’t make this stuff up. By now you’ve perhaps become somewhat weary of my frequent discussions of Google’s growing accessibility failures, as their site documentation, blogs, and user interfaces continue to devolve in ways that severely disadvantage persons with less than perfect vision or who have other special needs — a rapidly growing category of users that Google just doesn’t seem to consider worthy of their attention. Please see:  “How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)” — https://lauren.vortex.com/2017/06/26/how-google-risks-court-actions-under-the-ada-americans-with-disabilities-act — and other posts linked therein.

Now comes word of a major site that really understands these issues — that really does appreciate these accessibility concerns and is moving in the correct direction, in sharp contrast (no pun intended) with Google.

Here’s the kicker — it’s a porn site — supposedly the planet’s largest “adult entertainment” site, in fact. While I’m not a user of the site myself, tech news publications have confirmed the details of the accessibility press release that Pornhub distributed a few days ago.

Pornhub has rolled out world-class accessibility options across its platform, including visual element changes, narrated videos, and a wide array of keyboard shortcuts. “Enhancing the ability to contrast colors or to flip the text color and the background color or things like that can be very helpful to people who have low vision, which means they’re legally and functionally blind but they have some vision left,” says Danielsen. “Maybe they’re not using text to speech or braille to read the site.”

Bingo. They get it. These are the kinds of options I’ve been urging Google to provide for ages for their desktop services, to no avail.

At first glance, one might wonder why the hell a pornography site would be able to figure this out while Google, comprising some of the smartest people on the planet, keeps moving in exactly the wrong direction when it comes to major accessibility concerns.

Perhaps the explanation is that Google is great at technology but not so great when it comes to understanding the needs of people who aren’t in their target demographics. 

On the other hand, a successful porn site must by definition understand what their users of all stripes want and need. Porn is very much a people-oriented product.

I’m still convinced that the great Googlers at Google can get this right, if they choose to do so and allocate sufficient resources to that end. 

You’re probably expecting some sort of pun to close this post. Accessibility is a serious issue, and when a porn site tops Google in this important area, that’s a matter for sober deliberation, not for screwing around. After all, sometimes a cigar is indeed just a cigar.

–Lauren–

How the Alt-Right Plans to Control Google

The European Union is threatening massive fines against firms like Google, Facebook, and Twitter if contents that the EU considers to be “forbidden” aren’t removed quickly enough. The EU (and now some non-EU countries) are demanding the right to impose global censorship on Google, proclaiming that nobody on Planet Earth can be permitted to view materials that these individual countries wish to hide from their own citizens’ eyes.

The U.S. Congress is feigning newfound horror at their sudden realization that yes, Russians did influence the 2016 elections, and is now suggesting that only our brilliant politicians and bureaucrats know how to fix the problem.

Meanwhile, the horned and spiked-tail demons of the alt-right like Steve Bannon are promoting a wet dream of converting firms like Google into “public utilities” — where search results would be micromanaged for the benefit of racist, sexist antisemites like themselves — and for his president apparently residing amidst a chalked pentagram in the Oval Office.

The common thread that defines this tapestry is third parties demanding to control what these firms are permitted to let you see, to strip these firms of their rights to decide what sorts of contents they do or do not wish to host.

We’ve seen this attack ramping up for years. Russia and China are obvious offenders. China’s vast Internet censorship regime is without equal and is the model to which most other countries’ Internet censorship dreams aspire. Where the technology for censorship is less advanced, the reliable mechanism of nightmarish fear can be employed — like Thailand’s recent sentencing of a man to 35 years in prison for Facebook posts critical of their damnable monarchy.

We’ve watched the EU’s escalating demands for years, knowing full well that they’d never be satisfied without the powers of global censorship being bestowed unto them.

And now joining the information control chorus are those worst elements of the alt-right. They’re combining forces with an array of other parties who just can’t get it through their thick skulls that their calls for “search transparency and equality” would result in a lowering of search quality for users to an extent that you might as well try to pick out quality websites from an old copy of the Yellow Pages.

Their collective goal is to create a playground for the worst of low quality sites, scammers, crooks, racists, fake news purveyors, and the rest of their similarly decrepit lowlife scumbags.

The alt-right really started to engage on this when firms like Google and Facebook (and to a lesser extent Twitter) recently and wisely ramped up enforcement of their longstanding prohibitions against hate speech and associated garbage, and began seriously clamping down on fake propaganda search listings and posts.

This terrifies the alt-right. They’ve built their entire business model on leveraging these platforms to spew forth their hateful and lying bile, and feel threatened at the prospect of their diseased spigot being closed off. But they’re still smart enough to align their rants with those on the far left who similarly wish to impose their own viewpoints and censorship regimes onto the rest of us.

The results can be dripping with irony. The calls for making firms like Google “public utilities” are particularly laughable, especially given that right wing politicians have long fought against public utility designations for dominant ISPs — who have spent many decades carving out geographic physical fiefdoms void of competition — where their predatory pricing policies could be maintained.

Yet anyone on the planet who has Internet access can freely connect to firms like Google, Facebook, and Twitter — and use these firms’ services without charge — unless their own governments themselves try to block them! Not only is there no possible case for such firms to be considered as public utilities, but there is no historical precedent of any kind on which to base such a concept.

Once again, it’s all really about governments and bottom-of-the-barrel miscreants trying to impose information control on the rest of us.

The scammers and crooks want their sites high in search results. The racists and other hatemongers want to disseminate their filth without limits. Russian trolls squirm at the prospect of not being able to as easily illicitly influence future elections. Politicians dream of imposing ever more total global censorship.

None of these evil players want firms like Google to have the continued ability to control the data on their own platforms for the benefit of users overall and for the broader community.

It’s through their politically motivated, falsified “public interest” claims that the alt-right and other malevolent forces are plotting to control Google, Facebook, Twitter, and more. The thirst for control over these firms even transcends these groups’ individual political differences in many cases.

It is up to us to derail these plots, to not be taken in and rolled over by their propaganda and lies, irrespective of our own political and social affiliations.

With strikingly few exceptions, pretty much every time that governments become involved in controversies relating to information control or technology policies, we find that politicians and their minions manage to royally screw up everything, often for everyone except (oh so conveniently) themselves.

We won’t be fooled again.

–Lauren–

Why Won’t Roku Talk About Their Privacy Policies?

UPDATE (November 4, 2017): I ultimately was able to get specific answers from Roku to my questions, via their corporate representatives. The bottom line is that based on that information, I do not consider Roku (or other popular streaming devices) to be suitable for the kind of applications described below, for a variety of reasons. I recommend non-networked, standalone media players (~$30 or less) and an ordinary HDMI cable for these situations.

– – –

Roku makes some excellent, inexpensive video streaming products. I actually have both a Roku Stick and a great Google Chromecast — they each have somewhat different best use cases.

Some days ago the chief security officer at a large firm contacted me with a question about a potential use for Roku units in a corporate environment. They already had Roku boxes or sticks on most of their meeting room monitors, and were concerned about a specific security/privacy issue.

Essentially, they were considering use of the existing Roku units — in conjunction with the Roku Media Player app available to download to those units — to display locally created video assets.

My immediate reaction was to discourage this — much preferring a method that was totally under their control with no chance of leakage outside their own networks — even if that meant direct wiring to the displays. But for a number of reasons he insisted that he wanted to explore the use of Rokus in this application.

Unfortunately, figuring out the privacy and security implications of such a course has so far proven to be nontrivial.

The lengthy online Roku privacy policies page goes into a great deal of detail concerning the information that they collect from your devices — Wi-Fi info, channel data, search data, etc. — all sorts of stuff related to viewing of “conventional” Roku-capable streaming channels.

But the Roku Media Player app is different. It doesn’t play external streams; it plays your own video or audio files from your own local server. That Roku privacy page seems to make no specific mention of their Media Player at all.

So I went to the Roku Forum to ask what sorts of data — Usage info? Thumbnail images? EXIF or other metadata? Filenames? — would be collected by Roku (or other third parties) from Roku Media Player usage.

Nothing but crickets. No responses at all. Hmm.

Next, I sent a note with the same information request to the privacy email address that Roku specified for additional questions. 

Silence.

Then I asked on G+ and Twitter. A couple of retweets later, I was contacted by the Roku Support Twitter account. They suggested the privacy email address. When I told them that I’d already tried that, they suggested the Roku legal department email address.

You know where this is going. Still no reply at all.

At this stage I don’t know what’s up with Roku. Are they just so super busy that they can’t at least shoot out an acknowledgement of my queries? Or perhaps they’re scurrying around trying to figure out what their own Media Player actually does before replying to me at all. Or maybe they just hope that I’ll go away if they don’t acknowledge my email. (To paraphrase Bugs Bunny: “They don’t know me very well, do they?”)

To say that this state of affairs doesn’t exactly create a wellspring of confidence in Roku would be a significant understatement. 

Now I want to know the answers to my questions about Roku’s privacy policies irrespective of the query from that original firm that got this all started.

We shall see what transpires.

–Lauren–

When Google Gets Your Location Wrong!

Recently, Google’s desktop news began showing me the weather and local news for Detroit in the state of Michigan, rather than for my corner of Los Angeles as had been Google’s standard practice up to that point. And local Google desktop search results are suddenly all for Detroit instead of Los Angeles — not particularly useful to me.

Meanwhile, my Google Home unit, which always happily reported the weather for my local zip code, now thinks that I’m somewhere in Hawaii instead. And my Chromecast’s screensaver is showing current temperatures that don’t seem to match any of these locales.

What’s going on? Damned if I know! And it’s a real problem, because Google no longer provides any obvious means for you to correct these kinds of errors.

When I started asking around about this, I received a pile of responses from other Google users with similar problems. For some their locations are off a bit, for others way, way wrong, like in my case.

Since some users had actually traveled to those locations at some point in the past, it appears that Google somehow got “stuck” on those old locations. But in my situation, I’ve never been to either Detroit or Hawaii. In fact, I haven’t been out of my L.A. cage in years.

The one device where my location seems to be known correctly by Google at this time is my Android phone — and that’s because the location is being pulled from the phone itself (e.g., the GPS) — as Google itself notes at the bottom of results pages on my phone.

The bottom of those Google pages on desktop says that they’re getting my location from my Internet address. That’s bizarre, since that IP address is quite stable for months at a time, and more to the point, the public IP address geolocation databases I’ve checked all correctly show me in L.A. (either the city in general or more specifically here in the West San Fernando Valley).

At the bottom of those Google pages there is a “Use precise location” link — but as far as I can tell it has no useful effect. Google keeps insisting that I’m in Detroit in all desktop results.

As for the wrong location data now apparently being used by Chromecast and being reported by Google Home … they just add a layer of confused frosting on top of the foundational cake of these annoying Google location errors.

I realize that there are people who make a hobby out of trying to hide their locations from Google — and that’s their choice. But personally, I value the location-based services that Google provides. It’s frustrating to me — and many other users — that Google does not provide some sort of explicit mechanism for us to update this location data when it goes wrong.

One thing’s for sure, I’m not moving to Detroit, or Hawaii. OK, if I had to choose, Detroit is a fine city, but I don’t do well in cold winters, so Hawaii would likely win out.

But since in reality I’m not planning a move from L.A., I’d sure appreciate Google setting my location as being where I actually am, rather than thousands of miles away.

–Lauren–

UPDATE (September 28, 2017): As of yesterday morning, Google had me “on the move” again. My Google desktop services IP address insisted that I was in “San Diego County” — my Google Home claimed that I was in Las Vegas! Well, “getting closer” (to paraphrase Bullwinkle). Then late last night Home switched to my correct location. This morning I found that desktop services now have my location correctly as well. Did the spacetime continuum shift? Did someone at Google hear me? We may never know.

Google’s Gmail Phishing Warnings and False Positives

Recently, Google’s Gmail (and its associated Inbox application) has been tagging messages from my policy-oriented mailing lists (at least one of which has been running for more than a quarter century) as likely phishing attempts — scary red warnings and all!

While I don’t yet understand the entirety of this situation, the circumstances behind one particular category of these seem clear, and I’ll admit that I chuckle a bit every time I think about it now.

One might assume that with Google’s vast AI resources and presumably considerable reputation data relating to incoming mail characteristics, a sophisticated algorithm would be applied to pick out likely email phishing attempts.

In reality, at least in this case, it appears that Google is basically using the venerable old UNIX/Linux “grep” command or some equivalent, and in a rather slipshod way, too.

As you know, I discuss Google policy issues a great deal. Many Google users come to me in desperation for advice on Google-related problems. I write about Google technical matters frequently, as I explained in:

“The Google Account ‘Please Help Me!’ Flood” – https://lauren.vortex.com/2017/09/12/the-google-account-please-help-me-flood

One typical recent message of mine that has often been tagged by Google as a likely phish was:

“Protecting Your Google Account from Personal Catastrophes” –
https://lauren.vortex.com/2017/09/07/protecting-your-google-account-from-personal-catastrophes

Google was apparently convinced that this message was likely a phish, and dramatically warned a subset of my list recipients of this determination.

But as you can see from the message itself, there’s nothing in there asking for users’ account credentials, nothing to suggest that it’s email attempting to fool the recipient in any way.

So why did Google think that this was likely a horrific phishing email?

Here’s why. First, my message had the audacity to mention “Google Account” or “Google Accounts” in the subject and/or body of the message. And second, one of my mailing lists is “google-issues” — so some (digest format) recipients received the email from “google-issues-request@vortex.com” (vortex.com is my main domain of very long standing — it was one of the first 40 dot-com domains ever issued, and I’ve been using it continually since then, more than 30 years).

Note that the character string “google” is on the LEFT side of the @-sign. There’s nothing there trying to fool someone into thinking that the email came from “google.com” or from any other Google-related domain.

Apparently what we’re dealing with here is a simplistic (and frankly, rather haphazard in this respect at least) string-matching algorithm that could have come right out of the early 1970s!
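To make the distinction concrete, here’s a minimal Python sketch contrasting naive substring matching of the sort described above with a domain-aware check. These function names and rules are entirely hypothetical illustrations; Google’s actual classifier is, of course, unknown.

```python
# Hypothetical illustration only -- not Google's actual logic.

def naive_flag(sender: str, subject: str) -> bool:
    """Flag mail whenever the string 'google' appears anywhere in the
    sender address or subject -- 1970s-style substring matching."""
    text = (sender + " " + subject).lower()
    return "google" in text

def domain_aware_flag(sender: str) -> bool:
    """Flag only when 'google' appears in the sender's *domain* (right
    of the @-sign) and that domain is not actually Google-owned --
    i.e., a genuine impersonation attempt."""
    _local, _, domain = sender.lower().rpartition("@")
    return "google" in domain and not (
        domain == "google.com" or domain.endswith(".google.com")
    )

sender = "google-issues-request@vortex.com"

print(naive_flag(sender, "Protecting Your Google Account"))  # flags it: a false positive
print(domain_aware_flag(sender))                             # correctly does not flag
```

The point is that “google” to the left of the @-sign, in an unrelated domain, tells you nothing about impersonation; only the domain portion can.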

I’ll add that I’ve never found a way to get Google to “whitelist” well-behaved senders against these kinds of errors, so some users see these false phishing warnings repeatedly. I’m certainly not going to change the names of my mailing lists or treat the term “Google Accounts” as somehow verboten!

Google of course wants Gmail to be as safe a user environment as possible, and in general they do a great job at this. But false positives for something as serious as phishing warnings are not a trivial matter — they can scare users into immediately deleting potentially useful or important messages unread, and sully the reputations of innocent senders.

If nothing else, Google needs to establish a formal procedure to deal with these kinds of errors so that demonstrably trustworthy senders can be appropriately whitelisted, rather than face these false positive warnings alarming their recipients repeatedly.

And a bit more sophistication in those phishing detection algorithms would be appreciated as well. 

In the meantime, I expect that some of you will again get Gmail phishing warnings — on THIS message. You know who you are. Sorry about that, Chief!

Oh, by the way, Google seems to have recently become convinced that I live either in Detroit or somewhere in Hawaii (I’ve never been to either). I’d probably prefer the latter over the former, but I’m still right here in L.A. as always. Unfortunately, there’s no obvious way these days to correct these kinds of Google location errors, even when your IP address clearly is correctly geolocating for everyone else — as mine is. If you’ve been having issues with Google-determined location being incorrect for you on desktop Google Search, on your phone, on Chromecasts, or with any other devices (e.g. Google Home), please let me know. Thanks.

–Lauren–

Solving the Gmail “Slow Startup” Problem

I’ve been fighting with slow Gmail startups — hangs beginning a few seconds after page initialization and taking a minute or more to release — for quite some time. After some testing with Googler Colm Buckley today, we’ve determined that the problem — in my case at least — was apparently the Hangouts chat panel enabled on the lower left side of the Gmail window.

This appears to be a particular problem when running the Chrome browser. While I’ve also long used the excellent Chrome Hangouts extension, I’ve found the Gmail chat panel handy to keep tabs on the current “presence” status of frequent contacts without having to leave the Hangouts extension window open as well.

As soon as I disabled Chat from the Gmail (gear) settings, the hangs appear to have so far ceased. If you’ve been seeing a similar problem with Gmail, you might want to try this solution. My guess is that Gmail’s old chat panel is on the way toward being deprecated out of existence in any case. Thanks again Colm!

–Lauren–

Google’s Stake Through the Hearts of Obnoxious Autoplay Videos

Yesterday, in “Apple’s New Cookie Policy Looks Like a Potential Disaster for Users” — https://lauren.vortex.com/2017/09/14/apples-new-cookie-policy-looks-like-a-potential-disaster-for-users — I lambasted Apple’s plans to unilaterally deeply tamper with basic Web cookie mechanisms (including first-party cookies) in a manner that won’t actually provide significant new privacy to users in the long run, but will likely create major collateral damage to innocent sites across the Internet. 

I also mentioned that in my view Google has taken a much more rational approach — focused on specific content issues without breaking fundamental network paradigms — and in that context I mentioned their plans to tame obnoxious autoplay videos.

We all know about those videos — often ads — that start blaring from your speakers as soon as you hit a site. Or even worse, videos that lurk silently on background tabs for some period of time and then suddenly blare at you — often with loud obnoxious music. Your head hits the wall behind you. Your coworkers scatter. Your cat violently pops into the air and contemplates horrific methods of revenge.

As it happens, Google has just blogged on this topic, with a rather mundane post title covering some pretty exciting upcoming changes to their Chrome browser.

In “Unified Autoplay” — https://blog.chromium.org/2017/09/unified-autoplay.html — Google describes in broad terms its planned methodologies for automatically avoiding autoplay in situations where users are unlikely to want autoplay active, and also for providing to users the ability to mute videos manually on a per-site basis.

Frankly, I’ve long been lobbying Google for some way to deal with these issues, and I’m very pleased to see that they’ve gone well beyond basic functionality by implementing a truly comprehensive approach.

For most users, once this stuff hits Chrome you probably won’t need to take any manual actions at all to be satisfied with the results. If you’re interested in the rather fascinating technical details, there are two documents that you might wish to visit.

Over on Chromium Projects, the write-up “Audio/Video – Autoplay” — https://sites.google.com/a/chromium.org/dev/audio-video/autoplay — goes into a great deal of the nitty-gritty, including the timeline for release of these features to various versions of Chrome.

Another document — “Media Engagement Index” — https://docs.google.com/document/d/1_278v_plodvgtXSgnEJ0yjZJLg14Ogf-ekAFNymAJoU/edit?usp=sharing — explains the learning and deployment methodologies for determining when a user is likely to want autoplay for any given video. This appears to have been an internal Google doc that was switched to public visibility at some point, so it’s especially Googley reading.
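As a rough illustration of one plausible reading of that document (not Chrome’s actual implementation, and the field names and threshold below are my assumptions for illustration), a per-origin engagement score might track the fraction of visits to a site that included significant media playback:

```python
# Sketch of a per-origin media engagement score. Names and the
# threshold value are illustrative assumptions, not Chrome's.

from dataclasses import dataclass

@dataclass
class OriginStats:
    visits: int = 0
    media_playbacks: int = 0  # visits that included significant playback

    def engagement_score(self) -> float:
        """Fraction of visits where the user actually played media."""
        if self.visits == 0:
            return 0.0
        return self.media_playbacks / self.visits

AUTOPLAY_THRESHOLD = 0.3  # hypothetical cutoff

def allow_autoplay(stats: OriginStats) -> bool:
    """Permit unmuted autoplay only on origins where the user has
    demonstrably engaged with media in the past."""
    return stats.engagement_score() >= AUTOPLAY_THRESHOLD

news_site = OriginStats(visits=50, media_playbacks=2)    # user rarely plays media here
video_site = OriginStats(visits=40, media_playbacks=30)  # user usually watches here

print(allow_autoplay(news_site))   # False
print(allow_autoplay(video_site))  # True
```

The appeal of this kind of scheme is that it learns per user and per site, rather than applying one blanket rule to the whole Web.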

There are two important stakeholder categories here. One is well-behaved sites that need to display their videos (including ads — after all, ads are what keep most major free sites running). And of course, the other stakeholder is the user who doesn’t want their lap ripped open by the claws of a kitty suddenly terrified by an obnoxious and unwanted autoplay video.

The proof will be in actually using these new Chrome features. But it appears that Google has struck a good working balance for a complex equation incorporating both site and user needs. My kudos to the teams.

–Lauren–

Apple’s New Cookie Policy Looks Like a Potential Disaster for Users

UPDATE (September 15, 2017): Google’s Stake Through the Hearts of Obnoxious Autoplay Videos

– – –

Apple wants to play Big Brother. Really Big Brother. Big Brother who knows oh so much more than you do about what you want from your web browsing experience. Apple’s plans for this hostile takeover were actually laid out publicly last June, but the you-know-what is just starting to really hit the fan now.

This is going to eventually sock you in the face if you use Apple’s Safari browser, or even other browsers like Google’s Chrome on iOS 11 devices such as the iPhone (those non-Apple browsers still must use Apple’s WebKit framework on iOS). 

This gets very technical very quickly, so I’m going to try to leave the techie part aside for now as much as possible, and lay out in broad strokes the mess that Apple is about to create for its users — and for the broader Internet.

In a nutshell, Apple has created a nightmarish witch’s brew of a system to ostensibly protect users from web cookies. In the process, they’re going to break stuff left, right, up, down, and in directions you’d need more than three dimensions to describe.

Most of us (except for European Union bureaucrats) are long since past abject and unreasoning fear of web cookies. While they can be abused, they’re also critical for routine operations at most sites, including such basic functions as persistent logins and a long list of other crucial functions. 

Up until now, it has generally been the case that “first-party” cookies — cookies sent by the same site that you’re browsing — are considered to be safe. “Third-party” cookies — coming from other sites — may be completely safe as well (delivering images, enabling cross-site logins, and much more), though they can also have a more checkered reputation when used for tracking purposes (so various controls on third-party cookies have become relatively common).
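The first-party/third-party distinction can be sketched in a few lines of Python. This is a simplification for illustration: real browsers consult the Public Suffix List rather than doing bare suffix matching, so don’t take this as an exact model of any browser’s behavior.

```python
# Simplified illustration: a cookie is "first-party" when the domain
# that set it matches the site in the address bar (or a parent of it).
# Real browsers use the Public Suffix List; this is just the idea.

def is_first_party(page_domain: str, cookie_domain: str) -> bool:
    page = page_domain.lower().lstrip(".")
    cookie = cookie_domain.lower().lstrip(".")
    return page == cookie or page.endswith("." + cookie)

# While browsing https://news.example.com:
print(is_first_party("news.example.com", "example.com"))  # first-party
print(is_first_party("news.example.com", "tracker.net"))  # third-party
```

Under the traditional model, only the second case — the cookie from an unrelated domain — would draw extra scrutiny; Apple’s new scheme reaches into the first case as well.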

But now Apple, in a move that clearly seems to be based more on their public relations needs than on genuine concerns about user privacy, will apparently also be taking default control of first-party cookies, in a manner that could unleash vast collateral damage across the Internet.

Advertising groups are livid, fearing that the new system will decimate even user opt-in ad personalization systems, and end up favoring ads via sites like Facebook and Google where users tend to stay logged in perpetually.

And indeed, an examination of Apple’s specs for their new cookie control system — even after multiple readings — is enough to give you a headache for the ages. Since we hopefully can agree that consistent rules regarding cookie management are important to making modern websites work, then we should also be able to agree that a plan to throw a unilateral monkey wrench into that paradigm is a recipe for user confusion across the board.

Apple’s plan is basically to use an enormously complicated (and basically opaque) system to “mystically divine” whether particular cookies are good or evil, irrespective of how they were served to the user, and then apply Apple’s own rules about how those cookies may be used and how long they may persist, based on (for example) whether you’ve visited a site in the last 24 hours for one classification, or in the last 30 days for another. (Why 24 hours? Why 30 days? ‘Cause Apple says so.)
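As best I can read the described rules, those time windows amount to something like the following sketch. The category names and exact cutoffs here are my own guesses for illustration; Apple’s actual machinery is far more complicated and opaque.

```python
# Illustrative sketch of the 24-hour / 30-day cookie windows described
# above. Category names are mine, not Apple's.

from datetime import datetime, timedelta

def cookie_treatment(last_first_party_visit: datetime,
                     now: datetime) -> str:
    """Classify how a flagged domain's cookies would be handled, based
    on how recently the user visited that domain as a first party."""
    age = now - last_first_party_visit
    if age <= timedelta(hours=24):
        return "usable"      # recent first-party visit: cookies work
    if age <= timedelta(days=30):
        return "restricted"  # kept, but usage is limited
    return "purged"          # too old: cookies are deleted

now = datetime(2017, 9, 14, 12, 0)
print(cookie_treatment(now - timedelta(hours=3), now))   # usable
print(cookie_treatment(now - timedelta(days=10), now))   # restricted
print(cookie_treatment(now - timedelta(days=45), now))   # purged
```

Even in this toy form, the arbitrariness is apparent: a site you visit every few weeks gets different treatment than one you visit daily, regardless of how trustworthy either actually is.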

I don’t have any love for abusive Web ads or secretive tracking — but we also must understand that ads are what pay for most of the Web sites that we expect to use for free. Apple’s approach is incredibly heavy-handed and primed for all manner of creepy undesirable breakages and other negative side-effects affecting honest sites. 

Contrast this with Google’s much more sensible plan to by default block some particular classes of ads in Chrome (obnoxious autoplay videos for example), rather than tampering with the underlying cookie mechanisms on which the foundational structure of most websites now depend.

In the end, of course, the real bad players that Apple claims are its focus will figure out ways to work around Apple’s system, leaving the good websites to deal with broken cookies and confused, upset users.

Going back to its earliest Steve Jobs days, Apple has always been a control freak. “Our hardware! Our OS! You’ll pay through the nose — and you’ll convince yourself that you like it!”

As far as I can discern right now, Apple’s new cookie control scheme is much less about user privacy than it is about Apple trying to take control over basic Internet functionalities — everyone else be damned.

–Lauren–

The Google Account “Please Help Me!” Flood

Since I again started discussing how to protect Google Accounts — e.g. very recently in “Protecting Your Google Account from Personal Catastrophes” — https://lauren.vortex.com/2017/09/07/protecting-your-google-account-from-personal-catastrophes — I’ve been flooded with queries by Google users with confusion over Google Account issues of all sorts.

Most of them have indeed never heard of Takeout or Inactive Account Manager and many are confused about account recovery numbers and addresses, 2-factor setups, usage, and much more.

I even got a note from a Googler thanking me for that article, noting that he had never even heard of Inactive Account Manager himself!

At last count, I have over 140 specific queries (and rapidly rising) on these topics from just the last few days that I’m trying to triage. I can handle most of these through explanations myself — I always try to help where I can — but frankly it’s extremely time-consuming — and doesn’t help to keep the lights on around here.

And it’s just so very wrong that I’m doing this, rather than Google having a staffer filling this kind of role to take care of these Google users — these people in desperate need of such assistance.

I know the excuses and I know the scaling concerns, but it’s shameful nonetheless. If I can do this much myself from the outside, surely Google has the resources to put somebody on the inside — somebody actually paid for their efforts at public outreach — to take care of these dedicated Google users.

I’m sure it’s not a matter of money for Google. They just need to truly care about their users who depend on Google just like the rest of us, but who are being rapidly left behind under the status quo.

C’mon Google! You can do this!

–Lauren–

Protecting Your Google Account from Personal Catastrophes

UPDATE (September 12, 2017): The Google Account “Please Help Me!” Flood

– – –

In response to many queries, I’ve written quite a bit about issues that can sometimes go wrong with Google Accounts, and how to proactively help to avoid these situations, e.g.:

“The Saga of a Locked-Out Google User” – https://lauren.vortex.com/2017/09/05/the-saga-of-a-locked-out-google-user

“I’ve been locked out of my Google account! What can I do? How can I prevent this in the future? HELP!” – https://lauren.vortex.com/archive/001159.html

“Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good.” – https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi

Yet while Google Account problems can sometimes occur despite users’ best efforts, proper use of the tools and systems that Google already provides can go a long way toward avoiding these unfortunate events — with use of recovery addresses/mobile phone numbers, and 2-factor authentication tools among the most important. Unfortunately, many users don’t bother to pay attention to these until *after* they’re having problems.

There are other extremely useful Google tools for protecting your Google Account as well, and like so many good Google offerings, the firm (for reasons difficult for many observers to fathom) doesn’t always do a particularly good job of publicizing them — demonstrated by the fact that so many even long-time Google users don’t know that these tools exist until I mention them. Let’s cover a few of these.

A biggie is Google Takeout, at:

https://google.com/takeout

This is an incredible resource, providing the capability for you to download virtually all of your data stored at Google — selectively or en masse — across the wide range of Google services. This is a world-class tool — if only every other firm offered something like this. You can download your data to take it elsewhere, or just on general principles if you prefer. It’s up to you. The next time that some Google Hater starts ranting the lie that Google somehow locks up your data, you’ll know how to respond to them.  

One limitation to Takeout is that you must use it while you still have access to your Google Account. If you’re locked out or otherwise unable to use the account, you can’t access Takeout to reach your data.

So what happens to your data if you’re in an accident, or become ill, or worse? Nobody likes to think about these sorts of possibilities, but they’re very real.

Google’s “Inactive Account Manager” is the tool that lets you proactively plan for such situations:

https://support.google.com/accounts/answer/3036546

This tool lets you designate a Trusted Contact who will have access to the parts of your Google data that you specify, if your Google Account becomes inactive for a period of time that you indicate. With so much of our lives online now, this is an extremely important tool that you’ve likely never heard of before. 

But remember, like with Takeout, you must set it up *before* the need to actually use it arises.

Related to Inactive Account Manager, there is another Google Accounts associated link that none of us ever wants to visit, though realistically many of us may eventually need to.

A Google form to “Submit a request regarding a deceased user’s account” exists at:

https://support.google.com/accounts/troubleshooter/6357590 

Its purpose is self-explanatory, and as it notes, proactive use of Inactive Account Manager can avoid needing this form in many situations — but Google has provided this form as a means to communicate with them directly in these circumstances when necessary.

Google has obviously given a lot of thought to these issues, and their teams have put a lot of work into implementations and deployments of associated services and tools. 

My primary criticisms in this context are that despite these excellent efforts, too many honest users still fall through the cracks and become trapped in account lockout situations through no faults of their own — and often with no perceived practical recourse — and that Google often does a poor job of publicizing the high quality tools that they have already created to deal with a range of user account issues.

Google’s technology is always excellent. Their public communications, outreach, and user support — especially for non-techie users — can be significantly less so.

One thing is certain. Google and its immensely talented Googlers have the capacity to significantly improve in these latter three areas, given the will to do so and an appropriate allocation of resources to these ends.

I have faith that Google will ultimately accomplish this, in the interests of Google itself, for their vast numbers of users, and toward the betterment of the community at large.

–Lauren–

The Saga of a Locked-Out Google User

ALSO (March 25, 2016): I’ve been locked out of my Google account! What can I do? How can I prevent this in the future? HELP! 

– – –

With the help of the Google Accounts team (thanks!), whom I reached through my informal channels at Google, a desperate user who contacted me — locked out of her G account since before the Labor Day weekend — including all of her associated personal and business data — has now been restored to full access.

This is by no means the first time that I’ve been involved in such a situation. In fact I’ve proceeded this way on multiple occasions when Google users reach out to me in desperation, after failing with all of the “normal” Google account recovery methods — through no fault of their own.

I am glad to help when I can, but as I emphasize to them, I do not currently have any official connection with Google, and I cannot guarantee any particular results.

Even more to the point, I shouldn’t be needed to do this at all!

Google Account recovery procedures and appeal flows should be designed to deal with these situations correctly in the first place.

It’s wrong that users feel it necessary to come to me with these kinds of Google problems, having gotten my name from their friends, web pages, or stories they heard on the radio.

By the time that they reach me, they’re upset beyond measure, and feel that Google has abandoned them.

Google can do a whole lot better!

Again, thanks very much to the Accounts Team and everyone else at G who helped to get this user back online with her Google account.

–Lauren–

The Vile Monster Donald Trump Stabs the DACA Dreamers in their Backs

Donald Trump has now clearly revealed himself to be the monster that so many of us have long suspected him to be. A vile, racist, lying creature of evil.

Remember how he kept saying that the DACA Dreamers would be OK? These are 800 thousand kids who know no home other than the USA — brought here very young by their parents and not of their own free will. Kids who have grown into productive jobs of all kinds — and in our own military protecting our country. Kids who have provided the government with their personal information because they were promised that it would never be used against them.

Remember how Trump repeatedly said that they had nothing to worry about? That he was going to show them great heart?

That’s all now exposed as the worst kind of lies by a man who isn’t even really a man any more — he’s the Gollum of politics, a perverted little creature whose mind has been twisted by greed and power. Trump is such a quivering coward that he sent out his minion Jeff Sessions to announce the termination of DACA — rather than showing up to make the announcement himself.

The GOP must cast him out of the White House as the vermin he is, a malignant disease in the body of this great country. He must be banished back to his world of real estate cons and gold-plated toilets.

And Congress must act to permanently protect the DACA Dreamers. Right now.

Or the same ignominious fate awaiting Trump awaits you all as well.

You can count on it.

–Lauren–

How Twitter Bends over Backwards to Keep Hate Speech Online

Plowing through my inbox this morning, I came upon a disturbing message from a person asking for my help in dealing with a racist, antisemitic Twitter user.

This Twitter user — self-identified as being in South Africa — had tweeted that he considered Jews being processed into lamp shades and soap as positive aspects of the Holocaust.

Twitter’s Terms of Service seem fairly explicit on this score:

Examples of what we do not tolerate includes, but is not limited to behavior that harasses individuals or groups of people with … references to mass murder, violent events, or specific means of violence in which/with which such groups have been the primary targets or victims …

Obviously that South African Twitter user’s tweet falls squarely into this category.

Yet my correspondent insists that they’ve been reporting that user to no avail — the vile tweet is still online.

Or is it?

I can definitely see it from here in L.A.

But when I noted this situation on Google+, within minutes a follower in Germany commented that he couldn’t see it. In fact, it’s specifically marked by Twitter as being “withheld” from him.

He graciously performed a few experiments with a VPN and quickly verified what we both had been suspecting.

Twitter appears to be geoblocking that hate speech in Germany, where strong laws against such speech are on the books, but is permitting that same hate speech to appear elsewhere, even though it clearly is in violation of Twitter’s own stated Terms of Service.

Effectively, Twitter is playing the complicit stooge with this disgusting Twitter user, “bending over backwards” to assure that their antisemitic garbage gets the widest possible global audience, while not running afoul of Germany’s specific laws.

This is a disgrace. It is yet another example of Twitter’s apparent willingness to give racists, antisemites, sexists, bullies, and other purveyors of hateful evil every possible benefit of the doubt.

For all of their talk, it’s clear that in key respects Twitter is still voluntarily tolerating obvious hate on their platform.

Twitter’s management should be ashamed of itself. Twitter’s employees are being humiliated. And the company’s stockholders should feel mortified.

–Lauren–

Why Google Doesn’t Promote Their Great “My Activity” Feature

A couple of evenings ago, during a discussion of Google issues on the national radio venue where I frequently guest, I urged listeners to visit Google’s excellent “My Activity” feature (https://google.com/myactivity).

I’m used to getting “Now I understand, why didn’t anybody ever explain this to me that way before?” emails after my Google discussions, but the response to my mentioning My Activity was very strong and somewhat different, more like “Wow, this is great. Why the hell does Google hide this feature?”

Google doesn’t actually hide it, but the number of persons noting that they’d never before heard of My Activity got me thinking.

In fact, it’s not just radio audiences who seem largely unaware of MA; a surprising number of highly technical, long-term Google users have also expressed surprise when I’ve mentioned it to them.

And this is really a shame, because MA is a fantastic tool, providing users with world-class access to and control over their data on Google, in a comprehensive form that puts most other Internet firms to shame.

I’ve discussed MA in some detail previously, e.g.:

The Google Page That Google Haters Don’t Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google’s “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Yet this still raises the question: while references to MA show up in various Google services and help pages, there’s no evidence I know of that Google has ever mounted a serious, continuing outreach effort to make the general public aware of MA. So it appears that most people still don’t even realize that such an important feature exists. And it’s a feature, I might add, that directly rebuts the false propaganda of the Google Haters.

And therein may be the clue to this mystery. 

Google seems to have, perhaps since its earliest days, a deeply ingrained institutional fear of “Streisand Effect” blowbacks. The firm often appears to believe that having utterly false, damaging propaganda about Google widely circulated is somehow less of a risk than being upfront and direct about complex issues, even when Google is entirely on the side of the angels.

Here’s my guess. I suspect that Google is concerned that too much attention to the comprehensive nature of MA would cause too many users to become concerned regarding the scope of user data being presented, even though MA provides users with the ability not only to view and delete that data as they wish, but also to indicate their ongoing Google data collection preferences.

Google may be concerned that users will be “creeped out” by seeing their search and other activity histories in detail, even though those users are being given complete control over that data in the process.

Obviously I don’t know that these are actually Google’s concerns regarding MA. But to the extent that they might be, I would consider such concerns to be misguided at best, and not beneficial to Google or its users.

I base this largely on the sorts of experiences I noted above. When I “reveal” MA to people — techies and non-techies alike — the response I get is almost always the same — enthusiastic approval. 

And I think that the reason for this is fairly obvious. Most Internet users already assume that a lot of data is being collected from them in the course of providing the services that they depend upon. That horse is long since out of the barn.

The key question now is the degree of control that firms provide users over that data — and this is where Google’s My Activity shines so very brightly as a tremendously user-positive feature.

But yes, people need to know about it before they can use it!

All else being equal, one might assume that Google would prefer users delete as little of their data as possible. The more data Google has, the better it can customize services, train machine learning algorithms, and perform other functions that benefit both Google and its users.

But I would argue that overall, the benefits all around of widespread awareness and use of My Activity far outweigh any perceived negatives, and that, frankly, Google should be out there promoting its availability widely — not depending on third parties like me to sing its praises publicly.

Google will be 20 years old next year. It’s time for Google to outgrow its youthful fear that Streisand Effect blowbacks lurk around every corner. Googlers do great work and Google is a great company. Google should fully embrace the ability of the public to appreciate what Google does, rather than so often treating the public as something to be feared.

–Lauren–