Google, Personal Information, and Star Trek

Rarely does a day go by when I don’t get an email from some outraged soul who has seen on some wacky site — or perhaps heard on a right-wing radio program somewhere — the lie that Google sells users’ personal information to advertisers. I got a phone call from one such person very recently — an individual who would hardly calm down enough for me to explain that they’d been bamboozled by the Google Haters.

‘Cause Google doesn’t sell your data. Not to advertisers, not to anyone else. But the falsehood that they do so is one of the most enduring fabrications about Google.

To be sure, Google is partly responsible for the long life of this legend, because frankly they’ve never done a really good job of explaining to non-techies how the Google ad system works, and Google ad relevance is often so accurate that users naturally assume (again, falsely) that their browsing habits or other data were handed over to third parties.

Here’s what actually happens. Let’s say that you work in warp engine design and testing. So you’re frequently using Google to search for stuff like antimatter injectors and dilithium crystals.

Now you start seeing “top of page” search results ads from some mining operation on Rigel XII for exactly the raw crystals that you need, and at an attractive price with free shipping, too! (Yes Trekkies, I realize that in this early episode they were actually referred to as “lithium” crystals — go tell it to Spock.)

But you wonder: Did Google provide my search history to those ragtag and somewhat disreputable bachelor miners — out there on a planet that is so windy that you clean pots by hanging them out to be sandblasted?

How else could that ad have been targeted to me?

The answer is simple, and you don’t need a dose of Venus Drug to understand it. (OK, happy now, Trekkies?)

The miners create an ad that is aimed at users who are looking for warp drive paraphernalia, based on the logical keywords — like dilithium, for example.

When Google’s ad personalization algorithms detect that your search terms are relevant to that ad, Google displays it to you. The miners back on Rigel XII don’t even know that you exist at this point. They didn’t display the ad to you, Google did.

Now, if you proceed to click on their ad and visit the miners’ sale site, you’ll be providing more information to them, much as you would when visiting other sites around the Web.

But if you don’t click on the ad, there’s no connection between you and the advertiser.
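
For readers who think in code, here’s a tiny illustrative sketch of the general idea. To be clear, this is not Google’s actual ad system (which is vastly more sophisticated), and the advertiser, keywords, and function names are all made up. The point is simply that the matching happens entirely on Google’s side, so the advertiser never sees your query or your identity:

    # Purely illustrative -- NOT Google's actual ad system.
    # The advertiser supplies only an ad and its target keywords;
    # matching against a user's search happens on Google's side.

    ADS = [
        {"advertiser": "Rigel XII Mining Co.",   # hypothetical advertiser
         "keywords": {"dilithium", "crystals", "warp"},
         "text": "Raw dilithium crystals -- attractive prices, free shipping!"},
    ]

    def select_ads(search_query):
        """Return the text of ads whose keywords overlap the search terms."""
        terms = {t.lower() for t in search_query.split()}
        return [ad["text"] for ad in ADS if ad["keywords"] & terms]

    # Only the ad text travels back to the user; the query stays put.
    print(select_ads("dilithium crystals best price"))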

And you don’t have to simply accept Google’s default handling of ad personalization. Over at:

https://adssettings.google.com

you can change Google ad personalization settings or even disable ad personalization entirely.

So the next time that someone tries to fervently sell you the big lie that Google is selling your personal data, tell them that they’re wrong and that they’re a stick in the Mudd.

Be seeing you.

–Lauren–

To Protect Global Free Speech, Google May Need to Take Some Drastic Actions

Eleven and a half or so years ago, a younger and more darkly bearded version of myself gave an invited talk at Google’s L.A. offices that I called “Internet & Empires” (https://www.youtube.com/watch?v=PGoSpmv9ZVc). Things were still pretty new there — I believe I was the first external speaker that they taped, and since there was no podium yet, I presented the talk while sitting on the edge of a table (which actually turned out to work pretty well).

The talk had been scheduled well in advance, so it was a total coincidence that Google had earlier that day announced their (ultimately ill-fated) agreement with China to censor Chinese search results as demanded by the Chinese government.

I had already planned to talk about topics such as censorship and net neutrality. I even had managed to work in a somewhat pithy reference to the classic 1956 sci-fi film “Forbidden Planet” and the downfall of the Krell.

Back at the time of that talk, I was fairly critical of Google’s privacy and data management practices in some key respects. In ensuing years, Google evolved into a world-class champion for data privacy, user control over data, data transparency, and data portability. I’ve been honored to work with them and to put considerable thought into the complex ways that Google-related issues can be seen as proxies for critical policy issues affecting the entire Internet.

During the talk, I mentioned the newly announced China situation. I explained that while I understood the reasoning behind the decision to launch a censored version of Google Search for China (essentially, that some access to Google Search was better than none, and might help push China toward reforms), I suspected that this effort would end badly.

My main concern was based on history. Once authorities and governments start down the censorship path, they virtually always attempt to expand its reach, both in terms of content and geography. Government censorship is in many ways the classic example of the “camel’s nose under the tent” — you almost inevitably end up with a complete camel smashing everything inside.

And so it was with China and Google. China kept demanding more and more control, more and more censorship. Ultimately, Google reversed their decision, and wisely ceased participation in China’s vast censorship regime. Some other firms have not been as ethical as Google in this regard, and are still kowtowing to China’s censorship czars.

Fast forward to today. Depressingly, we find that in major respects the censorship and net neutrality issues that I discussed more than a decade earlier are in even worse shape now.

Dominant ISPs have been using dishonest political gamesmanship — often outright lies — to trample net neutrality, as if they weren’t already raking in the dough from often captive subscribers.

And in the censorship realm, the threats are more ominous than ever — not just from totalitarian countries like China or Russia, but from western countries as well — like Canada. Like France. And more broadly, from the European Union itself.

Today we’ve learned that Apple has reportedly surrendered to Chinese officials and has suddenly removed VPN apps from the Chinese users’ version of Apple’s App Store. These apps are crucial not only to the free speech of Chinese users but also in many situations to their physical safety in that dictatorial regime.

In some countries, a single Facebook post deemed to be critical of the local royal or elected despot — or other government officials — can trigger decades-long prison sentences.

And even in the so-called “enlightened” western environs of Canada, France, and the European Union more generally, domestic officials are attempting to impose global censorship over Google search results (via the horrific “Right To Be Forgotten” and other twisted means) — all in an effort to each become censors dictating what everyone else on this planet can see.

Success in such efforts would result in a lowest common denominator rush to the bottom, with politicians and other leaders around the world all attempting to cleanse search results of any materials that they find to be politically or otherwise personally offensive — or even simply inconvenient.

Unfortunately, all of this is very much in keeping with the predictions that I made in that Google talk years ago.

And here’s a new prediction. While Google will valiantly battle these oppressive forces in courts, in the long run the masters of censorship will continue to expand their choking grip on free speech globally, unless more drastic measures are deployed by free speech champions.

Imagine that you own a large store stocked with all manner of merchandise for a wide variety of customers. Now let’s say that you had some customers who insisted that they wanted to continue patronizing your store, but that they personally disapproved of various items that you stocked, and demanded that you remove them — even though those items were still very important to the vast majority of your other customers.

Most likely you’d tell the customers making those demands to either grow up — realize that they are not the only fish in the sea — or take their business elsewhere. Period.

This is very much the kind of situation that Google and various other large Internet service firms are now facing. Users around the world demand access to the services that these firms provide, but increasingly their own governments are demanding not only the power to dictate what users in their own countries can access and see, but also the right to censor other users everywhere on Earth.

Here’s my admittedly drastic proposal to deal with these scenarios: Cut those countries off from the associated services. No more Facebook, no more Google Search or Gmail for them. No more cloud services. And so on.

Let these countries’ leaders deal directly with their citizens who would no longer have access to the global services on which they’ve come to depend for their business and personal communications, entertainment, and much more in their daily lives.

Tough love? You betcha. But this could end up being necessary.

If the would-be global censorship czars can’t behave like decent 21st century adults, with an understanding that they do not have the right to dictate planetary content controls, then let them build their own services in their own countries using their own money — but no longer would they be permitted to leverage our services to dictate terms to the rest of us.

Obviously, given the vast sums of money at stake, taking such a path would be a very difficult decision for these firms. But I would assert that permitting domestic governments veto power over your global services will be absolutely deadly in the long run, and that the time to stamp out this malignancy is now, before it spreads even further and achieves the veneer of a new, repressive status quo.

In fact, serious threats of service cutoffs would likely trigger some major rethinking in government circles, well before actual cutoffs would be necessary.

The Chinese “death by a thousand cuts” torture seems applicable here. Given escalating censorship trends, it’s difficult to postulate how to successfully fight this scourge through litigation alone in the long run. Meanwhile, individual censorship orders are likely to expand massively both in scope and number, eating away at global free speech by ever increasing degrees.

While continuing to fight this trend in the courts is of course an appropriate primary tactic, I’m ever more convinced that the sorts of drastic actions outlined above — details to be determined — should be under consideration now, so that rapid deployment is possible if current censorship trends continue unabated.

It is indeed extremely unfortunate that we’ve reached the point where actions such as these must even be seriously contemplated, but that’s the reality that we now are facing.

Be seeing you.

–Lauren–

Google Introduces “Invisible” Gmail Messages!

A Google “Project Fi” user contacted me on Google+ this afternoon, expressing his extreme displeasure at a Gmail message (please see image below) that he received a week or so ago from that project. His comment: “I’m sure they’re trying to tell me something, but I can’t really read it.”

Not surprisingly, it’s in the dandy new Google low-contrast font style — oh so pretty and oh so useless to anyone with less than perfect vision.

Perhaps he saw my recent post “How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)” — https://lauren.vortex.com/2017/06/26/how-google-risks-court-actions-under-the-ada-americans-with-disabilities-act — in any case he thought that I might be able to help with this overall accessibility issue.

Well, I’m willing to keep writing and talking about this until I’m Google blue, green, yellow, and red in the face — but so far, I’m not having much luck.

I’ll keep on tryin’ though!

–Lauren–

How Twitter Killed My Twitter Engagement by Killing Email Notifications

I don’t currently use Twitter all that much — it’s become loaded down with too much timeline content in which I have no interest at all, and I’m far happier with ad-free Google+ (https://google.com/+laurenweinstein). But I do post items that I hope are interesting on Twitter nearly every day, via https://twitter.com/laurenweinstein.

I’m also more than happy to stay engaged with my Twitter followers. Lately though, a number of them have been emailing me and contacting me through other means, asking why I’ve been ignoring their replies and other routine Twitter-based interactions.

The reason is simple — Twitter doesn’t tell me about you any more. At least, not in a way that’s useful to me.

Until early this month, Twitter would send me a short, individual notification email message for mentions, replies, new follows, retweets, and likes. These were easy for me to scan using my available tools as part of my normal email workflow.

But now, Twitter no longer sends these individual notifications. Instead, once a day I receive an utterly useless “digest” from them, only providing me with the counts in each of those categories. No clue as to the contents. For example, the one I received today looks like this:

– 6 new followers – See them
> https://twitter.com/i/notifications
– 32 likes – See them
> https://twitter.com/i/notifications
– 3 replies – See them
> https://twitter.com/i/notifications
– 29 Retweets – See them
> https://twitter.com/i/notifications
– 5 mentions – See them
> https://twitter.com/i/notifications

A Twitter help page claims that this is to reduce my “email clutter.” It wasn’t clutter to me, it was how I stayed on top of my Twitter activities. 

Pretty clearly, this change was made to reduce Twitter’s email load, and to try to drive users back to their site more frequently.

Frankly, I don’t have the time to keep running back to that one ridiculously long page on their site to plow through all of those notifications, which are stuffed in there like a Thanksgiving turkey. I presume this isn’t a big problem for folks who live on Twitter all day long, but it’s a total no-op for me.

So unless somebody knows of a way to get those individual email notifications back again (screen scraping apps, perhaps?), you can safely assume that your Twitter interactions with me will almost certainly be going into a black hole for now.
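
For what it’s worth, rather than screen scraping, one possible do-it-yourself workaround would be to poll the Twitter API for new mentions and generate your own individual notification emails. Here’s a rough sketch of that idea using the third-party Tweepy library — the credentials, addresses, and local mail server are placeholders, it only covers mentions, it’s subject to Twitter’s API rate limits and terms, and it’s not something I’ve deployed myself:

    # Rough sketch: poll Twitter for new mentions and email each one
    # individually, loosely mimicking the old per-event notifications.
    # Requires the third-party Tweepy library; credentials and the SMTP
    # server below are placeholders.

    import smtplib
    from email.message import EmailMessage
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    def email_new_mentions(since_id, to_addr="you@example.com"):
        """Send one short email per mention newer than since_id."""
        newest = since_id
        for tweet in api.mentions_timeline(since_id=since_id):
            msg = EmailMessage()
            msg["Subject"] = "Twitter mention from @" + tweet.user.screen_name
            msg["From"] = "twitter-notify@localhost"
            msg["To"] = to_addr
            msg.set_content(tweet.text)
            with smtplib.SMTP("localhost") as smtp:  # assumes a local mail server
                smtp.send_message(msg)
            newest = max(newest, tweet.id)
        return newest  # save this and pass it back in on the next polling run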

And that’s really a shame. Or to put it another way — shame on you, Twitter.

–Lauren–

Google Asked Me How I’d Fix Chrome Remote Desktop — Here’s How!

Since my posting a few days ago of “Another Google Accessibility Failure: Chrome Remote Desktop” — https://lauren.vortex.com/2017/07/21/google-accessibility-failure-crd — I’ve been contacted by a number of Googlers whom I know, asking me specifically how I’d address the accessibility problems that I noted therein. These queries were all friendly of course — not of the “put up or shut up” variety!

OK, I’ll bite. And Google can have this one for free — but like I’ve said before, this isn’t really rocket science.

Before I begin, I’ll answer another question that a number of readers have posed to me in response to that same post.

Why do I think that Google has so long ignored these sorts of problems with Chrome Remote Desktop (and various other of their products, for that matter)?

Without addressing Chrome Remote Desktop (CRD) or Google products specifically in this context, I’ll offer one possible explanation.

Around the information tech industry, it just isn’t usually considered to be career enhancing to be working on fixes and/or improvements to older software, even when the application in question is fairly important and widely used.

By and large, most software engineers feel that rising on the career ladder is facilitated by working on new and “sexy” projects, not by being assigned to the “maintenance detail” — so to speak.

That’s a difficult corporate cultural problem, but a very important one to solve.

All right, let’s get back to Chrome Remote Desktop.

Here’s how I would fix the most glaring accessibility problem that I’ve noted regarding CRD operations — the damned “share mode ten minute timeout” (which, as I’ve noted in the referenced post above, actually can push users into terrible security practices when they attempt to work around the sorts of problems that the timeout causes — please see that previous post for details about this).

There are various somewhat complex ways to approach this issue — such as permitting the local user (the user who is sharing their screen with a remote user) to specify a desired timeout interval.

But the most straightforward and likely most useful approach for now in most cases would be to permit the local user to simply disable that timeout at the start of individual sessions, for the duration of those sessions.

Currently, when a local user provides a one-time CRD access code to a remote user, and the remote user attempts to connect using that code, the local user is presented with a dialogue box (which varies a bit across operating system platforms) that typically looks like this:

Obviously, only the local user can interact with this dialogue. They click “Share” if they wish to accept the connection.

And this is the obvious location to place a “session timeout disable” flag, as emphasized in my quickie demo mock-up here:

It’s that simple — and that logical. This checkbox (which defaults to timeouts enabled) only applies to the current session. The usual ten minute timeout is automatically restored for the next session.
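
For the sake of concreteness, here’s a minimal sketch in Python of the per-session logic I have in mind. This is obviously not actual Chrome Remote Desktop code, and all of the names are made up — it simply shows that the change amounts to one flag that the local user’s “Share” dialogue sets for the current session only:

    # Illustrative sketch of the proposed per-session timeout logic.
    # Not actual Chrome Remote Desktop code; all names are hypothetical.

    import time

    SHARE_TIMEOUT_SECONDS = 10 * 60   # the current wired-in ten minute limit

    class ShareSession:
        def __init__(self, disable_timeout=False):
            # Set from the proposed checkbox in the local user's Share dialogue.
            # Defaults to False, so timeouts stay enabled unless the local
            # user explicitly opts out for this one session.
            self.disable_timeout = disable_timeout
            self.last_confirmed = time.monotonic()

        def confirm_continue(self):
            """Called when the local user answers the periodic sharing prompt."""
            self.last_confirmed = time.monotonic()

        def should_prompt(self):
            """True if it's time to interrupt with the timeout prompt."""
            if self.disable_timeout:
                return False   # disabled for this session only
            return time.monotonic() - self.last_confirmed >= SHARE_TIMEOUT_SECONDS

    # Each new session constructs a fresh ShareSession, so the usual timeout
    # behavior is automatically restored for the next session.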

– – –

Addendum: As suggested in the first comment on this post today, it would be reasonable to also provide this same timeout disable option in the actual ten minute timeout popup dialogue boxes themselves, so that even if the local user did not disable timeouts at the start of a session, they’d be able to later disable them for the remainder of the session if they so chose.

– – –

We probably should remind the local user that they’ve disabled timeouts for a session. There are logical places to do this, too.

CRD already provides a warning to local users that they’ve shared their desktop. For example, in Chrome OS (e.g. Chromebooks), a dialogue box like this is used:

A notification with the same information is also popped up for the user in the Chrome OS notification area.

The CRD implementations for other OSes behave similarly. Windows systems provide a “currently sharing” dialogue box similar to the above, and also display a “floating” notification:

For all of these info notifications when timeouts have been disabled, it would be straightforward to add within them a text “warning” such as:

Timeouts have been disabled for this session.

If we’re feeling particularly paranoid about this — even given all of the other security elements that have already been satisfied to get the sharing connection open in the first place — we might want to add some sort of new warning box that can’t be covered, minimized, or moved by remote users, providing the same reminder that timeouts have been disabled.

That’s pretty much the whole enchilada. This paradigm should not be difficult to implement, and from security, usability, and accessibility standpoints overall it’s definitely a big plus for Chrome Remote Desktop users.

OK, Google — the ball’s back in your court!

–Lauren–

Another Google Accessibility Failure: Chrome Remote Desktop

Two of the most active Google users whom I know are in their mid-90s. They use Google Search and News. They use Gmail. I moved one of them from Windows to a Chrome OS Chromebox, and I recently showed the other how to use Google Docs as an alternative to paying Microsoft’s ransom for a full-blown version of Office on their new Windows 10 system. Frankly, they’re better versed at using their computers and Google services than some users I know who are less than half their ages.

I support these users — and many other persons among friends, family, and acquaintances whom I also help informally — almost entirely via Google’s Chrome Remote Desktop (CRD). I rarely if ever see various of these folks in person, and some live across the country from me. For the Chromebox and Chromebooks, CRD is the only viable remote sharing option that I know of. For Windows systems I consider CRD overall to be the easiest for talking a user through the initial installation over the phone, and then to use for the majority of associated purposes going forward.

In most respects, CRD is excellent. Data is encrypted, and in most circumstances the data connection is peer-to-peer without going through Google servers. In some configurations, system audio is sent along with the screen image.

But Chrome Remote Desktop also has a horrible, gaping accessibility problem — that has persisted and generated bug threads that in some instances now stretch back unresolved for years — that seriously limits its usefulness for those very users who could most benefit from its use.

And this flaw is unfortunately representative of a rapidly growing class of accessibility failures at Google — in terms of readability, user interface deficiencies, and other related problems — which have been spreading across their entire ecosystem to the dismay of myself and many other observers.

You’ll be hearing a lot more from me about Google accessibility problems — which in some cases may rise to the level of actual discrimination (though I don’t believe that discrimination is Google’s actual goal by any means) —  across their various products and services, but let’s get back to Chrome Remote Desktop for now.

I’ve written about this particular aspect of CRD before, but after spending a couple of hours yesterday battling it with a nonagenarian Google user, it’s time to revisit the topic.

CRD uses a completely reasonable system of credentials matching and/or access/PIN codes to provide session setup authentications. But CRD also imposes a “faux” security feature that ends up driving users crazy, makes CRD much more difficult to use than it should be in many cases, and can actually result in reduced security as users seek workarounds.

There are two basic authentication mechanisms available for CRD. For non-Chrome OS versions, there is a choice between one-time access codes and “persistent” login credentials (the latter is really designed primarily for remote access to your own computers). For Chrome OS installations of CRD, only one-time access codes can be used as far as I know.

For most typical remote support situations, the one-time “share” access codes are the appropriate choice, since sharing of actual Google login credentials should be avoided of course (that’s not to say that this doesn’t sometimes become necessary when trying to help some users, but that’s a matter for a separate discussion).

And herein is the problem that has generated long threads of complaints — as I noted above some going back for years now — on the Chromium support forums, with users pleading for help and Google doing essentially nothing but occasionally popping in to make frankly lame excuses for the current situation.

Unlike in the shared credentials persistent connection model, when using the conventional one-time share codes in the typical manner, the local user (the person sharing their screen) is prompted every 10 minutes or so and must quickly respond to avoid having the connection terminated. There is no way to change this timeout. There is no way to disable it. And when trying to support a panicky or confused user remotely — irrespective of their age — this situation isn’t security — it’s a godawful mess.

Often users just want you to fix up their system remotely while they go do other things. Sitting there or running back every 10 minutes to refresh the timeout is a hassle for them at best, and can be extremely confusing as the associated prompts keep interrupting attempts to repair problems and/or explain to those users the details of their problems.

For some of these users even seeing those prompts is difficult, and as frustration grows over the continuing interruptions they often just want to give up, or keep begging you to do something to stop the interruptions — which is impossible without asking them to share their credentials with you so that you can use persistent mode instead where that’s available.

And sometimes that’s what you end up having to do — violate proper security protocols by having them give you their Google credentials and switching to persistent mode where the interruptions won’t occur (keeping in mind that this apparently isn’t even an available option for Chrome OS — you’re stuck with the interruptions with no way out).

The upshot is clear — a feature that Google apparently thought was a security plus easily becomes a massive security minus.

Now here’s the really sad part. From a technical standpoint, fixing this should be straightforward. The bug discussion threads — which as I’ve said have been largely ignored by Google — are replete with suggestions about this (including by yours truly).

If the view is that having a default connection timeout is a security positive, at the very least provide a means for users to disable and/or change the duration of that timeout as they see fit, either permanently or on a per-connection basis. This could be implemented in a manner such that the setting could only be changed by the local user, not by the remote user.

But in the name of bits, bytes, and beer, don’t keep forcing everyone trying to use Chrome Remote Desktop to fight a wired-in 10 minute timeout that is incredibly disruptive to many users and in many support situations, and that drives users to workarounds that are detrimental to security!

Over on those bug threads, there’s considerable speculation about why Google refuses to fix this situation. The most likely reason in my opinion is that the Googlers in charge of CRD (which may not be a particularly desirable assignment) either don’t understand or just don’t care about the kinds of users for whom this situation is so disruptive. Perhaps they’ve never supported non-techie users, or older users, or users with special needs.

And unfortunately, we can sense this view across a wide sweep of Google products. A particular, young demographic appears to be the user group of overwhelming interest to Google, with everyone else increasingly left twisting slowly in the wind.

I view this as neglect, not actual malice — though the end result is much the same in either case.

Strictly from a dollars and cents standpoint, concentrating on your most desirable users can at least be viewed as rather coldly logical, but users not in that demographic are just as dependent on Google as anyone else, and at Google scale represent vast numbers of actual human beings.

As I’ve said before, I fear that if Google does not seriously move now to solve their expanding accessibility problems (alternative user interfaces are one possible positive way forward) — in terms of readability, user interface designs, and other areas such as we see here with Chrome Remote Desktop — governments and courts are going to start moving in and dictating these aspects of Google operations.

Personally, I believe that this outcome would likely be a disaster both for Google users and Google itself. Government micromanagement of Google via the U.S. Americans with Disabilities Act — or heavy-handed directives from E.U. bureaucrats — are the last things that we need.

Yet this seems to be the direction in which we’re heading if Google doesn’t voluntarily get off their proverbial butt and start seriously paying attention to these accessibility problems and the affected Google users.

I don’t doubt for an instant that Google can accomplish this if they choose to do so. I know of no firm on the planet with employees who are more skilled and ethical than Googlers.

But the clock is ticking.

–Lauren–

Comparing the Readability of Two Google Blogs

Let’s compare the readability of two Google blogs. On the right, a recent item from Google’s main blog, which has converted to Google’s new low readability design. On the left, a recent entry from the Google Security Blog, which is currently still using the traditional high-readability design.

The differences are obvious, and the low contrast on the right is especially bad for persons with aging vision (this degrading of vision typically begins around age 18, by the way). Both samples are at the same (default) zoom level.

Google is failing at accessibility in major ways, and this is just one example. For more discussion, please see: “Does Google Hate Old People?” – https://lauren.vortex.com/2017/02/06/does-google-hate-old-people and “How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)” – https://lauren.vortex.com/2017/06/26/how-google-risks-court-actions-under-the-ada-americans-with-disabilities-act.

–Lauren–

Bizarre Happenings with Hate Speech Video Reddit User Reshared by Trump

A bizarre sequence of events has unfolded involving the Reddit user who apparently created the Trump-WWE-CNN video that encouraged violence against reporters. Trump shared this sickening video — then Trump and the White House defended it. But the Reddit user was then found to have also posted a wide variety of racist, antisemitic, and other hate speech materials.

Then it gets even stranger.

The user apparently posted a lengthy apology for his trolling and other despicable actions, which was noted in various media today, e.g.:

http://www.businessinsider.com/donald-trump-reddit-hanassholesolo-racist-anti-semitic-cnn-meme-tweet-2017-7

But this apology was quickly attacked by other Reddit users who didn’t want him to apologize. The apology post was then apparently deleted (by the user?) and the Reddit thread was locked (by Reddit moderators, the user asserted). That user then claimed that he was going to repost the original apology:

https://www.reddit.com/r/The_Donald/comments/6l9mx8/posting_my_apology_again_mods_locked_it_and_dont/

But now, the user has apparently deleted their Reddit account:

https://www.reddit.com/user/HanAssholeSolo

On the surface it appeared to be a pretty decent apology, far more than we’d ever expect from vile Donald Trump himself.

What’s really going on with this saga now? Who knows?

Stay tuned.

–Lauren–

Collecting Data on Users Suspended or Banned/Terminated by Twitter

Have you ever had your Twitter account suspended or banned/terminated, either temporarily or permanently? If so I’d appreciate hearing from you, to better understand how equitably Twitter enforces its own Terms of Service.

Please submit your relevant Twitter experiences via the form at:

https://vortex.com/twitter-issues

Information submitted there will only be made public for related reports in aggregate form with other submissions and/or anonymously, unless you indicate that you are willing to be identified publicly.

Thanks very much for your assistance with this effort!

–Lauren–

Privacy Pinheads: The Staggering Stupidity of Trump’s Voter Commission

UPDATE (1 July 2017): Trump Voting Commission vice chairman Kobach — who himself was fined $1000 by a judge about a week ago for misleading a court on a voting-related matter — is now reportedly claiming that data sent to the commission (the email address option provided for that purpose apparently doesn’t even currently use basic STARTTLS email encryption!) will be stored on a “secure” server and won’t be made public. This assertion directly contradicts the letter sent to states, which specifically says that the data will be made public! As for a “secure” federal server … give me a break! That data will be in the hands of Russia and China, and up for sale on the Darknet for identity fraud, faster than you can say “Trump University.”

– – –

Across the political spectrum, states are refusing to cooperate with the voter information request from Trump’s White House Voter Commission. As of yesterday, at least 25 states — including one that’s the home state of a commission member — are refusing the request in whole or part. 

Trump is upset. “What are they trying to hide?” he’s ranting. And for once in his damned life he’s right — but not for the reasons his micro-brain postulates. It’s actually not at all about Trump’s voter fraud fantasies; it’s all about basic privacy.

These states are indeed trying to hide something — they’re trying to hide the private information of their citizens from the massive privacy abuses that would occur if that data were turned over to the commission!

I’ve been running my PRIVACY Forum mailing list — https://lists.vortex.com/mailman/listinfo/privacy — here on the Internet continuously for a quarter century. In that time, I’ve seen a wide range of privacy issues and problems — from the relatively trivial to the mind-blowingly disastrous. 

But (to paraphrase the great composer and playwright Meredith Willson), I’ve never seen anything in terms of sheer bang beat, bell ringing, big haul, great go, neck or nothin’, rip roarin’ stupidity in the privacy realm that rises to the level of the Trump commission data request. 

Let’s see what they asked for from all 50 states (to be delivered within 16 days, by the way):

  • Full first and last names of all registrants, middle names or initials
  • Addresses
  • Dates of birth
  • Political party
  • Last four digits of social security number
  • Voter history (elections voted in) from 2006 onward
  • Active/inactive status or cancelled status
  • Information regarding any felony convictions
  • Information regarding voter registration in another state
  • Information regarding military status
  • Overseas citizen information.

And they note:

Please be aware that any documents that are submitted to the full Commission will also be made available to the public.

Bozo’s nose is flashing red! The privacy abuse meter just pinned over against the right-hand peg in the danger zone! The self-destruct announcement lady has started her countdown!

The commission’s request is insanity. And their offhand mentioning that the data will be made public (perhaps to encourage “vigilante” actions using that data?) takes that insanity and accelerates it to warp speed.

It’s truly mind-boggling. Much of that data is exactly the sorts of information that are primary fodder for privacy abuses. How often are you asked for your date of birth or last four digits of your SSN to identify yourself? Yeah, one hell of a lot!

And contrary to what the supporters of this outrageous data request are now asserting, much of that data is not public in the first place, and has specific usage and distribution restrictions placed on it by state laws when it is made available. Making that data openly available in the manner that the commission describes would in many cases be a direct violation of law. Lock them up!

For example, here in California, Title 2, Division 7, Article 1 section 19005 of the California Administrative Code specifies that:

No person who obtains registration information from a source agency shall make any such information available under any terms, in any format, or for any purpose, to any person without receiving prior written authorization from the source agency. The source agency shall issue such authorization only after the person to receive such information has executed the written agreement set forth in Section 19008.

And the code further specifies the specific ways that data obtained under this section can and cannot be used, which obviously could not be enforced under the commission’s public data dump paradigm.

The ways in which this kind of data could be abused — both by the federal government and by anyone else who gained unrestricted access to this trove after the commission made it public — are immense. Not only are the individual information elements subject to abuse, but the ways in which this data could be combined with other personal data from other sources create a privacy nightmare deluxe.

If a private firm proposed to handle personal data this way, they’d be crucified.

There are of course many reasons to suspect — and various states have been saying this in no uncertain terms — that the real purpose of Trump’s commission is to devise new mechanisms for the GOP to deploy for voter suppression. I agree with this analysis.

But leaving that aside — purely from a privacy abuse standpoint, the commission’s data request is beyond stupid, beyond inane, beyond dangerous — yet exactly what we might have expected from a commission working for this particular Commander-in-Chump.

The states are right to push back hard against the commission’s utterly intolerable data request. And the mere fact that such an inept, idiotic, and privacy-busting request was made in the first place is yet more proof that Trump’s Voter Commission is just another inept Donald Trump fraud.

–Lauren–

How Governments Are Screwing Us by Censoring Google

Today the Canadian Supreme Court ordered Google to remove search results that the Court doesn’t feel should be present. The court demands that Google remove those results not just for Canadian users, but for the entire planet. That’s right, Canada has declared itself a global Google censor.

I’ve been predicting this move toward global censorship imposed by domestic governments for many years. I suspected all along that attempts by Google to mollify government censorship demands through the use of geoblocking would never satisfy countries that already have the sweet taste of censorship in their authoritarian mouths — whether they’re ostensibly democracies or not. Censorship is like an addictive drug to governments — once they get the nose of the censorship camel under the tent, the whole camel will almost always follow in short order.

The EU has been pushing in the global censorship direction for ages with their awful “Right To Be Forgotten.” Countries like France, China, and Russia have been even more explicit regarding their desires for worldwide censorship powers. And frankly, it’s likely that nearly every nation will begin making the same sorts of demands once the snowball is really rolling — even here in the USA if politicians and courts can devise practical end runs around the First Amendment.

The ramifications are utterly clear. It’s a horrific race to the lowest common denominator bottom of censorship, with ever escalating demands for global removal of materials that any given government finds objectionable or simply inconvenient to the current president, or prime minister, or king, or whomever.

Ultimately, the end result is likely to be vast numbers of Google Searches that return nothing but blank white pages no matter where in the world that you reside.

My dream solution to such global censorship demands would be cutting off those countries from associated Google services. With enough righteous indignation, perhaps we could get Facebook, Twitter, and other major platforms to join the club.

I tend to doubt that these firms would have too much to worry about from a financial standpoint in this regard. The perhaps billions of users suddenly cut off from Google Search and their daily fixes of social media are unlikely to tolerate the situation for very long.

Short of this approach, there are other possible ways to fight back against global censorship. Feel free to ask me about them.

I’ve actually gone into much more detail about all of this in those many past posts that I alluded to above, and I’m not going to try to dig out the numerous links for them here. Stuff my name into the Google Search bar along with terms like “censorship” or “right to be forgotten” and you’ll get a plethora of relevant results.

That is, until some government orders those search results to be removed globally from Google.

Be seeing you. I hope.

–Lauren–

Massive Fine Against Google: The EU’s Hypocrisies Exposed

The best phrase that immediately comes to mind regarding the European Union’s newly announced $2.7 billion fine against Google is “A giant load of bull.” Google is far from perfect, but the EU has a long history of specious claims against Google, and this is yet another glaring example.

EU politicians and bureaucrats — among the most protectionist and hypocritical on the planet — see Google as a giant piggy bank, an unlimited ATM. The EU wants the easy money, rather than admitting that so many of their own business models are stuck in the 20th (or in some cases the 19th!) century.

The EU is demanding “search equality” — but there’s nothing wrong with Google’s search result rankings, which exist to best serve Google users, not the EU government’s self-serving agenda.

And that’s the key: Where are all the ordinary Google users complaining about Google’s shopping search results rankings? You can’t find those users, because anyone who prefers using non-Google sites is absolutely able to do so at any time. Google services rank so highly in search because users prefer them. Yep, free choice!

The European Union in its typical way is treating the citizens of its member countries like children, who it feels are so ignorant that Big Mommy EU has to dictate how they use the Internet. Disgusting.

I find myself increasingly thinking that we may have more to fear from EU control of the Net than we would from even Russia or China. At least the leaders of those latter two countries are pretty upfront about their attitudes toward the Internet, however totalitarian they might be.

But the EU has its own authoritarian, “information control” mindset as well, in their case painted over with a thin and rotting veneer of faked liberalism.

When actions are taken against Google like what has happened today, the EU’s mask of respectability slips off and shatters onto the ground into a million shiny shards, revealing the EU’s true face — leering with envy and avarice for the entire world to see.

–Lauren–

How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)

Earlier today over on Google+ I posted another (relatively minor) example of Google’s horrible low contrast user interfaces (YouTube image at the bottom of this post — how do you find the “How do I find it?” link?) and I suggested that this continuing behavior by Google could be seen as a form of discrimination against persons with less than perfect vision. (Please see: “Does Google Hate Old People?”: https://lauren.vortex.com/2017/02/06/does-google-hate-old-people — for one of my earlier more detailed discussions. Also “Google and Older Users”: https://lauren.vortex.com/2017/03/14/google-and-older-users — where I discuss the need for a dedicated Google employee to focus on this area.)

Every damned time I write about this topic, my inbox starts to fill with new horror stories related to issues with Google user interfaces in these contexts — stories that in some cases do seem to cross the threshold into discrimination, at least in an ethical sense if not a legal one.

And I certainly get plenty of people who contact me and bring up the ADA (Americans with Disabilities Act) as it relates to Google.

Thankfully, I’m not a lawyer. But readers who are lawyers have not infrequently asked whether I might have any interest in participating in a class action lawsuit against Google over “discriminatory” user interface and related issues.

My response has always been negative. I much prefer to keep courts out of largely technical policy matters — the thought of them trying to micromanage user interfaces makes me rather nauseous.

Yet the probability of some group moving ahead with legal action in these regards seems to be increasing dramatically, as Google’s user interfaces overall — plus documents, blogs, and various other display aspects — keep getting worse for the disadvantaged categories of users. Nor is the fact that most Google users are not paying for Google services necessarily a useful defense — Google has become integral to the lives of much of this planet’s population.

My premise has been that Google doesn’t actually hate older users (or other users negatively affected by these issues). Not hate them per se, anyway.

However, I’m forced to agree that Google’s attitude can certainly be interpreted by many observers as a form of hate, even if characterized by a form of neglect rather than direct action.

It has long seemed the case that Google concentrates on users in Google’s perceived key user demographics, putting much less care into users who fall outside of that focus — even though the latter represent vast (and rapidly increasing) numbers of users.

Nor do I sense that this is a problem with “rank-and-file” Googlers — many of whom I know and who are great and caring people. Rather, it seems to me that the problematic attitudes in these respects are generally sourced at Google’s executive and in some cases program manager levels, who of course set the ground rules for all Google products and services.

Either way, Google’s growing vulnerabilities to legal actions related to these situations seem clear, as these problems continue spreading across the Google universe.

While it could certainly be argued that more easily readable and usable user interfaces and reference pages would benefit all users, Google need not necessarily abandon their new “standard” interfaces with their low contrast fonts in order to solve these problems. I’ve in the past suggested the possibility of a high-readability, easier use “accessibility” interface that would exist as a user selectable option alongside the standard one. And I’ve proposed consideration of interface “APIs” that would permit third parties to write specialized interfaces to help specific groups of Google users.

None of these concepts have apparently seen any traction though, and Google seems to be barreling ahead with changes that are only making matters worse for these user groups who are already being driven bats by various aspects of Google’s design choices.

I would enormously prefer that Google take the ethical stance and move forward toward solving these problems itself. Yes, this requires nontrivial resources — but Google does have the capabilities. What they seem to be lacking right now is the will to do the right thing in these regards.

If this continues to be the case, the odds are that the courts will indeed ultimately move in. And that’s an outcome that I’m unconvinced will be a good one for either Google or its users.

–Lauren–

Google’s Gmail Will No Longer Scan Messages to Personalize Ads (but This Was Always Harmless)

Google has announced that beginning later this year, they will no longer scan or otherwise use messages in their free Gmail system for ad personalization purposes (this is already the case for their paid Gmail (G Suite) product).

This is a good decision to help undercut the Google haters’ false propaganda, but let’s be clear — this Gmail message scanning was always utterly harmless.

The controversies about Gmail scanning were ginned up by greedy lawyers and Google adversaries, with Microsoft’s lying and widely discredited (and now discontinued) “Scroogled” anti-Google propaganda campaign playing a significant “fake news” disinformation role (well before the term “fake news” became popular).

In fact, Gmail scanning has been closely akin to scanning for viruses and spam in messages. No humans were ever actually “reading” Gmail messages for ad personalization purposes, and the scanning that has occurred has been solely to find keywords that would help show relevant ads to any given user. 

Advertisers have never had access to this data — their ads are shown by Google without personal information being made available to those advertisers at all. One of the continuing “big lies” that Google haters propagate is the claim that Google sells their users’ personal information to third parties. They don’t. But a lack of understanding by many Google users of how Google’s ad systems actually work (Google could indeed be better at explaining this clearly) helps to feed such dramatic and completely false notions. 

The bottom line is that Gmail scanning has never posed a privacy risk, but since entirely stopping Gmail scanning puts a final nail in the coffin of these fake abuse claims, it’s an excellent move by Google. Good work.

–Lauren–

By Killing Encryption, Our Leaders Are Delivering Us to the Terrorists

The phrase “Like a lamb to slaughter” originates from biblical times. And when it comes to the rising chorus of politicians demanding an end to public availability of strong, end-to-end encryption, it’s we law-abiding citizens who are the lambs about to have our throats cut — by our own leaders.

Every time there’s a terrorist attack, politicians around the world (including here in the USA) are back in front of the cameras demanding government access to our private encrypted communications.

Make no mistake about it, these leaders might as well be on the payroll of the terrorists and other criminal organizations, because such demands if implemented would sell us all down the river, and make our lives vastly more dangerous.

We are far, far more at risk from these politicians wrecking our communications security than we are from terrorists and other criminals themselves in the absence of such weakened technology.

Our lives are increasingly utterly dependent on the security of computer-based communications systems, and this is true even for persons who’ve never touched a computer keyboard or a smartphone.

Our finances and so many other aspects of our personal lives are intertwined with the security and sanctity of strong encryption, and for persons living under the thumb of repressive regimes, their very lives hang in the balance when communications security becomes compromised.

Let’s be utterly clear about this. When you’re told that it’s possible to give governments access to our secure communications without fatally weakening the underlying encryption systems, you are being told a lie, plain and simple.

The very act of building a “backdoor” into these systems fundamentally weakens them, putting honest citizens at enormous risk not only for government abuses and mistakes, but also for attacks by black-hat hackers, terrorists, and other criminals of all sorts who will find ways to exploit these government-mandated flaws.

Meanwhile, terrorists and other criminals won’t sit back and use these horrifically compromised communications systems. They’ll move to strong end-to-end encryption systems without backdoors, both existing and under development — independent apps that are impossible for governments to effectively control.

Government demands for backdoor access to encryption are a disaster for everyone but the evil forces that these politicians claim will be destroyed.

If one assumes for the sake of argument that our leaders aren’t actually in league with such heinous entities, one is also forced to assume that these politicians are either getting terrible technical advice or — most likely of all — simply ignoring the known facts in furtherance of pandering and sowing fear for political gains, regardless of the negative consequences on all of us.

Of course, even though governments might try to ban such use, in practice it would likely prove extremely difficult to stop honest, law-abiding citizens from using independent, non-backdoored strong crypto apps themselves — just like evil is sure to do.

Governments don’t like to contemplate honest persons taking such independent steps to control their own destinies. Politicians by and large prefer to think of us like those sheep.

Whether or not our leaders are accurate in such a characterization is ultimately our decision, not theirs.

–Lauren–

YouTube’s Excellent New Moves Against Hate Speech — But There’s More Work for Google to Do

In my March blog posts — “How YouTube’s User Interface Helps Perpetuate Hate Speech” (https://lauren.vortex.com/2017/03/26/how-youtubes-user-interface-helps-perpetuate-hate-speech), and  “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), I was quite critical of how Google is handling certain aspects of their own Terms of Service enforcement on YouTube.

In “Four steps we’re taking today to fight online terror” (https://blog.google/topics/google-europe/four-steps-were-taking-today-fight-online-terror/), Google’s General Counsel Kent Walker (a straight-arrow guy whom it’s been my pleasure to meet) announced YouTube changes aimed at dealing more effectively with extremist videos and hate speech more broadly.

Key aspects of these changes appear to be in line with my public suggestions — in particular, faster takedowns for extremist content, and disqualification of hate speech videos from monetization and “suggested video” systems, are excellent steps forward.

I would prefer that hate speech videos not only be demonetized and “hidden” from suggestions — but that they’d be removed from the YouTube platform entirely. I am not at this point fully convinced that sweeping that kind of rot “under the carpet” — where it can continue to fester — is a practical long-term solution. However, we shall see. I will be watching with interest to determine how these policies play out in practice.

As I’ve noted in earlier posts, I also feel strongly that Google needs to make it more “in your face” obvious to YouTube users that they can report offending videos. I had previously mentioned that the YouTube “Report” link — that years ago was on the top-level YouTube user interface — seemed to have returned to that position (at least for YouTube Red subscribers) after a long period being buried under the top level “More” link. At the time, I speculated that this might only be an ephemeral user-facing experiment, and in fact for me at least the “Report” link is again hiding under the “More” link.

I’ve discussed this problem before. Users who might otherwise report an offending video are much less likely to do so if a “Report” link isn’t obvious. I acknowledge that one possible reason for “hiding” the “Report” link is concerns about false positives. Indeed, in Kent’s post today, he mentions the high accuracy of YouTube “Trusted Flaggers” — which suggests that my speculation in this regard (about reports from users overall) was likely correct. In any case, I still feel that a top-level user interface “Report” link is a very important element for consideration.

While I do feel that there’s more that Google needs to do in various of these regards concerning extremist content and hate speech, I am indeed cognizant of their understandable desire to move in carefully calibrated steps given the impact of any changes at Google scale. And yeah, I’m pleased to see Google moving these issues in the overall direction that I’ve been publicly urging.

My kudos to the associated Google/YouTube teams — and we’ll all be watching to see how these changes play out in the fullness of time.

Be seeing you.

–Lauren–

Why I May Remove All Google+ Buttons from My Blog Posts

Google says they will no longer show the +1 count on external G+ buttons — like I have on all of my blog postings. Without the +1 count, these buttons are largely useless, and I will probably remove all G+ buttons from my posts to recover that space, and urge other sites to do the same. I’m sorry, Google, this one is extremely boneheaded.

I’ll bet I know why they’re doing it — Google is probably embarrassed by the relatively low counts vis-a-vis Facebook. But I support G+ and not Facebook because I consider G+ to be a superior platform, and this decision by Google is just inane.

–Lauren–

Brief Thoughts on a Google Ombudsman and User Trust

This post in PDF format:
https://vortex.com/google-ombudsman-2017-06-12.pdf

– – –

Despite significant strides toward improved public communications over the years, Google is still widely viewed — both by users and by the global community at large — as an unusually opaque organization.

Google does provide a relatively high level of communications — including customer support — for users of their paid services. And of course, there’s nothing inherently unreasonable about Google providing different support levels to paying customers as compared to users of their many free services.

But without a doubt, far and away, the Google-related issues that users bring to me most frequently still relate to those users’ perceived inability to effectively communicate with Google when they have problems with Google services (usually free but frequently paid), and these are services that vast numbers of persons around the world now depend upon for an array of crucial aspects of their businesses and personal lives. These problems can range from minor to quite serious, sometimes with significant ongoing impacts.

Similarly and related, user and community confusion over both the broad and detailed aspects of various Google policies remains widespread, in some cases not significantly improved over many years.

The false assumption that Google sells user data to third parties remains rampant, fueled both by basic misunderstandings of Google’s ad technologies, and by Google competitors and haters — who leverage Google’s seemingly institutional public communications reluctance — filling the resulting vacuum with misinformation and false propaganda. Another of many examples is the continuing unwillingness of many users to provide account recovery and/or two-factor verification phone numbers to Google, based on the unfounded fear of those numbers being sold or used for other purposes. Confusion and concerns related to YouTube policies are extremely widespread. And the list goes on …

While Google’s explanatory documents have significantly improved over time, they often are still written at technical levels beyond the understanding of major subsets of users.

Significant and growing segments of the Google user population — including older and other special needs users who increasingly depend on Google services — feel left behind by key aspects of Google’s user interfaces, with visual designs (e.g. perceived low contrast layouts), hidden interface elements, and other primary usability aspects generating growing concern and angst.

These and other associated factors serve to undermine user trust in Google generally, with significant negative regulatory and political ramifications for Google itself, not only in the USA but around the world. This is all exacerbated by Google’s apparently deeply ingrained fear of “Streisand Effect” reactions to public statements.

It has frequently been noted that many organizations employ an “ombudsman” (or multiple persons fulfilling similar roles under this or other titles) to act as a form of broad, cross-team interface between individual corporate and/or team concerns and the concerns of the user community, typically in the contexts of products, services, and policy issues.

Google has apparently been resistant to this concept, with scalability concerns likely one key factor.

However, this perceived reaction may suggest a lack of familiarity with the wide range of ways in which ombudsman roles (or similar roles otherwise titled) may be tailored for different organizations, toward the goal of more effective communications overall.

An ombudsman is not necessarily a form of “customer support” per se. An employee filling an ombudsman role need not have any authority over decisions made by any teams, and may not even interact with users directly in many circumstances.

The ombudsman may be tasked primarily with internal, not external communications, in that they work to help internal teams understand the needs of users both in terms of those individual teams and broader cross-team scopes. In this context, their contribution to improved, effective public communications and perceptions of a firm are more indirect, but can still be of crucial importance, by helping to provide insights regarding user interactions, broader policies, and other issues that are not limited to individual teams’ everyday operational mandates.

An ombudsman can help teams to better understand how their decisions and actions are affecting users and the overall firm. The ombudsman may be dealing with classes and categories of user issues, rather than with individual users, yet the ombudsman is still acting as a form of liaison between users, individual teams, and the firm as a whole.

There are of course myriad other ways to structure such roles, including paradigms that combine internal and public-facing responsibilities. But the foundational principle is the presence of a person or persons whose primary responsibilities are to broadly understand the goals and dynamics of teams across Google, the scope of user community issues and concerns, and to assist those teams and Google management to better understand the associated interdependent dynamics in terms of the associated problems and practical solutions — and then help to deploy those solutions as appropriate.

Google’s users, the community at large, and Google itself would likely significantly benefit.

–Lauren–

Google Users Who Want to Use 2-Factor Protections — But Don’t Understand How

In my “Questions I’m Asked About Google” #1 live video stream (https://vortex.com/google-1) a few days ago, I emphasized the importance of protecting Google Accounts with Google’s excellent 2-factor authentication system.

In response I’ve received a bunch of queries from Google users who do not understand how to set this up or use it, even though they very much want to.

These concerns fall into a number of categories. Even though I explained that it’s safe to give your phone number to Google — Google won’t abuse it — many users are still resistant, and note that they do not see a way to activate Google 2-factor protection for other authentication mechanisms (e.g. Google Authenticator App and/or Backup Codes) without first providing their phone number information.

Others want to use their existing (non-Google) mail programs after activating Google 2-factor, but are utterly confused by Google’s “application-specific passwords” system that is required to do so.

When you’re trying to get users to take advantage of the best possible security, and have successfully convinced them that this is a good idea, but your documentation is still written in a way that many non-techie users dependent on your services cannot readily understand — you have a serious problem.

Despite positive strides at Google in terms of help center and other documentation resources, Google is still leaving vast numbers of their users behind.

Google can do better.

–Lauren–