Stupid Story Claiming Google Tracking — Plus the USA Healthcare Nightmare

I’ve been receiving many queries as to why this blog and my lists have been so quiet lately. I would have preferred to say nothing about this, but I don’t want anyone concerned that they’ve been dropped from a list or are otherwise being subjected to technical issues. The lists are OK, the servers are running for now, and there’s nothing amiss at your end.

The executive summary of what’s going on is that I’m not well — I’ll spare you the details — and it’s unclear what I can do about it, given the dismal, insane state of health insurance in this country, especially for persons like me who have to deal with the collapsed individual medical insurance market (that is, who don’t have an employer, and so don’t have employer provided medical insurance).

The GOP and Satan’s Sociopath in the Oval Office are working deliberately to destroy health insurance and ruin lives for the sake of enriching their uber-wealthy, vile brethren. But even without those deliberate efforts at sabotage, the healthcare system itself has already utterly collapsed for vast numbers of people without steady incomes and who are too young or don’t qualify for Medicare — which the GOP is also working to decimate. The holes in Obamacare/ACA are big enough to toss the moon through, creating horrific “Catch-22” nightmares for persons with very low income levels and who cannot reasonably see into the future to predict their next year’s income.

The upshot of all this is that I simply cannot physically keep up under these conditions, and these public venues will be very quiet until such time, if ever, as the overall situation changes. Sorry about that, Chief.

Since I was sending this item out anyway, I wanted to mention one rather crazy tech story going around currently. Obviously there have been any number of technology issues recently about which I’d ordinarily have said something — most of them depressing as usual.

But there’s one in the news now about Google that is just so stupid that it can make your head explode, a “Google is secretly tracking your phone” scare piece. 

And as usual, Google isn’t addressing it in ways that ordinary people can understand, so it’s continuing to spread, the haters are latching on, and folks have started calling me asking about it in panic. 

Sometimes I think that Google must have a sort of suicide complex, given the way that they watch again and again how these sorts of stories get out of control without Google providing explanations beyond quotes to the trade press. Newsflash! Most ordinary non-techies don’t read the trade press!

Yeah, I know, Google just hopes that by saying as little as possible the stories will fade away. But it’s like picking up comic strips with Silly Putty (anyone else remember doing that?) — you can keep folding the images inward, but eventually the entire lump of putty is a dark mass of ink.

You’d think that with so many opportunistic regulatory and political knives out to attack Google these days, Google would want to speak to these issues clearly in language that ordinary folks could understand, so that these persons aren’t continuously co-opted by the lies of Google haters. I’ve done what I could to explain these issues in writing and on radio, but as I’ve said before this should be Google’s job — providing authoritative and plain language explanations for these issues. It’s not something that Google should be relying on outsiders to do for them willy-nilly.

The latest story is a lot of ranting and gnashing of teeth over the fact that Android phones have apparently been sending mobile phone network cell IDs to Google. Not that Google did anything with them — they’ve been tossing them, and are making changes so that they don’t get sent at all. The complaint seems to be that these were sent even if users opted out of Google’s location services.

But the whole point is that the cell IDs had nothing to do with Google’s location services; they relate to the basic network infrastructure required to get notifications to phones in the first place. It’s basically the same situation as standard mobile text messages: the network needs to know where a phone is connected at any given moment to deliver a text message, other notifications, or even an ordinary phone call!

In a response apparently aimed at the trade press, Google talked about MCC and MNC codes and related tech lingo that all mean pretty much NOTHING to most people who are hearing this “tracking” story. 
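
For the technically curious, here’s roughly what that lingo refers to. This is a purely illustrative Python sketch, not Google’s actual data format, and the field values are examples rather than anything any particular phone transmitted:

    # Rough sketch of what a GSM-style "cell identity" contains.
    # Field names and values are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class CellIdentity:
        mcc: int  # Mobile Country Code (e.g., 310 covers the USA)
        mnc: int  # Mobile Network Code (identifies the carrier)
        lac: int  # Location Area Code (a group of nearby towers)
        cid: int  # Cell ID (one particular tower/sector)

    tower = CellIdentity(mcc=310, mnc=260, lac=1234, cid=56789)

    # Note what is NOT here: no GPS coordinates. A cell ID only becomes
    # a location if someone looks it up in a database of tower positions.
    print(tower)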

Let me put this into plain English.

If your cell phone is turned on, the cellular networks know where you are — usually to a good degree of accuracy these days even without GPS. That’s how they work. That’s how you receive calls and text messages. It’s a core functionality that has nothing to do with Google per se.

You know all those news stories you see about crooks who get caught through tracking of their cell phones via location info that authorities get from the cellular carriers? 

Have you ever thought to yourself, “Why don’t those morons just turn off their phones when they don’t want to be tracked?”

It’s not Google that you need to be worried about. They have powerful protections for user data, and are extremely, exceptionally strict about when authorities can obtain any of it. On the other hand, the cellular carriers have traditionally been glad to hand over pretty much any user data that authorities might request, for virtually any reason, often on a “nod and a wink” basis. You want something to worry about? Don’t worry about Google, worry about those cellular carriers.

Nor do you need to be a crook to turn off your phone when you don’t even want the carriers to know where you are. You want to use local apps? Fine, instead of turning the phone off, disable the phone’s radios by activating the “Airplane Mode” that all smartphones have available. 

This is all of the writing that I can manage right now and will probably be all that I have to say here for an indeterminate period. I can’t guarantee timely or even any responses to queries, but I’ll try to keep this machinery running the best that I can under the circumstances.

The best to you and yours for the holiday weekend and for the entire holiday season.

Please take care.

–Lauren–

How the Internet Broke the Planet

I am not an optimistic person by nature. I’ve tended — pretty much through my entire life — to always be wary of how things could go wrong. In some ways, I’ve found this to be a useful skill — when writing code it’s important to cover the range of possible outcomes and error states, and properly provide for their handling in a program or app.

Then again, I’ve never been much fun at parties. When I went to parties. Which has been very infrequently.

Mostly, I’ve spent my adult life in front of computer screens of all sorts (and before that, various forms of teletypes, other teleprinters, and even the occasional 029 keypunch machine).

I started writing publicly in the early 70s at the Internet’s ancestor ARPANET site #1 at UCLA, often on the very early mailing lists like Human-Nets, MsgGroup, or SF-Lovers (yes, and Network-Hackers, too). I even monitored the notorious Wine-Tasters list — though not being much of a drinker I uncharacteristically didn’t have much to say there.

Back then there were no domains, so originally I was LAUREN@UCLA-ATS (the first host on ARPANET) and later LAUREN@UCLA-SECURITY as well.

Much of my writing from those days is still online or has been brought back online. Looking it over now, I find that while there are minor points I might change today, overall I’m still willing to stand by everything I’ve written, even from that distant past.

My pessimism was already coming through in some of those early texts. While many in the ARPANET community were convinced that The Network would bring about the demise of nationalities and the grand rising up of a borderless global world of peace and tranquility, I worried that once governments and politicians really started paying attention to what we were doing, they’d find ways to warp it to their own personal and political advantages, perhaps using our technology for new forms of mass censorship.

And I feared that if the kind of networking tech we had created ever found its way into the broader world, evil would ultimately be more effective at leveraging its power than good would be.

Years and decades went by, as I stared at a seemingly endless array of screens and no doubt typed millions of words.

So we come to today, and I’m still sitting here in L.A. — the city where I’ve always lived — and I see how the Internet has been fundamentally broken by evil forces only some of which I foresaw years ago.

Our wonderful technology has been hijacked by liars, Nazis, pedophiles and other sexually abusive politicians, and an array of other despicable persons who could only gladden the hearts of civilization’s worst tyrants.

Our work has been turned into tools for mass spying, mass censorship, political oppression, and the spreading of hateful lies and propaganda without end.

I have never claimed to be evenhanded or dispassionate when it came to my contributions to — and observations of — the Internet and its impact on the world at large.

Indeed the Net is a wonder of civilization, on par with the great inventions like the wheel, like the printing press, like penicillin. But much as nuclear fission can be used to kill cancer or decimate cities, the Internet has proven to be a quintessential tool that can be used for both good and evil, for glories of education and communications and the availability of information, but also for the depths of theft and extortion and hate.

The dark side seems to be winning out, so I won’t pull any punches here. 

I have enormous respect for Google. I have pretty much nothing but disdain for Facebook. My feelings about Twitter are somewhere in between. It’s difficult these days to feel much emotion at all about Microsoft one way or another.

None of these firms — or the other large Internet companies — are all good or all bad. But it doesn’t take rocket science (or computer science for that matter) to perceive how Google is about making honest information available, Facebook is about controlling information and exploiting users, and Twitter doesn’t seem to really care anymore one way or another, so long as they can keep their wheels turning.

This is obviously something of an oversimplification. Perhaps you disagree with me — sometimes, now, or always — and of course that’s OK too.

But I do want you to know that I’ve always strived to offer my honest views, and to never arbitrarily nor irrationally take sides on an issue. If the result has been that at one time or another pretty much everyone has disagreed with something I’ve said — so be it. I make no apologies for the opinions that I’ve expressed, and I’ve expected no apologies in return.

In the scheme of things, the Internet is still a child, with a lifetime to date even shorter than that of us frail individual human animals.

The future will with time reveal whether our work in this sphere is seen as a blessing or curse — or most likely as some complex brew of both — by generations yet to come. Some of you will see that future for yourselves, many of us will not.

Such is the way of the world — not only when it comes to technology, but in terms of virtually all human endeavors.

Take care, all.

–Lauren–

Google Maps’ New Buddhist “Swastika”

I’m already getting comments — including from Buddhists — suggesting that Google Maps’ new iconography tagging Buddhist temples with the ancient symbol that is perceived by most people today as a Nazi swastika is problematic at best, and is likely to be widely misinterpreted. I agree. I’m wondering if Google consulted with the Buddhist community before making this choice. If not, now is definitely the time to do so.

–Lauren–

UPDATE (November 16, 2017): Google tells me that they are restricting use of this symbol to areas like Japan “where it is understood” and are using a different symbol for localization in most other areas. I follow this reasoning, but it’s unclear that it avoids the problems with such a widely misunderstood symbol. For example, I’ve received concerns about this from Buddhists in Japan, who fear that the symbol will be “latched onto” by haters in other areas. And indeed, I’ve already been informed of “Nazi Japan” posts from the alt-right that cite this symbol. The underlying question is whether or not such a “hot button” symbol can really be restricted by localization into not being misunderstood in other areas and causing associated problems. That’s a call for Google to make, of course.

Google’s Extremely Shortsighted and Bizarre New Restrictions on Accessibility Services

UPDATE (December 8, 2017): Google Wisely Pauses Move to Impose Accessibility Restrictions

UPDATE (November 17, 2017): Thanks Google for working with LastPass on this issue! – Google details Autofill plans in Oreo as LastPass gets reprieve from accessibility removals

 – – –

My inbox has been filling today with questions regarding Google’s new warning to Android application developers that they will no longer be able to access Android accessibility service functions in their apps, unless they can demonstrate that those functions are specifically being used to help users with “disabilities” (a term not defined by Google in the warning).

Beyond the overall vagueness when it comes to what is meant by disabilities, this entire approach by Google seems utterly wrongheaded and misguided.

My assumption is that Google wants to try to limit the use of accessibility functions on the theory that some of them might represent security risks of one sort or another in specific situations.

If that’s actually the case — and we can have that discussion separately — then of course Google should disable those functions entirely — for all apps. After all, “preferentially” exposing disabled persons to security risks doesn’t make any sense.

But more to the point, these accessibility functions are frequently employed by widely used and completely legitimate apps that use these functionalities to provide key features that are not otherwise available under various versions of Android still in widespread deployment.

Google’s approach to this situation just doesn’t make sense. 

Let’s be logical about this.

If accessibility functions are too dangerous from security or other standpoints to potentially be used in all legitimate apps — including going beyond helping disabled persons per se — then they should not be permitted in any apps.

Conversely, if accessibility functions are safe enough to use for helping disabled persons using apps, then they should be safe enough to be used in any legitimate apps for any honest purposes.

The determining factor shouldn’t be whether or not an app is using an accessibility service function within the specific definition of helping a particular class of users, but rather whether or not the app is behaving in an honest and trustworthy manner when it uses those functions.

If a well-behaved app needs to use an accessibility service to provide an important function that doesn’t directly help disabled users, so what? There’s nothing magical about the term accessibility.

Apps functioning honestly that provide useful features should be encouraged. Bad apps should be blown out of the Google Play Store. It’s that simple, and Google is unnecessarily muddying up this distinction with their new restrictions.

I encourage Google to rethink their stance on this issue.

–Lauren–

T-Mobile’s Scammy New Online Payment System


Traditionally, one of the aspects of T-Mobile that subscribers have really liked is how quickly and easily they could pay their bills online. A few seconds was usually all that was needed, and it could always be done in a security-positive manner.

No more. T-Mobile has now taken their online payment system over to the dark side, using several well-known methods to try to trick subscribers into taking actions that most of them probably don’t really want to take.

First, their fancy new JavaScript payment window completely breaks the Chrome browser autofill functions for providing credit card data securely. All credit card data must now be entered manually on that T-Mobile payment page.

One assumes that T-Mobile site designers are smart enough to test such major changes against the major browsers, so perhaps they’re doing this deliberately. But why?

There are clues.

For example, they’ve pre-checked the box for “saving this payment method.” That’s always a terrible policy — many users explicitly avoid saving payment data on individual sites subject to individual security lapses, and prefer to save that data securely in their browsers to be entered onto sites via autofill.

But if a firm’s goal is to encourage people to accept a default of saving a payment method on the site, breaking autofill is one way to do it, since filling out all of the credit card data every time is indeed a hassle.

There’s more. After you make your payment, T-Mobile now pushes you very hard to make it a recurring autopay payment from that payment method. The “accept” box is big and bright. The option to decline is small and lonely. Yeah, they really want you to turn on autopay, even if it means tricking you into doing it.

Wait! There’s still more! If you don’t have autopay turned on, T-Mobile shows an alert, warning you that a line has been “suspended” from autopay and urging you to click and turn it back on. They say this even if you never had autopay turned on for that line in the first place.

No, T-Mobile hasn’t broken any laws with any of this. But it’s all scammy at best and really the sort of behavior we’d expect from AT&T or Verizon, not from T-Mobile.

And that’s the truth.

–Lauren–

Facebook’s Staggeringly Stupid and Dangerous Plan to Fight Revenge Porn

I’m old enough to have seen a lot of seriously stupid ideas involving the Internet. But no matter how incredibly asinine, shortsighted, and nonsensical any given concept may be, there’s always room for somebody to come up with something new that drives the needle even further into the red zone of utterly moronic senselessness. And the happy gang over at Facebook has now pushed that poor needle so hard that it’s bent and quivering in total despair. 

Facebook’s new plan to fight the serious scourge of revenge porn is arguably the single most stupid — and dangerous — idea relating to the Internet that has ever spewed forth from a major commercial firm. 

It’s so insanely bad that at first I couldn’t believe that it was real — I assumed it was a satire or parody of some sort. Unfortunately, it’s all too real, and the sort of stuff that triggers an urge to bash your head into the wall in utter disbelief.

The major Internet firms typically now have mechanisms in place for individuals to report revenge porn photos for takedown from postings and search results. Google for example has a carefully thought out and completely appropriate procedure that targeted parties can follow in this regard to get such photos removed from search results. 

So what’s Facebook’s new plan? They want you to send Facebook your own naked photos even before they’ve been abused by anyone — even though they might never be abused by anyone!

No, I’m not kidding. Facebook’s twisted idea is to collect your personal naked and otherwise compromising sexually-related photos ahead of time, so just in case they’re used for revenge porn later, they can be prevented from showing up on Facebook. Whether or not it’s a great idea to have photos like that around in the first place is a different topic, but note that by definition we’re talking about photos already in your possession, not secret photos surreptitiously shot by your ex — which are much more likely to be the fodder for revenge porn.

Now, you don’t need to be a security or privacy expert, or a computer scientist, to see the gaping flaws in this creepy concept. 

No matter what the purported “promises” of privacy and security for the transmission of these photos and how they’d be handled at Facebook, the photos would create an enormous risk to the persons sending them if anything happened to go wrong. I won’t even list the voluminous possibilities for disaster in Facebook’s approach — most of them should be painfully obvious to pretty much everyone.

Facebook appears to be trying to expand into this realm from a methodology already used against child abuse photos, where such abuse photos already in circulation are “hashed” into digital “signatures” that can be matched if new attempts are made to post them. The major search and social media firms already use this mechanism quite successfully. 

But again, that involves child images that are typically already in public circulation and have already done significant damage.

In contrast, Facebook’s new plan involves soliciting nude photos that typically have never been in public circulation at all — well, at least before being sent in to Facebook for this plan, that is. 

Yes, Facebook will put photos at risk of abuse that otherwise likely would never have been abused!

Facebook wants your naked photos on the theory that holy smokes, maybe someday those photos might be abused and isn’t it grand that Facebook will take care of them for us in advance!

Is anybody with half a brain buying their spiel so far? 

Would there be technically practical ways to send photo-related data to Facebook that would avoid the obvious pitfalls of their plan? Yep, but Facebook has already shot them down.

For example, users could hash the photos using software on their own computers, then submit only those hashes to Facebook for potential signature matching — Facebook would never have the actual photos.

Or, users could submit “censored” versions of those photos to Facebook. In fact, when individuals request that Google remove revenge porn photos, Google explicitly urges them to use photo editing tools to black out the sensitive areas of the photos, before sending them to Google as part of the removal request — an utterly rational approach.
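
As a minimal sketch of that first idea (stock Python, with the photo path supplied by the user), the point is simply that only a short digest would ever leave the machine. Real matching systems use “perceptual” hashes designed to survive re-encoding and resizing; a plain cryptographic hash is shown purely for illustration:

    # Sketch: hash a photo locally and print the digest that would be
    # submitted -- the photo itself never leaves the user's computer.
    import hashlib
    import sys

    def local_photo_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    if __name__ == "__main__":
        print(local_photo_hash(sys.argv[1]))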

Facebook will have none of this. Facebook says that you must send them the uncensored photos with all the goodies intact. They claim that local hashing won’t work, because they need to have humans verify the original uncensored photos before they’re “blurred” for long-term storage. And they fear that allowing individuals to hash photos locally would subject the hashing algorithms to reverse engineering and exploitation.

Yeah, Facebook has an explanation for everything, but taken as a whole it makes no difference — the entire plan is garbage from the word go.

I don’t care how trusted and angelic the human reviewers of those uncensored submitted nude photos are supposed to be or what “protections” Facebook claims would be in place for those photos. Tiny cameras capable of copying photos from internal Facebook display screens could be anywhere. If human beings at Facebook ever have access to those original photos, you can bet your life that some of those photos are eventually going to leak from Facebook one way or another. You’ll always lose your money betting against human nature in this regard.

Facebook should immediately deep-six, bury, terminate, and otherwise cancel this ridiculous plan before someone gets hurt. And next time, Facebook bros, how about doing some serious thinking about the collateral risks of your grand schemes before announcing them and ending up looking like such out-of-touch fools.

–Lauren–

3D Printed Wall Mount for the Full-Sized Google Home

Since the 3D printed wall mount for my Google Home Mini worked out quite nicely (details here), I went ahead yesterday and printed a different type of wall mount for my original Google Home (which is more suited for music listening given its larger and more elaborate speaker system — it even has decent bass response.)

Performance of the Google Home when mounted on the wall seems exemplary, both in terms of audio reproduction and the performance of its integral microphones. 

The surface of the mount meshes with the contours on the bottom of the Google Home unit, providing additional stability.

At the end of this post, I’ve included photos of the printed mount itself, the mount on the wall with Google Home installed, and a very brief video excerpt of the printing process. 

The model for this mount is from “westlow” at: https://www.thingiverse.com/thing:2426589 (I used the “V2” version).

As always, if you have any questions, please let me know. 

Be seeing you.

–Lauren–


Some Background on 3D Printing Gadgets for the Google Home Mini

UPDATE (October 30, 2017): 3D Printed Wall Mount for the Full-Sized Google Home

– – –

Over on Google+ I recently posted several short items regarding a tiny plastic mount that I 3D printed a couple of days ago to hang my new Google Home Mini on my wall (see 2nd and 3rd photos below, for the actual model file please see: https://www.thingiverse.com/thing:2576121 by “Jakewk13”).

This virtually invisible wall mount is perfectly designed for the Mini and couldn’t be simpler. Technically, the Mini is upside down when you use this mount, but of course it works just fine. Thanks Google for sending me a Mini for my ongoing experiments!

I’ve since received quite a few queries about my printing facilities, such as they are.

So the 1st photo below shows my 3D printer setup. Yes, it looks like industrial gear from one of the “SAW” torture movies, but I like it that way. This is an extremely inexpensive arrangement, where I make up for the lack of expensive features with a fair degree of careful ongoing calibration and operational skill, but it serves me pretty well. I can’t emphasize enough how critical accurate calibration is with 3D printing, and there’s a significant learning curve involved.

The basic unit started as a very cheap Chinese clone printer kit that I built and mounted on that heavy board for stability. Then, hardware guy that I’ve always been, I started modifying. As is traditional, many of the additions and modifications were themselves printed on that printer. This includes the filament reel support brackets, calibration rods, filament guide, inductive sensor mount, and more. I installed an industrial inductive sensor at the forward left of the black extruder unit, to provide more precise Z-axis homing and to enable automatically adjusted print extrusion leveling.

I replaced the original cruddy firmware with a relatively recent Repetier dev build, which also enabled the various inductive sensor functions. I had to compile out the SD card support to make room for this build in my printer controller — but I never used the SD card on the printer (intended for standalone printing) anyway.

On the build platform, I use ordinary masking tape, which gets a thin coat of glue stick immediately after I put the tape down. The tape and glue can last for quite a few prints before needing replacement.

I mainly print PLA filament. I never touch ABS — it warps, its fumes smell awful and are highly toxic.

I almost always print at an extruder temperature of 205C and a bed temperature of 55C.
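
For anyone curious what those settings boil down to at the G-code level, here’s a sketch. It assumes the pyserial package, the port name and baud rate are placeholders for whatever your own controller uses, and normally your slicer emits these commands for you at the start of each job:

    # Sketch: send the standard temperature-setting G-code commands
    # over USB serial. Port and baud below are hypothetical examples.
    import serial  # pyserial

    PORT, BAUD = "/dev/ttyUSB0", 115200

    with serial.Serial(PORT, BAUD, timeout=5) as printer:
        printer.write(b"M104 S205\n")  # M104: set extruder temperature (205C)
        printer.write(b"M140 S55\n")   # M140: set heated bed temperature (55C)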

The printer is driven by Repetier Server, which runs on Ubuntu 14.04 (via Crouton) on an older CrOS Chromebook. I typically use Cura on Linux for model slicing.

I know, it’s all laughably inexpensive and not at all fancy by most people’s standards, but it does the job for me when I want to hang a Google gadget on the wall or need the odd matter-antimatter injector guide servo nozzle in a hurry.

Yep, it really is the 21st century.

–Lauren–


Understanding Google’s New Advanced Protection Program for Google Accounts


I’ve written many times about the importance of enabling 2-factor authentication on your Google accounts (and other accounts, where available) as a basic security measure, e.g. in “Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good” — https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi — and in other posts too numerous to list here.  

Given this history, I’ve now begun getting queries from readers regarding Google’s newly announced and very important “Advanced Protection Program” (APP) for Google accounts — most queries being variations on “Should I sign up for it?”

The APP description and “getting started” page is at:

https://landing.google.com/advancedprotection/

It’s a well designed page (except for the now usual atrocious low contrast Google text font) with lots of good information about this program. It really is a significant increase in security that ordinary users can choose to activate, and yes, it’s free (except for the cost of purchasing the required physical security keys, which are available from a variety of vendors).

But back to that question. Should you actually sign up for APP?

That depends.

For the vast majority of Google users, the answer is likely no, you probably don’t actually need it, given the additional operational restrictions that it imposes.

However, especially for high-profile users who are most likely to be subjected to specifically targeted account attacks, APP is pretty much exactly what you need, and will provide you with a level of account security typically unavailable to most (if any) users at other commercial sites.

Essentially, APP takes Google’s existing 2-factor paradigm and restricts it to only its highest security components. Under conventional 2-factor use on Google accounts, USB/Bluetooth security keys are the most secure option, but other 2-factor options such as SMS text messages (to name just one) remain available as well. That flexibility suits most users, and minimizes the chances of their accidentally locking themselves out of their Google accounts.

APP requires the use of these security keys — the other options are no longer available. If you lose the keys, or can’t use them for some reason, you’ll need to use a special Google account recovery procedure that could take up to several days to complete — a rigorous process to assure that it’s really you trying to regain access to the account.

There are other security-conscious restrictions to your account as well if you enable APP. For example, third-party apps’ access to your account will be significantly restricted, preventing a range of situations where users might otherwise accidentally grant overly broad permissions from outside apps to Google accounts.

It’s important to remember that there are situations where you likely won’t be able to use security keys at all. Public computers (and ironically, computers in high security environments) often have unusable USB ports and Bluetooth locked in a disabled mode. These can be important considerations for some users.

Cutting to the chase, Google’s standard 2-factor systems are usually going to be quite good enough for most users and offer maximum flexibility — of course only if you enable them — which, yeah, you really should have done by now!

But in special cases for particularly high-profile or otherwise vulnerable Google users, the Advanced Protection Program could be the proverbial godsend that’s exactly what you’ve been hoping for.

As always, feel free to contact me if you have any additional questions about this.

Be seeing you.

–Lauren–

Explaining the Chromebook Security Scare in Plain English: Don’t Panic!

Yesterday I pushed out to various of my venues a Google notice regarding a security vulnerability affecting a long list of Chrome OS-based devices (that is, “CrOS” on Chromebooks and Chromeboxes). That notice (which is titled more like a firmware upgrade advisory than a security warning per se) is at:

https://sites.google.com/a/chromium.org/dev/chromium-os/tpm_firmware_update

While that page is generally very well written, it is still quite technical in its language. Unfortunately, while I thought it was important yesterday to disseminate it as quickly as possible, I was not in a position to write any significant additional commentary to accompany those postings at that time. 

Today my inbox is filled with concerned queries regarding this issue from Chromebook and Chromebox users who found that Google page to be relatively opaque.

Does this bug apply to us? Should we rush to upgrade? What happens if something goes wrong? Should our school be concerned — we’ve got lots of students using Chromebooks, what should we do? Help!

Here’s the executive summary — perhaps the way that Google should have said it: DON’T PANIC! — especially if you have strong passwords. Most of you don’t really have to worry much about this one. But please do keep reading, especially and definitely if you’re a corporate user or someone else in a particularly high security environment.

This is not a large-scale attack vulnerability, where millions of devices can be easily compromised. In fact, even in worst case scenarios, the attack is computationally “expensive” — meaning that much more “targeted” attacks, e.g., against perceived “high-value” individuals, would be the focus.

Google has already taken steps in their routine Chrome OS updates to mitigate some aspects of this problem and to make the attack even less practical from the standpoint of most individual users, though the vulnerability cannot be completely closed via this approach for everyone.

The underlying problem is a flaw in the firmware (the programming) of a specific chip in these devices, called a TPM. Google didn’t expand that acronym in their notice, so I will — it stands for Trusted Platform Module.

The TPM is a crucial part of the cryptographic system that protects the data on Chrome OS devices. It’s sort of the “roach motel” of security chips — certain important crypto key data gets in there but can’t get out (yet can still be utilized appropriately by the system).

The TPM firmware flaw in question makes the possibility of “brute force” guessing of internal crypto keys more practical in a targeted sense, but again, not at large scale. And in fact, if you have a weak password, that’s a far greater vulnerability for most users than this TPM bug ever would have been. Google’s mitigations of this problem already provide good protection for most individual users with strong passwords.

C’mon, switch to a strong password already! You’ll sleep better.
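
If you want a feel for why the password matters so much here, the brute-force arithmetic is easy to sketch. The guess rate below is an arbitrary illustration, not a measured figure for this or any other attack:

    # Back-of-envelope: years to exhaust a password space at an assumed
    # guess rate. The 1e9 guesses/second figure is purely illustrative.
    def years_to_exhaust(alphabet_size, length, guesses_per_sec=1e9):
        keyspace = alphabet_size ** length
        return keyspace / guesses_per_sec / (3600 * 24 * 365)

    print(years_to_exhaust(26, 8))   # 8 lowercase letters: minutes, not years
    print(years_to_exhaust(94, 12))  # 12 printable-ASCII chars: millions of years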

It’s really in high security corporate environments and similar situations where the TPM flaw is of more concern, particularly where individual users may be reasonably expected to be targets of security attacks.

Where firms or other organizations are using their own crypto certificates via the TPM to allow corporate or other access (or use “Verified Access” for enterprise-managed authentication), the TPM bug is, at the very least, definitely worthy of serious consideration.

Ordinary users can upgrade their TPM firmware if they wish (in enterprise-managed environments, you will likely need administrative permission to perform this). The procedure uses the “powerwash” function of the devices, as explained on the Google page.

But as also noted there, this is not a risk-free procedure. Powerwash wipes all user data from the device, and devices can fail to boot if things go wrong during the process. There are usually ways to recover even from that eventuality, but you probably don’t want to be in that position if you can reasonably avoid it.

For the record, I am personally not upgrading the TPM firmware on the Chrome OS devices that I use or manage at this time. They all have decent passwords, and especially for remote users I won’t risk the powerwash sequence for now.

I am of course monitoring the situation and will re-evaluate as necessary. Google is working on a way to update the TPM firmware without a powerwash — if that comes to pass it will significantly change the equation. And of course if I had to use any of these devices in an environment where TPM-based crypto certificates were required, I’d consider a powerwash for TPM firmware upgrade to be a mandatory prerequisite.

In the meantime, be aware of the situation, think about it, but once again, don’t panic!

–Lauren–

Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems


I’m really not in a good mood right now and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussions of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!” was one talking point my caller reported seeing highlighted for fabricating lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms that were surreptitiously financed by Putin’s minions, but abuse of “microtargeting” ad systems by USA-based operations, are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, physical mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to ever more precisely target individuals, to sell merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Since the level of interaction granularity is so deep on major search and social media sites, the precision ad targeting opportunities become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the ever-increasingly complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, they tended to be (but certainly were not always) comparatively transparent in terms of who was paying for those ads, and the ads themselves were often aimed at explicit campaign fundraising or pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but to literally disseminate ideas, where no actual conversion, no actual purchase per se, is involved at all. Merely showing the ads to as many carefully targeted users as possible is the usual goal, though just blasting out an ad willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — contents to differently targeted users, who each only see what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, these firms’ social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and left have engaged in this reprehensible behavior, but the right to a much greater and effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

Like various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).

Specifically in the context of ads, an obvious necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should have maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate the same ads) can better judge their origins and the veracity of those ads’ contents.

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas — potentially including utterly false ones. This is where the vast majority of fake news, false propaganda, and outright lies have appeared in this context — a category that Russian government trolls apparently learned to play like a concert violinist.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities — for example, to learn that the fictional group in Fresno actually runs out of the Kremlin, or is really some shady racist, alt-right group.

On a parallel track, many of these ads should be blocked before they reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that they will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory, and we’ll be interested to see how well this works in practice.

Major firms’ online ad platforms will undoubtedly need significant and in some cases fairly major changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–

Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem


In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute means that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is trying again to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.

In the months since last March when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly in respect to monetization and search visibility of such videos. 

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) when major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics that are the most likely to suffer from relatively routine “contamination” of trending lists and results from fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing human review on a frequent basis (appropriate review intervals to be determined). 

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.
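
To make that proposal a bit more concrete, here’s a toy sketch of the freeze-and-review idea. Every name here is hypothetical; this is my notion of the mechanism, not any actual YouTube system:

    # Toy sketch: while a major-event freeze is active, algorithmic
    # updates to a trending list are held pending human approval.
    class TrendingList:
        def __init__(self, videos):
            self.videos = list(videos)
            self.frozen = False

        def freeze(self):
            # Triggered automatically when a major news event is flagged;
            # the pre-event results become the locked baseline.
            self.frozen = True

        def update(self, proposed_videos, human_approved=False):
            if self.frozen and not human_approved:
                # Hold the algorithm's proposal for on-call human review
                # (notification machinery omitted from this sketch).
                return self.videos
            self.videos = list(proposed_videos)
            return self.videos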

By restricting this kind of human intervention to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems already notify specified Googlers when hardware or software needs immediate attention.

Much the same kind of priority-based paradigm should apply to quickly bring humans into the loop when major news events otherwise could trigger rapid degeneration of trending lists and results.

–Lauren–

How to Fake a Sleep Timer on Google Home


UPDATE (October 17, 2017): Google Home, nearly a year after its initial release, finally has a real sleep timer! Some readers have speculated that this popular post that you’re viewing right here somehow “shamed” Google into final action on this. I wouldn’t go that far. But I’ll admit that it’s somewhat difficult to stop chuckling a bit right now. In any case, thanks to the Home team!

– – –

I’ve long been bitching about Google Home’s lack of a basic function that clock radios have had since at least the middle of the last century — the classic “sleep timer” for playing music until a specified time or until a specific interval has passed. I suspect my rants about this have become something of a chuckling point around Google by now.

Originally, sleep timer type commands weren’t recognized at all by GH, but eventually it started admitting that the concept at least exists.

A somewhat inconvenient but seemingly serviceable way to fake a sleep timer is now possible with Google Home. I plead guilty, it’s a hack. But here we go.

Officially, GH still responds with “Sleep timer is not yet supported” when you give commands like “Stop playing in an hour.”

BUT, a new “Night Mode” has appeared in GH firmware, at least since revision 99351 (I’m in the preview program, you may or may not have that revision yet, or it may have appeared earlier in some cases).

This new mode — in the device settings reachable through the Home app — permits you to specify a maximum volume level during specified days and hours. While the description doesn’t say this explicitly, it turns out that this affects music streams as well as announcements (except for alarms and timers). And, you can set the maximum volume for this mode to zero (or turn on the Night Mode “Do Not Disturb” setting, which appears to set the volume directly to zero).

This means that you can specify a Night Mode activation time — with volume set to minimum — when you want your fake “sleep timer” to shut down the audio. The stream will keep playing — using data of course — until the set Night Mode termination time or until you manually (e.g., by voice command) set a higher volume level (for example, in the morning). Then you can manually stop the stream if it’s still playing at that point.

Yep, a hack, but it works. And it’s the closest we’ve gotten to a real sleep timer on Google Home so far.

Feel free to contact me if you need more information about this.

–Lauren–

Major Porn Site’s Accessibility Efforts Put Google to Shame

You just can’t make this stuff up. By now you’ve perhaps become somewhat weary of my frequent discussions of Google’s growing accessibility failures, as their site documentation, blogs, and user interfaces continue to devolve in ways that severely disadvantage persons with less than perfect vision or who have other special needs — a rapidly growing category of users that Google just doesn’t seem to consider worthy of their attention. Please see:  “How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)” — https://lauren.vortex.com/2017/06/26/how-google-risks-court-actions-under-the-ada-americans-with-disabilities-act — and other posts linked therein.

Now comes word of a major site that really understands these issues — that really does appreciate these accessibility concerns and is moving in the correct direction, in sharp contrast (no pun intended) with Google.

Here’s the kicker — it’s a porn site — supposedly the planet’s largest “adult entertainment” site, in fact. While I’m not a user of the site myself, tech news publications have confirmed the details of the accessibility press release that Pornhub distributed a few days ago.

Pornhub has rolled out world-class accessibility options across its platform, including visual element changes, narrated videos, and a wide array of keyboard shortcuts. “Enhancing the ability to contrast colors or to flip the text color and the background color or things like that can be very helpful to people who have low vision, which means they’re legally and functionally blind but they have some vision left,” says Danielsen. “Maybe they’re not using text to speech or braille to read the site.”

Bingo. They get it. These are the kinds of options I’ve been urging Google to provide for ages for their desktop services, to no avail.

At first glance, one might wonder why the hell a pornography site would be able to figure this out while Google, comprising some of the smartest people on the planet, keeps moving in exactly the wrong direction when it comes to major accessibility concerns.

Perhaps the explanation is that Google is great at technology but not so great when it comes to understanding the needs of people who aren’t in their target demographics. 

On the other hand, a successful porn site must by definition understand what their users of all stripes want and need. Porn is very much a people-oriented product.

I’m still convinced that the great Googlers at Google can get this right, if they choose to do so and allocate sufficient resources to that end. 

You’re probably expecting some sort of pun to close this post. Accessibility is a serious issue, and when a porn site tops Google in this important area, that’s a matter for sober deliberation, not for screwing around. After all, sometimes a cigar is indeed just a cigar.

–Lauren–

How the Alt-Right Plans to Control Google

The European Union is threatening massive fines against firms like Google, Facebook, and Twitter if contents that the EU considers to be “forbidden” aren’t removed quickly enough. The EU (and now some non-EU countries) are demanding the right to impose global censorship on Google, proclaiming that nobody on Planet Earth can be permitted to view materials that these individual countries wish to hide from their own citizens’ eyes.

The U.S. Congress is feigning newfound horror at their sudden realization that yes, Russians did influence the 2016 elections, and is now suggesting that only our brilliant politicians and bureaucrats know how to fix the problem.

Meanwhile, the horned and spiked-tail demons of the alt-right like Steve Bannon are promoting a wet dream of converting firms like Google into “public utilities” — where search results would be micromanaged for the benefit of racist, sexist antisemites like themselves — and for his president apparently residing amidst a chalked pentagram in the Oval Office.

The common thread that defines this tapestry is third parties demanding to control what these firms are permitted to let you see, to strip these firms of their rights to decide what sorts of content they do or do not wish to host.

We’ve seen this attack ramping up for years. Russia and China are obvious offenders. China’s vast Internet censorship regime is without equal and is the model to which most other countries’ Internet censorship dreams aspire. Where the technology for censorship is less advanced, the reliable mechanism of nightmarish fear can be employed — like Thailand’s recent sentencing of a man to 35 years in prison for Facebook posts critical of their damnable monarchy.

We’ve watched the EU’s escalating demands for years, knowing full well that they’d never be satisfied without the powers of global censorship being bestowed unto them.

And now joining the information control chorus are those worst elements of the alt-right. They’re combining forces with an array of other parties who just can’t get it through their thick skulls that their calls for “search transparency and equality” would result in a lowering of search quality for users to an extent that you might as well try to pick out quality websites from an old copy of the Yellow Pages.

Their collective goal is to create a playground for the worst of low quality sites, scammers, crooks, racists, fake news purveyors, and the rest of their similarly decrepit lowlife scumbags.

The alt-right really started to engage on this when firms like Google and Facebook (and to a lesser extent Twitter) recently and wisely ramped up enforcement of their longstanding prohibitions against hate speech and associated garbage, and began seriously clamping down on fake propaganda search listings and posts.

This terrifies the alt-right. They’ve built their entire business model on leveraging these platforms to spew forth their hateful and lying bile, and feel threatened at the prospect of their diseased spigot being closed off. But they’re still smart enough to align their rants with those on the far left who similarly wish to impose their own viewpoints and censorship regimes onto the rest of us.

The results can be dripping with irony. The calls for making firms like Google “public utilities” are particularly laughable, especially given that right-wing politicians have long fought against public utility designations for the dominant ISPs — who have spent many decades carving out geographic fiefdoms devoid of competition, where their predatory pricing policies could be maintained.

Yet anyone on the planet who has Internet access can freely connect to firms like Google, Facebook, and Twitter — and use these firms’ services without charge — unless their own governments themselves try to block them! Not only is there no possible case for such firms to be considered as public utilities, but there is no historical precedent of any kind on which to base such a concept.

Once again, it’s all really about governments and bottom-of-the-barrel miscreants trying to impose information control on the rest of us.

The scammers and crooks want their sites high in search results. The racists and other hatemongers want to disseminate their filth without limits. Russian trolls squirm at the prospect of no longer being able to so easily and illicitly influence future elections. Politicians dream of imposing ever more total global censorship.

None of these evil players want firms like Google to have the continued ability to control the data on their own platforms for the benefit of users overall and for the broader community.

It’s through their politically motivated, falsified “public interest” claims that the alt-right and other malevolent forces are plotting to control Google, Facebook, Twitter, and more. The thirst for control over these firms even transcends these groups’ individual political differences in many cases.

It is up to us to derail these plots, to not be taken in and rolled over by their propaganda and lies, irrespective of our own political and social affiliations.

With strikingly few exceptions, pretty much every time that governments become involved in controversies relating to information control or technology policies, we find that politicians and their minions manage to royally screw up everything, often for everyone except (oh so conveniently) themselves.

We won’t be fooled again.

–Lauren–

Why Won’t Roku Talk About Their Privacy Policies?

UPDATE (November 4, 2017): I ultimately was able to get specific answers from Roku to my questions, via their corporate representatives. The bottom line is that based on that information, I do not consider Roku (or other popular streaming devices) to be suitable for the kind of applications described below, for a variety of reasons. I recommend non-networked, standalone media players (~$30 or less) and an ordinary HDMI cable for these situations.

– – –

Roku makes some excellent, inexpensive video streaming products. I actually have both a Roku Stick and a great Google Chromecast — they each have somewhat different best use cases.

Some days ago the chief security officer at a large firm contacted me with a question about a potential use for Roku units in a corporate environment. They already had Roku boxes or sticks on most of their meeting room monitors, and were concerned about a specific security/privacy issue.

Essentially, they were considering use of the existing Roku units — in conjunction with the Roku Media Player app available to download to those units — to display locally created video assets.

My immediate reaction was to discourage this — much preferring a method that was totally under their control with no chance of leakage outside their own networks — even if that meant direct wiring to the displays. But for a number of reasons he insisted that he wanted to explore the use of Rokus in this application.

Unfortunately, figuring out the privacy and security implications of such a course has so far proven to be nontrivial.

The lengthy online Roku privacy policies page goes into a great deal of detail concerning the information that they collect from your devices — Wi-Fi info, channel data, search data, etc. — all sorts of stuff related to viewing of “conventional” Roku-capable streaming channels.

But the Roku Media Player app is different. It doesn’t play external streams; it plays your own video or audio files from your own local server. That Roku privacy page seems to make no specific mention of their Media Player at all.

So I went to the Roku Forum to ask what sorts of data — Usage info? Thumbnail images? EXIF or other metadata? Filenames? — would be collected by Roku (or other third parties) from Roku Media Player usage.

Nothing but crickets. No responses at all. Hmm.

Next, I sent a note with the same information request to the privacy email address that Roku specified for additional questions. 

Silence.

Then I asked on G+ and Twitter. A couple of retweets later, I was contacted by the Roku Support Twitter account. They suggested the privacy email address. When I told them that I’d already tried that, they suggested the Roku legal department email address.

You know where this is going. Still no reply at all.

At this stage I don’t know what’s up with Roku. Are they just so super busy that they can’t at least shoot out an acknowledgement of my queries? Or perhaps they’re scurrying around trying to figure out what their own Media Player actually does before replying to me at all. Or maybe they just hope that I’ll go away if they don’t acknowledge my email. (To paraphrase Bugs Bunny: “They don’t know me very well, do they?”)

To say that this state of affairs doesn’t exactly create a wellspring of confidence in Roku would be a significant understatement. 

Now I want to know the answers to my questions about Roku’s privacy policies irrespective of the query from that original firm that got this all started.

We shall see what transpires.

–Lauren–

When Google Gets Your Location Wrong!

Recently, Google’s desktop news began showing me the weather and local news for Detroit, Michigan, rather than for my corner of Los Angeles as had been Google’s standard practice up to that point. And local Google desktop search results are suddenly all for Detroit instead of Los Angeles — not particularly useful to me.

Meanwhile, my Google Home unit, which always happily reported the weather for my local zip code, now thinks that I’m somewhere in Hawaii instead. And my Chromecast’s screensaver is showing current temperatures that don’t seem to match any of these locales.

What’s going on? Damned if I know! And it’s a real problem, because Google no longer provides any obvious means for you to correct these kinds of errors.

When I started asking around about this, I received a pile of responses from other Google users with similar problems. For some, their locations are off a bit; for others, they’re way, way wrong, as in my case.

Since some users had actually traveled to those locations at some point in the past, it appears that Google somehow got “stuck” on those old locations. But in my situation, I’ve never been to either Detroit or Hawaii. In fact, I haven’t been out of my L.A. cage in years.

The one device where my location seems to be known correctly by Google at this time is my Android phone — and that’s because the location is being pulled from the phone itself (e.g., the GPS) — as Google itself notes at the bottom of results pages on my phone.

The bottom of those Google pages on desktop says that they’re getting my location from my Internet address. That’s quite bizarre, since that IP address has been stable for months at a time, and more to the point, the public IP address geolocation databases I’ve checked all correctly show me in L.A. (either the city in general or more specifically here in the West San Fernando Valley).
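
If you’re curious what the public geolocation databases say about your own connection, it’s easy to check. Here’s a minimal Python sketch using only the standard library plus the free ipinfo.io endpoint (the URL and field names here are specific to that one example service, which has no connection to whatever sources Google consults internally):

```python
# Minimal sketch: ask a public geolocation service where our public IP
# address appears to be. This reflects that one service's database only;
# it tells us nothing about Google's internal location determinations.
import json
import urllib.request

def public_ip_location(url: str = "https://ipinfo.io/json") -> dict:
    """Return the geolocation record for this machine's public IP."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

info = public_ip_location()
print(info.get("ip"), "->", info.get("city"), info.get("region"))
```

Checks like this one all place me squarely in L.A., which is exactly why Google’s “Detroit” determination is so baffling.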

At the bottom of those Google pages there is a “Use precise location” link — but as far as I can tell it has no useful effect. Google keeps insisting that I’m in Detroit in all desktop results.

As for the wrong location data now apparently being used by Chromecast and being reported by Google Home … they just add a layer of confused frosting on top of the foundational cake of these annoying Google location errors.

I realize that there are people who make a hobby out of trying to hide their locations from Google — and that’s their choice. But personally, I value the location-based services that Google provides. It’s frustrating to me — and many other users — that Google does not provide some sort of explicit mechanism for us to update this location data when it goes wrong.

One thing’s for sure: I’m not moving to Detroit or Hawaii. OK, if I had to choose, Detroit is a fine city, but I don’t do well in cold winters, so Hawaii would likely win out.

But since in reality I’m not planning a move from L.A., I’d sure appreciate Google setting my location as being where I actually am, rather than thousands of miles away.

–Lauren–

UPDATE (September 28, 2017): As of yesterday morning, Google had me “on the move” again. Google’s desktop services (going by my IP address) insisted that I was in “San Diego County” — and my Google Home claimed that I was in Las Vegas! Well, “getting closer” (to paraphrase Bullwinkle). Then late last night, Home switched to my correct location. This morning I found that the desktop services have my location correct as well. Did the spacetime continuum shift? Did someone at Google hear me? We may never know.

Google’s Gmail Phishing Warnings and False Positives

Recently there have been messages from my policy-oriented mailing lists (at least one of my lists has been running for more than a quarter century) that Google’s Gmail (and its associated Inbox application) is tagging as likely phishing attempts — scary red warnings and all!

While I don’t yet understand the entirety of this situation, the circumstances behind one particular category of these seem clear, and I’ll admit that I chuckle a bit every time I think about it now.

One might assume that with Google’s vast AI resources and presumably considerable reputation data relating to incoming mail characteristics, a sophisticated algorithm would be applied to pick out likely email phishing attempts.

In reality, at least in this case, it appears that Google is basically using the venerable old UNIX/Linux “grep” command or some equivalent, and in a rather slipshod way, too.

As you know, I discuss Google policy issues a great deal. Many Google users come to me in desperation for advice on Google-related problems. I write about Google technical matters frequently, as I explained in:

“The Google Account ‘Please Help Me!’ Flood” – https://lauren.vortex.com/2017/09/12/the-google-account-please-help-me-flood

One typical recent message of mine that’s often been getting tagged as a likely phish by Google was:

“Protecting Your Google Account from Personal Catastrophes” –
https://lauren.vortex.com/2017/09/07/protecting-your-google-account-from-personal-catastrophes

Google was apparently convinced that this message was likely a phish, and dramatically warned a subset of my list recipients of this determination.

But as you can see from the message itself, there’s nothing in there asking for users’ account credentials, nothing to suggest that it’s email attempting to fool the recipient in any way.

So why did Google think that this was likely a horrific phishing email?

Here’s why. First, my message had the audacity to mention “Google Account” or “Google Accounts” in the subject and/or body of the message. And second, one of my mailing lists is “google-issues” — so some (digest format) recipients received the email from “google-issues-request@vortex.com” (vortex.com is my main domain of very long standing — it was one of the first 40 dot-com domains ever issued, and I’ve been using it continually since then, for more than 30 years).

Note that the character string “google” is on the LEFT side of the @-sign. There’s nothing there trying to fool someone into thinking that the email came from “google.com” or from any other Google-related domain.

Apparently what we’re dealing with here is a simplistic (and frankly, rather haphazard in this respect at least) string-matching algorithm that could have come right out of the early 1970s!
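
To illustrate what I mean (and this is purely my own guess at the behavior from the outside, not anything Google has confirmed), a check naive enough to produce this exact false positive could be as simple as the following Python sketch:

```python
# Purely illustrative guess at the sort of naive string matching that
# could produce these false positives -- NOT Google's actual code.

def naive_phish_flag(sender: str, subject: str) -> bool:
    """Flag mail that merely *mentions* Google-ish strings anywhere."""
    text = (sender + " " + subject).lower()
    mentions_google = "google account" in text or "google" in sender.lower()
    # Sloppy origin test: just the text after the @-sign.
    sender_domain = sender.lower().rsplit("@", 1)[-1]
    actually_from_google = sender_domain == "google.com"
    return mentions_google and not actually_from_google

# False positive: "google" appears only on the LEFT of the @-sign,
# and nothing here impersonates a Google domain.
print(naive_phish_flag(
    "google-issues-request@vortex.com",
    "Protecting Your Google Account from Personal Catastrophes"))  # True
```

A check that weighed sender reputation, or that at least noticed that the “google” string sits entirely in the local part of a long-established non-Google domain, would never have flagged this message.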

I’ll add that I’ve never found a way to get Google to “whitelist” well-behaved senders against these kinds of errors, so some users see these false phishing warnings repeatedly. I’m certainly not going to change the names of my mailing lists or treat the term “Google Accounts” as somehow verboten!

Google of course wants Gmail to be as safe a user environment as possible, and in general they do a great job at this. But false positives for something as serious as phishing warnings are not a trivial matter — they can scare users into immediately deleting potentially useful or important messages unread, and can sully the reputations of innocent senders.

If nothing else, Google needs to establish a formal procedure to deal with these kinds of errors so that demonstrably trustworthy senders can be appropriately whitelisted, rather than face these false positive warnings alarming their recipients repeatedly.

And a bit more sophistication in those phishing detection algorithms would be appreciated as well. 

In the meantime, I expect that some of you will again get Gmail phishing warnings — on THIS message. You know who you are. Sorry about that, Chief!

Oh, by the way, Google seems to have recently become convinced that I live either in Detroit or somewhere in Hawaii (I’ve never been to either). I’d probably prefer the latter over the former, but I’m still right here in L.A. as always. Unfortunately, there’s no obvious way these days to correct these kinds of Google location errors, even when your IP address is clearly geolocating correctly for everyone else — as mine is. If you’ve been having issues with Google-determined location being incorrect for you on desktop Google Search, on your phone, on Chromecasts, or with any other devices (e.g., Google Home), please let me know. Thanks.

–Lauren–

Solving the Gmail “Slow Startup” Problem

I’ve been fighting with slow Gmail startups — hangs that begin a few seconds after page initialization and take a minute or more to release — for quite some time. After some testing with Googler Colm Buckley today, we’ve determined that the problem — in my case at least — was apparently the Hangouts chat panel enabled on the lower left side of the Gmail window.

This appears to be a particular problem when running the Chrome browser. While I’ve also long used the excellent Chrome Hangouts extension, I’ve found the Gmail chat panel handy to keep tabs on the current “presence” status of frequent contacts without having to leave the Hangouts extension window open as well.

Since I disabled Chat from the Gmail (gear) settings, the hangs have so far ceased. If you’ve been seeing a similar problem with Gmail, you might want to try this solution. My guess is that Gmail’s old chat panel is on its way toward being deprecated out of existence in any case. Thanks again, Colm!

–Lauren–

Google’s Stake Through the Hearts of Obnoxious Autoplay Videos

Yesterday, in “Apple’s New Cookie Policy Looks Like a Potential Disaster for Users” — https://lauren.vortex.com/2017/09/14/apples-new-cookie-policy-looks-like-a-potential-disaster-for-users — I lambasted Apple’s plans to unilaterally deeply tamper with basic Web cookie mechanisms (including first-party cookies) in a manner that won’t actually provide significant new privacy to users in the long run, but will likely create major collateral damage to innocent sites across the Internet. 

I also mentioned that in my view Google has taken a much more rational approach — focused on specific content issues without breaking fundamental network paradigms — and in that context I mentioned their plans to tame obnoxious autoplay videos.

We all know about those videos — often ads — that start blaring from your speakers as soon as you hit a site. Or even worse, videos that lurk silently on background tabs for some period of time and then suddenly blare at you — often with loud obnoxious music. Your head hits the wall behind you. Your coworkers scatter. Your cat violently pops into the air and contemplates horrific methods of revenge.

As it happens, Google has just blogged on this topic, with a rather mundane post title covering some pretty exciting upcoming changes to their Chrome browser.

In “Unified Autoplay” — https://blog.chromium.org/2017/09/unified-autoplay.html — Google describes in broad terms its planned methodologies for automatically avoiding autoplay in situations where users are unlikely to want autoplay active, and also for providing to users the ability to mute videos manually on a per-site basis.

Frankly, I’ve long been lobbying Google for some way to deal with these issues, and I’m very pleased to see that they’ve gone way beyond basic functionality by implementing a truly comprehensive approach.

For most users, once this stuff hits Chrome you probably won’t need to take any manual actions at all to be satisfied with the results. If you’re interested in the rather fascinating technical details, there are two documents that you might wish to visit.

Over on Chromium Projects, the write-up “Audio/Video – Autoplay” — https://sites.google.com/a/chromium.org/dev/audio-video/autoplay — goes into a great deal of the nitty-gritty, including the timeline for release of these features to various versions of Chrome.

Another document — “Media Engagement Index” — https://docs.google.com/document/d/1_278v_plodvgtXSgnEJ0yjZJLg14Ogf-ekAFNymAJoU/edit?usp=sharing — explains the learning and deployment methodologies for determining when a user is likely to want autoplay for any given video. This appears to have been an internal Google doc that was switched to public visibility at some point — so it’s especially Googley reading.
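
To give a rough flavor of the concept (and to be clear, the field names, threshold, and logic below are my own inventions for illustration, not Chrome’s actual algorithm or values), a media-engagement-style heuristic might look something like this in Python:

```python
# Toy sketch of a media-engagement-style score, loosely modeled on the
# per-site ratio described in the public "Media Engagement Index" doc.
# Field names and the threshold are invented here for illustration.
from dataclasses import dataclass

@dataclass
class SiteMediaStats:
    visits: int             # visits to the site where media was present
    significant_plays: int  # playbacks that were audible, visible, long enough

def engagement_score(stats: SiteMediaStats) -> float:
    """Fraction of visits that included a 'significant' media playback."""
    if stats.visits == 0:
        return 0.0
    return stats.significant_plays / stats.visits

def allow_autoplay_with_sound(stats: SiteMediaStats,
                              threshold: float = 0.2) -> bool:
    # A site earns unmuted autoplay only after the user has repeatedly
    # chosen to watch or listen there; everything else starts muted.
    return engagement_score(stats) >= threshold

print(allow_autoplay_with_sound(SiteMediaStats(visits=50, significant_plays=30)))  # True
print(allow_autoplay_with_sound(SiteMediaStats(visits=50, significant_plays=2)))   # False
```

The real system is far more nuanced (what counts as a “significant” playback is itself carefully defined in that document), but the core idea is the same: sites where you’ve repeatedly chosen to watch or listen earn unmuted autoplay, and everyone else starts silent.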

There are two important stakeholder categories here. One is the well-behaved sites that need to display their videos (including ads — after all, ads are what keep most major free sites running). And of course, the other stakeholder is the user who doesn’t want their lap ripped open by the claws of a kitty suddenly terrified by an obnoxious and unwanted autoplay video.

The proof will be in actually using these new Chrome features. But it appears that Google has struck a good working balance for a complex equation incorporating both site and user needs. My kudos to the teams.

–Lauren–