Google Home Is Leaving Elderly and Disabled Users Behind

I continue to be an enormous fan of Google Home — for example, please see my post “Why Google Home Will Change the World” from a bit over a year ago.

But as time goes on, it’s becoming obvious that a design decision by Google in the Home ecosystem is seriously disadvantaging large numbers of potential users — ironically, the very users who might otherwise most benefit from Home’s enormous capabilities.

You cannot install or routinely maintain Google Home units without a smartphone and the Google Home smartphone app. There are no practical desktop-based or remotely accessible means for doing this — not even for someone else to do it on your behalf. A smartphone on the same local Wi-Fi network as the device is always required for these purposes.

This means that many elderly persons and individuals with physical or visual disabilities — exactly the people whose lives could be greatly enhanced by Home’s advanced voice query, response, and control capabilities — are up the creek unless they have someone available in their physical presence to set up the device and make any ongoing configuration changes. Additionally, all of the “get more info” links related to Google Home responses are restricted to the smartphone Home app.

I can see how imposing these restrictions made things faster and easier for Google to bring Home to market. For example, by requiring a smartphone for initial Wi-Fi configuration of Home, they avoided building desktop interfaces for this purpose, and leveraged smartphones’ already configured Wi-Fi environments.

But that’s not a valid excuse. You might be surprised how many people routinely use the Internet but do not have smartphones, have never used text messaging on conventional cell phones — or hell, don’t even have cell phones at all!

Now, one could argue that perhaps this wouldn’t matter so much if we were talking about an app to find rave parties or the best surfing locations. But the voice control, query, and response capabilities of Home are perfectly suited to greatly improving the lives of the very categories of users who are shut out from it — unless they have someone with a smartphone in their physical presence to get the devices going and to perform routine ongoing configuration changes and other non-voice interactions.

In fact, many persons have queried me with great excitement about Home, only to be terribly disappointed to learn that smartphones were required and that they were being left behind by Google, yet again.

I have in the past asked the question “Does Google Hate Old People?” — and I’m not going to rehash that discussion here. Perhaps Google already has plans in the works to provide non-smartphone access for these key Home functionalities — if so, I haven’t heard about them, but it’s clearly technically possible to do.

I find it distressing that this all seems to follow Google’s pattern of concentrating on their target demographics at the expense of large (and in many cases rapidly growing) categories of users who get left further and further behind as a result.

This is always sad — and unnecessary — but particularly so with Home, given that the voice-operated Home ecosystem would otherwise seem tailor-made to help these persons in so many ways. 

And at the risk of being repetitious, since I’ve been making the same statement quite a bit lately: Google is a great company. Google can do better than this.


Facebook’s Big, Bad Lie About Age Discrimination

Sometimes Facebook’s manipulative tendencies are kept fairly well below the radar. But in some cases, their twisted sensibilities are so blatant that even their own public explanations immediately ring incredibly hollow.

Such is the case with their response yesterday to a ProPublica report accusing their advertising systems of enabling explicit (and in the opinion of many experts, illegal) age discrimination by advertisers seeking employees.

This one is as obvious as Bozo’s bright red nose. Facebook permits advertisers to target employment ads to specific age groups. Facebook users who are not in the designated groups would typically have no way to know that the ads existed at all!

Facebook’s attempted explanation is pathetic:

“US law forbids discrimination in employment based on age, race, gender and other legally protected characteristics. That said, simply showing certain job ads to different age groups on services like Facebook or Google may not in itself be discriminatory — just as it can be OK to run employment ads in magazines and on TV shows targeted at younger or older people.”

The evil duplicity in this statement hits you right in the face. Sure, advertisers run ads on TV shows and in magazines that are oriented toward certain age groups. But there’s nothing stopping adults of other ages from reading those magazines or watching those shows if they choose to do so — and seeing those ads.

By contrast, in Facebook’s tightly controlled, identity-focused ecosystem, the odds are practically nil that you’ll even realize that particular ads exist if you don’t fall into the targeted range. The old saying holds: “You can’t know what you don’t know.”  Facebook’s comparison with traditional media is false and ridiculous. 

ProPublica notes that other large Web services, including Google, permit ad targeting by age. But unlike Google — where many services can be used without logging in and pseudonyms can be easily created — Facebook is almost entirely a walled garden: logins and your true identity are required under their terms of service to do pretty much anything on the platform.

Given Facebook’s dominance in this context, it’s easy to see why their response to these ad discrimination complaints is being met with such ridicule. 

It’s clear that this kind of Facebook age-based ad targeting by advertisers is an attempt to avoid the negative publicity and legal ramifications of explicitly stating the ages of their desired applicants. They hope to accomplish the same results by preventing anyone of the “wrong” ages from even seeing the ads — and the advertisers’ denials of this charge are nothing more than indignation at their schemes (empowered by Facebook) being called out publicly.

Preventing adult users of any age from seeing employment ads is unethical and just plain wrong. If it’s not illegal, it should be.

And that’s the truth.


A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly

This evening, a reader of my blog post from earlier this year (“YouTube’s Dangerous and Sickening Cesspool of ‘Prank’ and ‘Dare’ Videos”) asked if I knew about YouTube’s “laxative” prank and dare videos. Mercifully, I didn’t know about them. Unfortunately, now I do. And while it’s all too easy to plow the fields of toilet humor when it comes to topics like this, it’s really not a funny subject at all.

In fact, it can be deadly.

Some months back I had heard about a boy who — on a dare — ate 25 laxative brownies in one hour. The result was near total heart and kidney failure. He survived, but just barely.

What I didn’t realize until today is that this was far from an isolated incident, and that there is a stunningly vast corpus of YouTube videos explicitly encouraging such dares — and even worse, subjecting innocent victims to “pranks” along very much the same lines.

Once I began to look into this category, I was shocked by its sheer scope.  For example, a YouTube search for:

laxative prank

currently yields me 132,000 results. Of those, over 2,000 were uploaded in the last month, over 300 in the last week, and 10 just today!

As usual, it’s difficult to know which of these are fake and which are real. But that hardly matters, because virtually all of them have the effect of encouraging impressionable viewers to duplicate their disgusting and dangerous feats.

Many of these YouTube videos are very professionally and slickly produced, and often are on YouTube channels with very high subscriber counts. It also appears common for these channels to specialize in producing a virtually endless array of other similar videos in an obvious effort to generate a continuing income stream — which of course is shared with Google itself.

Is there any possible ethical justification for these videos being hosted by Google, and in many cases also being directly monetized?

No, there is not.

And this is but the tip of the iceberg.

YouTube is saturated with an enormous range of similarly disgusting and often dangerous rot, and the fact that Google continues to host this material provides a key continuing incentive for ever larger quantities of such content to be produced, making Google directly culpable in its spread.

I spent enough time consulting internally with Google to realize that there are indeed many situations where making value judgments regarding YouTube content can be extremely difficult, to say the least.

But many of these prank and dare videos aren’t close calls at all — they are outright dangerous and yes, potentially deadly. And as we’ve seen they are typically extremely easy to find.

The longer that these categories are permitted to fester on YouTube, the greater the risks to Google of ham-fisted government regulatory actions that frankly are likely to do more harm than good.

Google can do so much better than this.


Perhaps the Best Feature Ever Comes to Chrome: Per Site Audio Muting!

UPDATE (January 25, 2018): This feature is now available in the standard, stable, non-beta version of Chrome!

– – –

Tired of sites that blare obnoxious audio at you from autoplay ads or other videos, often from background tabs, sometimes starting long after you’ve moved other tabs to the foreground? Aren’t these among the most disgustingly annoying of sites? Want to put them in their place at last?

Of course you do.

And as promised by Google some months ago, the new Chrome browser beta — I’m using “Version 64.0.3282.24 (Official Build) beta (64-bit)” on Ubuntu Linux — provides the means to achieve this laudable goal.

There are a number of ways to use this truly delightful new feature.

If you right-click the address bar padlock (or, for unencrypted pages, the “i” icon that usually appears there), a settings tab opens. You may see a sound “enable/disable” link directly on that tab, or you may need to click “Site settings.” In the former case, you can choose “allow” or “block” directly; in the latter, you can do so from the “Sound” entry on the full site settings page that appears.

There’s an easier way, too. Right-click on the offending site’s tab and choose “Mute site” or “Unmute site” from there.

These mute selections are “sticky” — they will persist between invocations of the browser — exactly the behavior that we want.

You can also manually enter a list of sites to mute (and delete existing selections) at the internal address: 


And as a special bonus, consider enabling the longstanding “Tab audio muting UI control” experiment in Chrome on the page at the internal address:


This lets you mute or unmute a specific tab by clicking on the tab “speaker” icon, without changing the underlying site mute status — perfect if you want to hear the audio for a specific video at a site that you normally want to keep firmly gagged. 

I have long been agitating for a site mute feature in Chrome — my great thanks to the Chrome team for this excellent implementation!

In due course we can expect this new capability to find its way from Chrome beta to stable, but for now, if you’re running the latest beta version, you should be able to start enjoying this right now.

You’re going to love it.


Google Wisely Pauses Move to Impose Accessibility Restrictions

Last month, in “Google’s Extremely Shortsighted and Bizarre New Restrictions on Accessibility Services,” I was highly critical of Google’s move to restrict Android app accessibility services to only those apps specifically helping disabled persons.

Google’s actions were assumed to be aimed at preventing the security problems that can result when these accessibility services are abused — but these services also provide critical functionality to other well-behaved apps, functionality that currently cannot be offered to most Android users in any other way.

My summary statement in that post regarding this issue was:

“The determining factor shouldn’t be whether or not an app is using an accessibility service function within the specific definition of helping a particular class of users, but rather whether or not the app is behaving in an honest and trustworthy manner when it uses those functions.”

I’m pleased to report that Google is apparently now in the process of reevaluating their entire stance on this important matter. Developers have received a note from Google announcing that they are “pausing” their decision, and including this text:

“If you believe your app uses the Accessibility API for a responsible, innovative purpose that isn’t related to accessibility, please respond to this email and tell us more about how your app benefits users. This kind of feedback may be helpful to us as we complete our evaluation of accessibility services.”

Bingo. This is exactly the approach that Google should be taking to this situation, and I’m very glad to see that the negative public reactions to their earlier announcement have been taken to heart.

We’ll have to wait and see what Google’s final determinations are regarding this area, but my thanks to the Google teams involved for giving the feedback the serious consideration that it deserves.


Risks of Google Home and Amazon Echo as 24/7 Bugs

One of the most frequent questions that I receive these days relates to the privacy of “smart speaker” devices such as Google Home, Amazon Echo, and other similar devices appearing from other firms. 

As these devices proliferate around us — driven by broad music libraries, powerful AI assistants, and a rapidly growing pantheon of additional capabilities — should we have privacy concerns?

Or more succinctly, should we worry about these “always on” microphones being subverted into 24/7 bugging devices?

The short and quick answer is yes. We do need to be concerned.

The fuller, more complete answer is decidedly more complicated and nuanced.

The foundational truth is fairly obvious — if you have microphones around, whether they’re in phones, voice-controlled televisions, webcams, or the rising category of smart speaker devices, the potential for bugging exists, with an obvious focus on Internet-connected devices.

Indeed, many years ago I began writing about the risks of cellphones being used as bugs, quite some time before it became known that law enforcement was using such techniques, and well before smartphone apps made some forms of cellphone bugging trivially simple.

And while I’m an enthusiastic user of Google Home devices (I try to avoid participating in the Amazon ecosystem in any way) the potential privacy issues with smart speakers have always been present — and how we deal with them going forward is crucial.

For more background, please see:

“Why Google Home Will Change the World” –

Since I’m most familiar with Google’s devices in this context, I will be using them for my discussion here, but the same sorts of issues apply to all microphone-enabled smart speaker products regardless of manufacturer.

There are essentially two categories of privacy concerns in this context.

The first is “accidental” bugging. That is, unintended collection of voice data, due to hardware and/or firmware errors or defects.

An example of this occurred with the recent release of Google’s Home Mini device. Some early units could potentially send a continuous stream of audio data to Google, rather than the intended behavior of only sending audio after the “hot word” phrase was detected locally on the unit (e.g. “Hey Google”).  The cause related to an infrequently used manual switch on the Mini, which Google quickly disabled with a firmware update.

Importantly, the Mini gave “clues” that something was wrong. The activity lights reportedly stayed on — indicating voice data being processed — and the recorded data showed up in user “My Activity” for users’ inspection (and/or deletion). For more regarding Google’s excellent My Activity system, please see:

“The Google Page That Google Haters Don’t Want You to Know About” –

Cognizant of the privacy sensitivities surrounding microphones, smart speaker firms have taken proactive steps to try to avoid problems. As I noted above, the normal model is to only send audio data to the cloud for processing after hearing the “hot word” phrase locally on the device.
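To make the privacy implications of that model concrete, here’s a minimal, purely illustrative sketch of hotword gating — the function and detector names are my own invention, not Google’s actual firmware API. The key property is that audio captured before the wake phrase is discarded locally and never uploaded:

```python
def gate_audio(frames, detect_hotword, send_to_cloud):
    """Stream audio frames to the cloud only AFTER a local hotword match.

    frames          -- iterable of audio frames (any type)
    detect_hotword  -- callable run locally on each frame; True on a match
    send_to_cloud   -- callable invoked only for post-trigger frames
    """
    triggered = False
    for frame in frames:
        if not triggered:
            # Pre-trigger audio (and the hotword frame itself) is examined
            # locally and then dropped -- it never leaves the device.
            triggered = detect_hotword(frame)
            continue
        send_to_cloud(frame)


# Hypothetical usage: only the speech after "hey google" is uploaded.
uploaded = []
gate_audio(
    ["background chatter", "hey google", "turn on the lights"],
    detect_hotword=lambda f: f == "hey google",
    send_to_cloud=uploaded.append,
)
print(uploaded)
```

The Home Mini incident described above was, in effect, a failure of exactly this gate: a hardware trigger caused the device to behave as though `triggered` were always true.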

Also, these devices typically include a button or switch that users can employ to manually disable the microphones.

I’ll note here that Google lately took a step backwards in this specific respect. Until recently, you could mute the microphone by voice command, e.g., “OK Google, mute the microphone.” But now Google has disabled this voice command, with the devices replying that you must use the switch or button to disable the mic.

This is not a pro-privacy move. While I can understand Google wanting to avoid unintended microphone muting that would then require users to manually operate the control on the device to re-enable the mic, there are many situations where you need to quickly disable the mic (e.g. during phone calls, television programs, or other situations where Google Home is being discussed) to avoid false triggering when the hotword phrase happens to be mentioned. 

The correct way of dealing with this situation would be to make voice-operated microphone muting capability an option in the Google Home app. It can default to off, but users who prefer the ability to quickly mute the microphone by voice should be able to enable such an option.

So far we’ve been talking about accidental bugging. What about “purposeful” bugging?

Now it really starts to get complicated. 

My explicit assumption is that the major firms producing these devices and their supporting infrastructures would never willingly engage in purposeful bugging of their own accord. 

Unfortunately, in today’s world, that’s only one aspect of the equation.

Could these categories of devices (from any manufacturer) be hacked into being full-time bugs by third parties unaffiliated with these firms? We have to assume that the answer in theory (and based on some early evidence) is yes, but we can also assume that these firms have made this possibility as unlikely as possible, and will continually work to make such attacks impractical.

Sad to say, of much more looming concern is governments going to these firms and ordering/threatening them into pushing special firmware to targeted devices (or perhaps to devices en masse) to enable bugging capabilities. In an age where an admitted racist, Nazi-sympathizing, criminal serial sexual predator resides in the White House and controls the USA law enforcement and intelligence agencies, we can’t take any possibilities off of the table. Google for one has a long and admirable history of resisting government attempts at overreach, but — as just one example — we don’t know how far the vile, lying creature in the Oval Office would be willing to go to achieve his evil ends.

Further complicating this analysis is a lack of basic public information about the hardware/firmware structure of these devices.

For example, is it possible in Google Home devices for firmware to be installed that would enable audio monitoring without blinking those activity lights? Could firmware changes keep the microphone active even if the manual disable button or switch has been triggered by the user, causing the device mic to appear disabled when it was really still enabled?

These are largely specific hardware/firmware design questions, and so far my attempts to obtain information about these aspects from Google have been unsuccessful.

If you were hoping for a brilliant, clear-cut, “This will solve all of these problems!” recommendation here, I’m afraid that I must disappoint you.

Beyond the obvious suggestion that the hardware of these devices should be designed so that “invisible bugging” potentials are minimized, and the even more obvious (but not very practical) suggestion of unplugging the units if and when you’re concerned (’cause let’s face it, the whole point is for them to be on the ready when you need them!), I don’t have any magic wand solutions to offer here. 

Ultimately, all of us — firms like Google and Amazon, their users, and the community at large — need to figure out where to draw the lines to achieve a reasonable balance between the vast positive potential of these devices and the very real potential risks that come with them as well.

Nobody said that this stuff was going to be easy. 

Be seeing you.


In the Amazon vs. YouTube War, Google is Right — and Wrong

You’ve probably heard that there’s an escalating “YouTube War” between Amazon and Google, one that has now led to Google cutting off users of Amazon’s Fire and Echo Show products from YouTube, leaving legions of confused and upset users in its wake.

I’m no fan of Amazon. I intensely dislike their predatory business practices and the way that they treat many of their workers. I studiously avoid buying from Amazon.

Google has a number of completely legitimate grievances with Amazon. The latter has refused to carry key Google products that compete with Amazon products, while still designing those Amazon devices to access Google services like YouTube. Amazon has also played fast and loose with the YouTube Terms of Service in a number of ways.

I can understand Google finally getting fed up with this kind of Amazon behavior. Google is absolutely right to be upset.

However, Google is wrong in the approach that they’ve taken to deal with these issues, and this may do them considerable ongoing damage, even long after the current dispute is settled.

Cutting those Amazon device users off from YouTube with essentially a “go access YouTube some other way” message is not buying any goodwill from those users — exactly the opposite, in fact.

These users aren’t concerned about Google’s marketing issues — they just want to see the programming that they bought their devices to access, and YouTube is a major part of that.

As the firm that’s cutting off these users from YouTube, it’s Google that will take the brunt of user anger, and the situation unnecessarily sows distrust about Google’s behavior in the future. This can impact users’ overall feelings about Google in negative ways that go far beyond YouTube.

Worse, this kind of situation provides long-term ammunition to Google haters who are looking for any excuse to try to bring antitrust or other unwarranted regulatory focus onto Google itself.

Essentially, Amazon laid a trap for Google in this instance, and Google walked right into it.

There is a much better approach available to Google for dealing with this.

Rather than cutting off those Amazon device users, permit them to continue accessing YouTube, but only after presentation of a brief interstitial very succinctly explaining Google’s grievances with Amazon. Rather than making enemies of those users, bring them around to an understanding of Google’s point of view.

But above all, don’t punish those Amazon users by cutting them off from YouTube as you’re doing now.

Your righteous battle is with Amazon. But those Amazon device users should be treated as your allies in this war, not as your enemies!

And that’s the truth.


Google Agrees: It’s Time for More Humans Fighting YouTube Hate and Child Exploitation Videos

Regular readers of my missives have probably grown tired of my continuing series of posts relating to my concerns regarding particular categories of videos that have increasingly contaminated Google’s YouTube platform.

Very briefly: I’m one of YouTube’s biggest fans. I consider YT to be a wonder of the world, both technologically and in terms of vast swathes of its amazing entertainment and educational content. I would be horrified to see YouTube disappear from the face of the planet.

That said, you know that I’ve been expressing increasing concerns regarding extremist and other hate speech, child exploitation, and dangerous prank/dare videos that increasingly proliferate and persist on YouTube, often heavily monetized with ads.

I have never ascribed evil motives to Google in these regards. YouTube needs to bring in revenue both for its own operations and to pay creators — and the absolute scale of YouTube is almost unimaginably enormous.

At Google’s kind of scale, it’s completely understandable that Google has a strong institutional bias toward automated, algorithmic systems to deal with content of all sorts.

However, I have long argued that the changing shape of the Internet requires more humans to “ride herd” on those algorithms, to fill in the gaps where algorithms tend to fall short, and to provide critical sanity checking. This is of course an expensive proposition, but my view has been that Google has the resources to do this, given the will to do so.

I’m pleased to report that Google/YouTube has announced major moves in exactly these sorts of directions that I have long recommended:

YouTube will increase its total of *human* video reviewers to over 10,000 in 2018, will expand liaisons with outside expert groups and individuals, and will tighten advertising parameters (including more human curation), among other very positive steps.

At YouTube scale, successful execution of these plans will be anything but trivial, but as I’ve said about various issues, Google *can* do this!

My thanks to the YouTube teams, and especially to YouTube CEO Susan Wojcicki, for these very welcome moves that should help to assure a great future both for YouTube and its many users!


Easy Access to SSL Certificate Information Is Returning to Google’s Chrome Browser

You may recall that early this year I expressed concerns that direct, obvious access to SSL encryption security certificate information had been removed from Google’s Chrome browser:

“Here’s Where Google Hid the SSL Certificate Information That You May Need” –

As I noted then, there are frequent situations where it’s extremely useful to inspect the SSL certificate info, because the use of SSL (https: — that is, the mere presence of a “green padlock” on a connection) indicates that the connection is encrypted, but that’s all. The padlock alone does not render any sort of judgment regarding the authenticity or legitimacy of the site itself — but the details in an SSL cert can often provide useful insight into these sorts of aspects.
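For those who want to examine those same certificate fields outside the browser, Python’s standard library can retrieve them directly. This is just a minimal sketch of my own (not anything Chrome-specific); the field layout follows the dict format that Python’s `ssl` module returns from `getpeercert()`:

```python
import socket
import ssl


def fetch_peer_cert(hostname, port=443, timeout=10):
    """Connect over TLS and return the server's certificate as the dict
    decoded by Python's ssl module -- the same kind of information Chrome
    shows behind the padlock icon."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()


def summarize_cert(cert):
    """Reduce a decoded certificate to the fields most useful for judging
    a site's identity: who the cert was issued to, by whom, and its
    validity window."""
    # 'subject' and 'issuer' are tuples of RDNs, each a tuple of
    # (name, value) pairs; flatten them into plain dicts.
    subject = dict(rdn[0] for rdn in cert["subject"])
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return {
        "subject_cn": subject.get("commonName"),
        "issuer_org": issuer.get("organizationName"),
        "not_before": cert["notBefore"],
        "not_after": cert["notAfter"],
    }


# Example usage (requires network access):
#   info = summarize_cert(fetch_peer_cert("www.google.com"))
#   print(info["subject_cn"], info["issuer_org"], info["not_after"])
```

None of this replaces the browser UI, of course — it simply shows that the cert data behind the padlock is ordinary, inspectable information.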

After the change to Chrome that I reported last January, it was no longer possible to easily obtain the certificate data by simply doing the obvious thing — clicking the green padlock and then making one additional click to see the cert details. It was still possible to access that data, but doing so required manipulation of the browser’s “developer tools” panels, which are (understandably) not obvious to most users.

I’m pleased to report that easy access to the SSL cert data via the green padlock icon is returning to Chrome. It is already present in the Linux beta version that I run, and should reach Chrome’s stable versions on all platforms in due course.

With this feature in place, you simply click the green padlock icon and then click the obvious “Valid” link under the “Certificate” section at the top. The SSL cert data opens right up for quick and direct inspection. The version of Chrome that you’re running currently may not have this feature implemented quite yet, but it’s on the way.

My thanks to the Chrome team!