Risks of Google Home and Amazon Echo as 24/7 Bugs

One of the most frequent questions that I receive these days relates to the privacy of “smart speaker” devices such as Google Home, Amazon Echo, and similar devices now appearing from other firms.

As these devices proliferate around us — driven by broad music libraries, powerful AI assistants, and a rapidly growing pantheon of additional capabilities — should we have privacy concerns?

Or more succinctly, should we worry about these “always on” microphones being subverted into 24/7 bugging devices?

The short and quick answer is yes. We do need to be concerned.

The full and more complete answer is decidedly more complicated and nuanced.

The foundational truth is fairly obvious — if you have microphones around, whether they’re in phones, voice-controlled televisions, webcams, or the rising category of smart speaker devices, the potential for bugging exists, with an obvious focus on Internet-connected devices.

Indeed, many years ago I began writing about the risks of cellphones being used as bugs, quite some time before it became known that law enforcement was using such techniques, and well before smartphone apps made some forms of cellphone bugging trivially simple.

And while I’m an enthusiastic user of Google Home devices (I try to avoid participating in the Amazon ecosystem in any way), the potential privacy issues with smart speakers have always been present — and how we deal with them going forward is crucial.

For more background, please see:

“Why Google Home Will Change the World” – https://lauren.vortex.com/2016/11/10/why-google-home-will-change-the-world

Since I’m most familiar with Google’s devices in this context, I will be using them for my discussion here, but the same sorts of issues apply to all microphone-enabled smart speaker products regardless of manufacturer.

There are essentially two categories of privacy concerns in this context.

The first is “accidental” bugging: that is, the unintended collection of voice data due to hardware and/or firmware errors or defects.

An example of this occurred with the recent release of Google’s Home Mini device. Some early units could send a continuous stream of audio data to Google, rather than following the intended behavior of sending audio only after the “hot word” phrase (e.g., “Hey Google”) was detected locally on the unit. The cause related to an infrequently used touch control on the Mini, which Google quickly disabled with a firmware update.

Importantly, the Mini gave “clues” that something was wrong. The activity lights reportedly stayed on — indicating that voice data was being processed — and the recorded data showed up in users’ “My Activity” pages for inspection (and/or deletion). For more regarding Google’s excellent My Activity system, please see:

“The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

Cognizant of the privacy sensitivities surrounding microphones, smart speaker firms have taken proactive steps to try to avoid problems. As I noted above, the normal model is to send audio data to the cloud for processing only after the “hot word” phrase has been detected locally on the device.
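To make that model concrete, here is a minimal illustrative sketch in Python of how such hot word gating can work in principle. None of the names here (detect_hot_word, stream_to_cloud, the "<silence>" marker) correspond to actual Google or Amazon code; they are stand-ins for the device's real internals, under the assumption that a single code path is the only one that ever transmits audio.

# Illustrative sketch only -- not actual smart speaker firmware.
# Audio is examined locally, and only audio captured AFTER the hot word
# is detected ever leaves the device.

HOT_WORD = "hey google"

def detect_hot_word(frame):
    """Stand-in for on-device hot word detection (no network involved)."""
    return HOT_WORD in frame.lower()

def stream_to_cloud(audio):
    """Stand-in for the only code path that sends audio off the device."""
    print("SENT TO CLOUD:", audio)

def microphone_loop(frames, mic_enabled=True):
    sending = False
    for frame in frames:
        if not mic_enabled:
            continue                 # mute control engaged: nothing is processed
        if sending:
            if frame == "<silence>":
                sending = False      # end of the utterance; the gate closes
            else:
                stream_to_cloud(frame)   # only post-trigger audio is transmitted
        elif detect_hot_word(frame):
            sending = True           # the gate opens only after local detection

microphone_loop(["private conversation", "hey google", "what time is it", "<silence>"])

The privacy property depends entirely on that gate: if a defect (as with the early Minis) or a malicious change holds it open, audio flows continuously.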

Also, these devices typically include a button or switch that users can employ to manually disable the microphones.

I’ll note here that Google recently took a step backwards in this specific respect. Until that change, you could mute the microphone by voice command, e.g., “OK Google, mute the microphone.” Now Google has disabled this voice command, with the devices replying that you must use the switch or button to disable the mic.

This is not a pro-privacy move. I can understand Google wanting to avoid unintended microphone muting, which would then require users to manually operate the control on the device to re-enable the mic. But there are many situations where you need to quickly disable the mic (e.g., during phone calls, television programs, or other situations where Google Home is being discussed) to avoid false triggering when the hot word phrase happens to be mentioned.

The correct way of dealing with this situation would be to make voice-operated microphone muting capability an option in the Google Home app. It can default to off, but users who prefer the ability to quickly mute the microphone by voice should be able to enable such an option.
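As a rough illustration of what I have in mind (this is purely hypothetical; the allow_voice_mute setting and the handler below are my own invention, not any existing Google Home API), the voice command handler would simply consult a per-user preference before honoring a mute request:

# Hypothetical sketch of an opt-in "mute by voice" preference. This is my own
# suggestion for how such an option could behave, not an existing Google Home API.

user_settings = {"allow_voice_mute": False}    # proposed default: off

def handle_voice_command(command, mic_state):
    if command.strip().lower() == "mute the microphone":
        if user_settings["allow_voice_mute"]:
            mic_state["enabled"] = False       # honor the voice mute request
            return "Microphone muted. Use the button or switch to unmute."
        return "To mute the microphone, please use the button or switch on the device."
    return "Sorry, I can't help with that yet."    # other commands handled elsewhere

# A user who has enabled the (hypothetical) option in the Google Home app:
mic = {"enabled": True}
user_settings["allow_voice_mute"] = True
print(handle_voice_command("mute the microphone", mic))
print(mic)    # {'enabled': False}

With the default off, current behavior is unchanged; users who want quick voice muting would simply flip the preference.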

So far we’ve been talking about accidental bugging. What about “purposeful” bugging?

Now it really starts to get complicated. 

My explicit assumption is that the major firms producing these devices and their supporting infrastructures would never willingly engage in purposeful bugging of their own accord. 

Unfortunately, in today’s world, that’s only one aspect of the equation.

Could these categories of devices (from any manufacturer) be hacked into being full-time bugs by third parties unaffiliated with these firms? We have to assume that the answer in theory (and based on some early evidence) is yes, but we can also assume that these firms have made this possibility as unlikely as possible, and will continually work to make such attacks impractical.

Sad to say, of much more looming concern is governments going to these firms and ordering/threatening them into pushing special firmware to targeted devices (or perhaps to devices en masse) to enable bugging capabilities. In an age where an admitted racist, Nazi-sympathizing, criminal serial sexual predator resides in the White House and controls the USA law enforcement and intelligence agencies, we can’t take any possibilities off of the table. Google for one has a long and admirable history of resisting government attempts at overreach, but — as just one example — we don’t know how far the vile, lying creature in the Oval Office would be willing to go to achieve his evil ends.

Further complicating this analysis is a lack of basic public information about the hardware/firmware structure of these devices.

For example, is it possible in Google Home devices for firmware to be installed that would enable audio monitoring without blinking those activity lights? Could firmware changes keep the microphone active even after the user has triggered the manual disable button or switch, making the mic appear disabled when it was actually still live?

These are largely specific hardware/firmware design questions, and so far my attempts to obtain information about these aspects from Google have been unsuccessful.
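To illustrate why these design details matter, here is a toy model (again, purely my own illustration; I have no knowledge of the actual internals of any of these devices) contrasting an activity light that firmware merely chooses to turn on with one that is, in effect, wired directly to the microphone's power state:

# Toy model only; no knowledge of any actual device internals is implied.
# It contrasts two indicator designs: one where firmware decides whether the
# light comes on, and one where the light state is derived from mic power itself.

class FirmwareControlledLight:
    """The light is just another output; compromised firmware can skip it."""
    def __init__(self):
        self.mic_powered = False
        self.light_on = False

    def capture_audio(self, honest_firmware=True):
        self.mic_powered = True
        if honest_firmware:
            self.light_on = True        # a malicious update could simply omit this

class HardwiredLight:
    """The light tracks mic power directly; firmware cannot hide active listening."""
    def __init__(self):
        self.mic_powered = False

    @property
    def light_on(self):
        return self.mic_powered         # no code path can power the mic silently

    def capture_audio(self):
        self.mic_powered = True

covert = FirmwareControlledLight()
covert.capture_audio(honest_firmware=False)
print(covert.mic_powered, covert.light_on)    # True False -> invisible bugging possible

visible = HardwiredLight()
visible.capture_audio()
print(visible.mic_powered, visible.light_on)  # True True  -> listening is always visible

The same reasoning applies to the manual mute control: a switch that physically cuts microphone power offers a stronger guarantee than one that merely asks the firmware to stop listening.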

If you were hoping for a brilliant, clear-cut, “This will solve all of these problems!” recommendation here, I’m afraid that I must disappoint you.

Beyond the obvious suggestion that the hardware of these devices should be designed so that “invisible bugging” potentials are minimized, and the even more obvious (but not very practical) suggestion of unplugging the units if and when you’re concerned (’cause let’s face it, the whole point is for them to be at the ready when you need them!), I don’t have any magic wand solutions to offer here.

Ultimately, all of us — firms like Google and Amazon, their users, and the community at large — need to figure out where to draw the lines to achieve a reasonable balance between the vast positive potential of these devices and the very real potential risks that come with them as well.

Nobody said that this stuff was going to be easy. 

Be seeing you.

–Lauren–
