3D Printed Wall Mount for the Full-Sized Google Home

Since the 3D printed wall mount for my Google Home Mini worked out quite nicely (details here), I went ahead yesterday and printed a different type of wall mount for my original Google Home (which is better suited for music listening, given its larger and more elaborate speaker system — it even has decent bass response).

Performance of the Google Home when mounted on the wall seems exemplary, both in terms of audio reproduction and of its integral microphones.

The surface of the mount meshes with the contours on the bottom of the Google Home unit, providing additional stability.

At the end of this post, I’ve included photos of the printed mount itself, the mount on the wall with Google Home installed, and a very brief video excerpt of the printing process. 

The model for this mount is from “westlow” at: https://www.thingiverse.com/thing:2426589 (I used the “V2” version).

As always, if you have any questions, please let me know. 

Be seeing you.

–Lauren–

(Please click images to enlarge.)

Some Background on 3D Printing Gadgets for the Google Home Mini

UPDATE (October 30, 2017): 3D Printed Wall Mount for the Full-Sized Google Home

– – –

Over on Google+ I recently posted several short items regarding a tiny plastic mount that I 3D printed a couple of days ago to hang my new Google Home Mini on my wall (see 2nd and 3rd photos below, for the actual model file please see: https://www.thingiverse.com/thing:2576121 by “Jakewk13”).

This virtually invisible wall mount is perfectly designed for the Mini and couldn’t be simpler. Technically, the Mini is upside down when you use this mount, but of course it works just fine. Thanks Google for sending me a Mini for my ongoing experiments!

I’ve since received quite a few queries about my printing facilities, such as they are.

So the 1st photo below shows my 3D printer setup. Yes, it looks like industrial gear from one of the “SAW” torture movies, but I like it that way. This is an extremely inexpensive arrangement, where I make up for the lack of expensive features with a fair degree of careful ongoing calibration and operational skill, but it serves me pretty well. I can’t emphasize enough how critical accurate calibration is with 3D printing, and there’s a significant learning curve involved.

The basic unit started as a very cheap Chinese clone printer kit that I built and mounted on that heavy board for stability. Then, hardware guy that I’ve always been, I started modifying. As is traditional, many of the additions and modifications were themselves printed on that printer. This includes the filament reel support brackets, calibration rods, filament guide, inductive sensor mount, and more. I installed an industrial inductive sensor at the forward left of the black extruder unit, to provide more precise Z-axis homing and to enable automatically adjusted print extrusion leveling.
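
For readers unfamiliar with how sensor-based leveling works under the hood: firmware of this type probes a grid of points on the bed, then interpolates a Z correction for every position the nozzle visits. Here is a simplified Python sketch of that interpolation. This is my own illustration, not the actual Repetier implementation, and the probe values are made up:

```python
def z_correction(x, y, grid, bed_w, bed_h):
    """Bilinearly interpolate a Z offset from a probed grid.

    grid is a list of rows of measured Z offsets (mm), evenly
    spaced across a bed of size bed_w x bed_h (mm).
    """
    rows, cols = len(grid), len(grid[0])
    # Convert the position into fractional grid coordinates
    gx = x / bed_w * (cols - 1)
    gy = y / bed_h * (rows - 1)
    x0 = min(int(gx), cols - 2)
    y0 = min(int(gy), rows - 2)
    tx, ty = gx - x0, gy - y0
    # Blend the four surrounding probe points
    top = grid[y0][x0] * (1 - tx) + grid[y0][x0 + 1] * tx
    bot = grid[y0 + 1][x0] * (1 - tx) + grid[y0 + 1][x0 + 1] * tx
    return top * (1 - ty) + bot * ty

# Hypothetical measured offsets (mm) at the four bed corners
probed = [[0.00, 0.10],
          [0.20, 0.30]]
print(round(z_correction(100, 100, probed, 200, 200), 3))  # center of a 200x200 bed
```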

I replaced the original cruddy firmware with a relatively recent Repetier dev build, which also enabled the various inductive sensor functions. I had to compile out the SD card support to make room for this build in my printer controller — but I never used the SD card on the printer (intended for standalone printing) anyway.

On the build platform, I use ordinary masking tape, which gets a thin coat of glue stick immediately after I put the tape down. The tape and glue can last for quite a few prints before needing replacement.

I mainly print PLA filament. I never touch ABS — it warps, and its fumes smell awful and are highly toxic.

I almost always print at an extruder temperature of 205C and a bed temperature of 55C.
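
In G-code terms, those targets correspond to the standard temperature commands: M104/M109 for the extruder and M140/M190 for the bed. A tiny illustrative Python helper that builds such a heat-up preamble (the helper itself is hypothetical, though the G-code commands are standard):

```python
def start_gcode(extruder_c=205, bed_c=55):
    """Build a minimal heat-up preamble.

    M140 sets the bed target and M104 the hotend target (both
    non-blocking); M190/M109 then wait until each target is reached.
    """
    return "\n".join([
        f"M140 S{bed_c}",       # start heating the bed
        f"M104 S{extruder_c}",  # start heating the hotend
        f"M190 S{bed_c}",       # wait for the bed to reach target
        f"M109 S{extruder_c}",  # wait for the hotend to reach target
    ])

print(start_gcode())
```

Starting both heaters before waiting on either lets the bed and hotend warm up in parallel, which shaves a minute or two off every print.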

The printer is driven by Repetier Server, which runs on Ubuntu 14.04 via Crouton on an older CrOS Chromebook. I typically use Cura on Linux for model slicing.

I know, it’s all laughably inexpensive and not at all fancy by most people’s standards, but it does the job for me when I want to hang a Google gadget on the wall or need the odd matter-antimatter injector guide servo nozzle in a hurry.

Yep, it really is the 21st century.

–Lauren–

(Please click images to enlarge.)

Understanding Google’s New Advanced Protection Program for Google Accounts


I’ve written many times about the importance of enabling 2-factor authentication on your Google accounts (and other accounts, where available) as a basic security measure, e.g. in “Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good” — https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi — and in other posts too numerous to list here.  

Given this history, I’ve now begun getting queries from readers regarding Google’s newly announced and very important “Advanced Protection Program” (APP) for Google accounts — most queries being variations on “Should I sign up for it?”

The APP description and “getting started” page is at:

https://landing.google.com/advancedprotection/

It’s a well-designed page (except for the now-usual atrocious low-contrast Google text font) with lots of good information about this program. It really is a significant increase in security that ordinary users can choose to activate, and yes, it’s free (except for the cost of purchasing the required physical security keys, which are available from a variety of vendors).

But back to that question. Should you actually sign up for APP?

That depends.

For the vast majority of Google users, the answer is likely no, you probably don’t actually need it, given the additional operational restrictions that it imposes.

However, especially for high-profile users who are most likely to be subjected to specifically targeted account attacks, APP is pretty much exactly what you need, and will provide you with a level of account security typically unavailable to most (if any) users at other commercial sites.

Essentially, APP takes Google’s existing 2-factor paradigm and restricts it to only its highest-security components. In conventional 2-factor use on Google accounts, USB/Bluetooth security keys are the most secure option, but other options — SMS text messages, to name just one — continue to be available as well. That flexibility works well for most users and minimizes the chances of their accidentally locking themselves out of their Google accounts.

APP requires the use of these security keys — the other options are no longer available. If you lose the keys, or can’t use them for some reason, you’ll need to use a special Google account recovery procedure that could take up to several days to complete — a rigorous process to assure that it’s really you trying to regain access to the account.

There are other security-conscious restrictions to your account as well if you enable APP. For example, third-party apps’ access to your account will be significantly restricted, preventing a range of situations where users might otherwise accidentally grant overly broad permissions from outside apps to Google accounts.

It’s important to remember that there are situations where you likely won’t be able to use security keys. Public computers (and ironically, computers in high-security environments) often have unusable USB ports and Bluetooth locked in a disabled mode. These can be important considerations for some users.

Cutting to the chase, Google’s standard 2-factor systems are usually going to be quite good enough for most users and offer maximum flexibility — of course only if you enable them — which, yeah, you really should have done by now!

But in special cases for particularly high-profile or otherwise vulnerable Google users, the Advanced Protection Program could be the proverbial godsend that’s exactly what you’ve been hoping for.

As always, feel free to contact me if you have any additional questions about this.

Be seeing you.

–Lauren–

Explaining the Chromebook Security Scare in Plain English: Don’t Panic!

Yesterday I pushed out to various of my venues a Google notice regarding a security vulnerability relating to a long list of Chrome OS based devices (that is, “CrOS” on Chromebooks and Chromeboxes). That notice (which is titled more like a firmware upgrade advisory than a security warning per se) is at:

https://sites.google.com/a/chromium.org/dev/chromium-os/tpm_firmware_update

While that page is generally very well written, it is still quite technical in its language. Unfortunately, although I thought it was important yesterday to disseminate it as quickly as possible, I was not in a position at the time to write any significant additional commentary to accompany those postings.

Today my inbox is filled with concerned queries regarding this issue from Chromebook and Chromebox users who found that Google page to be relatively opaque.

Does this bug apply to us? Should we rush to upgrade? What happens if something goes wrong? Should our school be concerned — we’ve got lots of students using Chromebooks, what should we do? Help!

Here’s the executive summary — perhaps the way that Google should have said it: DON’T PANIC! — especially if you have strong passwords. Most of you don’t really have to worry much about this one. But please do keep reading, especially and definitely if you’re a corporate user or someone else in a particularly high security environment.

This is not a large-scale attack vulnerability, where millions of devices can be easily compromised. In fact, even in worst case scenarios, the attack is computationally “expensive” — meaning that much more “targeted” attacks, e.g., against perceived “high-value” individuals, would be the focus.

Google has already taken steps in their routine Chrome OS updates to mitigate some aspects of this problem and to make it an even less practical attack from the standpoint of most individual users, though the vulnerability cannot be completely closed via this approach for everyone.

The underlying problem is a flaw in the firmware (the programming) of a specific chip in these devices, called a TPM. Google didn’t expand that acronym in their notice, so I will — it stands for Trusted Platform Module.

The TPM is a crucial part of the cryptographic system that protects the data on Chrome OS devices. It’s sort of the “roach motel” of security chips — certain important crypto key data gets in there but can’t get out (yet can still be utilized appropriately by the system).

The TPM firmware flaw in question makes the possibility of “brute force” guessing of internal crypto keys more practical in a targeted sense, but again, not at large scale. And in fact, if you have a weak password, that’s a far greater vulnerability for most users than this TPM bug ever would have been. Google’s mitigations of this problem already provide good protection for most individual users with strong passwords.
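
A rough back-of-the-envelope calculation shows why search cost, not mere theoretical possibility, is what matters here. The numbers below are entirely made up for illustration; they are not figures from Google’s advisory:

```python
def years_to_search(keyspace_bits, guesses_per_second):
    """Expected brute-force time in years, assuming that on
    average half of the keyspace must be searched."""
    expected_guesses = 2 ** (keyspace_bits - 1)
    seconds = expected_guesses / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

# A full-strength key is effectively unsearchable at any plausible rate...
print(years_to_search(128, 1e9))
# ...while a flaw that effectively shrinks the keyspace can bring the cost
# down into "expensive, but feasible for a targeted attack" territory --
# which is exactly why only high-value targets are the practical concern.
print(years_to_search(60, 1e9))
```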

C’mon, switch to a strong password already! You’ll sleep better.

It’s really in high security corporate environments and similar situations where the TPM flaw is of more concern, particularly where individual users may be reasonably expected to be targets of security attacks.

Where firms or other organizations use their own crypto certificates via the TPM to control corporate or other access (or use “Verified Access” for enterprise-managed authentication), the TPM bug definitely merits quite serious consideration.

Ordinary users can upgrade their TPM firmware if they wish (in enterprise-managed environments, you will likely need administrative permission to perform this). The procedure uses the “powerwash” function of the devices, as explained on the Google page.

But as also noted there, this is not a risk-free procedure. Powerwash wipes all user data from the device, and devices can fail to boot if things go wrong during the process. There are usually ways to recover even from that eventuality, but you probably don’t want to be in that position if you can reasonably avoid it.

For the record, I am personally not upgrading the TPM firmware on the Chrome OS devices that I use or manage at this time. They all have decent passwords, and especially for remote users I won’t risk the powerwash sequence for now.

I am of course monitoring the situation and will re-evaluate as necessary. Google is working on a way to update the TPM firmware without a powerwash — if that comes to pass it will significantly change the equation. And of course if I had to use any of these devices in an environment where TPM-based crypto certificates were required, I’d consider a powerwash for TPM firmware upgrade to be a mandatory prerequisite.

In the meantime, be aware of the situation, think about it, but once again, don’t panic!

–Lauren–

Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems


I’m really not in a good mood right now and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussions of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!”, my caller reported seeing highlighted as a discussion point for fabricating lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms surreptitiously financed by Putin’s minions, but also abuse of “microtargeting” ad systems by USA-based operations — are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, physical mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to ever more precisely target individuals, to sell merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Since the level of interaction granularity is so deep on major search and social media sites, the precision ad targeting opportunities become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the increasingly complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, they tended to be (but certainly were not always) comparatively transparent in terms of who was paying for those ads, and the ads themselves were often aimed at explicit campaign fundraising or pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but rather to literally disseminate ideas — where no actual conversion — no actual purchase per se — was involved at all. Merely showing such ads to as many carefully targeted users as possible is the usual goal, though just blasting out an ad willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — contents to differently targeted users, who each only see what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, these firms’ social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and left have engaged in this reprehensible behavior, but the right to a much greater and more effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

Like various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).

Specifically in the context of ads, an obvious necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should have maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate the same ads) can better judge their origins and the veracity of those ads’ contents.

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas — potentially including utterly false ones. This is where the vast majority of fake news, false propaganda, and outright lies have appeared in this context — a category that Russian government trolls apparently learned how to play like a concert violin.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities — for example, to learn that the fictional group in Fresno actually runs out of the Kremlin, or is really some shady racist, alt-right group.

On a parallel track, many of these ads should be blocked before they reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that they will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory; we’ll be interested to see how well it works in practice.

Major firms’ online ad platforms will undoubtedly need significant and in some cases fairly major changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–

Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem


In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute mean that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is trying again to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.

In the months since last March when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly in respect to monetization and search visibility of such videos. 

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) when major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics that are the most likely to suffer from relatively routine “contamination” of trending lists and results from fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing human review on a frequent basis (appropriate review intervals to be determined). 

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.
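
As a toy illustration of that freeze-and-review idea, here is a minimal Python sketch. This is entirely my own illustration of the proposal above, not anything Google has described implementing:

```python
class TrendingList:
    """Toy model of a trending list that can be frozen when a major
    news event occurs, holding its pre-event state until a human
    reviewer approves changes (or rolls them back)."""

    def __init__(self, videos):
        self.videos = list(videos)
        self.frozen = False
        self._snapshot = None

    def freeze(self):
        # Snapshot the pre-event state so it can be restored later
        self.frozen = True
        self._snapshot = list(self.videos)

    def update(self, videos, human_approved=False):
        # While frozen, only human-approved updates are accepted
        if self.frozen and not human_approved:
            return False
        self.videos = list(videos)
        return True

    def rollback(self):
        # Restore the pre-freeze snapshot, ejecting anything that
        # snuck in before the freeze took effect
        if self._snapshot is not None:
            self.videos = list(self._snapshot)

t = TrendingList(["reliable-report", "eyewitness-clip"])
t.freeze()
t.update(["fake-news-video"])  # rejected: no human approval
print(t.videos)
t.update(["verified-update"], human_approved=True)
print(t.videos)
```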

By restricting this kind of human intervention to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems must already notify specified Googlers when hardware or software needs immediate attention.

Much the same kind of priority-based paradigm should apply to quickly bring humans into the loop when major news events otherwise could trigger rapid degeneration of trending lists and results.

–Lauren–

How to Fake a Sleep Timer on Google Home


UPDATE (October 17, 2017): Google Home, nearly a year after its initial release, finally has a real sleep timer! Some readers have speculated that this popular post that you’re viewing right here somehow “shamed” Google into final action on this. I wouldn’t go that far. But I’ll admit that it’s somewhat difficult to stop chuckling a bit right now. In any case, thanks to the Home team!

– – –

I’ve long been bitching about Google Home’s lack of a basic function that clock radios have had since at least the middle of the last century — the classic “sleep timer” for playing music until a specified time or until a specific interval has passed. I suspect my rants about this have become something of a chuckling point around Google by now.

Originally, sleep timer type commands weren’t recognized at all by GH, but eventually it started admitting that the concept at least exists.

A somewhat inconvenient but seemingly serviceable way to fake a sleep timer is now possible with Google Home. I plead guilty, it’s a hack. But here we go.

Officially, GH still responds with “Sleep timer is not yet supported” when you give commands like “Stop playing in an hour.”

BUT, a new “Night Mode” has appeared in GH firmware, at least since revision 99351 (I’m in the preview program, you may or may not have that revision yet, or it may have appeared earlier in some cases).

This new mode — in the device settings reachable through the Home app — permits you to specify a maximum volume level during specified days and hours. While the description doesn’t say this explicitly, it turns out that this affects music streams as well as announcements (except for alarms and timers). And, you can set the maximum volume for this mode to zero (or turn on the Night Mode “Do Not Disturb” setting, which appears to set the volume directly to zero).

This means that you can specify a Night Mode activation time — with volume set to minimum — when you want your fake “sleep timer” to shut down the audio. The stream will keep playing — using data of course — until the set Night Mode termination time or until you manually (e.g., by voice command) set a higher volume level (for example, in the morning). Then you can manually stop the stream if it’s still playing at that point.

Yep, a hack, but it works. And it’s the closest we’ve gotten to a real sleep timer on Google Home so far.

Feel free to contact me if you need more information about this.

–Lauren–