Explaining the Chromebook Security Scare in Plain English: Don’t Panic!


Yesterday I pushed out to various of my venues a Google notice regarding a security vulnerability affecting a long list of Chrome OS-based devices (that is, “CrOS” on Chromebooks and Chromeboxes). That notice (which is titled more like a firmware upgrade advisory than a security warning per se) is at:

https://sites.google.com/a/chromium.org/dev/chromium-os/tpm_firmware_update

While that page is generally very well written, it is still quite technical in its language. Unfortunately, while I thought it was important yesterday to disseminate it as quickly as possible, I was not in a position to write any significant additional commentary to accompany those postings at that time. 

Today my inbox is filled with concerned queries about this issue from Chromebook and Chromebox users who found that Google page to be relatively opaque.

Does this bug apply to us? Should we rush to upgrade? What happens if something goes wrong? Should our school be concerned — we’ve got lots of students using Chromebooks, what should we do? Help!

Here’s the executive summary — perhaps the way that Google should have said it: DON’T PANIC! — especially if you have strong passwords. Most of you don’t really have to worry much about this one. But please do keep reading, especially and definitely if you’re a corporate user or someone else in a particularly high security environment.

This is not a large-scale attack vulnerability, where millions of devices can be easily compromised. In fact, even in worst case scenarios, the attack is computationally “expensive” — meaning that much more “targeted” attacks, e.g., against perceived “high-value” individuals, would be the focus.

Google has already taken steps in their routine Chrome OS updates to mitigate some aspects of this problem and to make the attack even less practical for most individual users, though the vulnerability cannot be completely closed via this approach for everyone.

The underlying problem is a flaw in the firmware (the programming) of a specific chip in these devices, called a TPM. Google didn’t expand that acronym in their notice, so I will — it stands for Trusted Platform Module.

The TPM is a crucial part of the cryptographic system that protects the data on Chrome OS devices. It’s sort of the “roach motel” of security chips — certain important crypto key data gets in there but can’t get out (yet can still be utilized appropriately by the system).

The TPM firmware flaw in question makes “brute force” guessing of internal crypto keys more practical in a targeted sense, but again, not at large scale. In fact, if you have a weak password, that’s a far greater vulnerability for most users than this TPM bug ever would have been. Google’s mitigations already provide good protection for most individual users with strong passwords.

C’mon, switch to a strong password already! You’ll sleep better.
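To make the strong password point concrete, here’s a quick back-of-the-envelope sketch. The guess rate and example passwords below are my own illustrative assumptions, not figures from Google’s advisory or from any real attack:

```python
# Back-of-the-envelope only: the guess rate is an assumed number for a
# well-funded offline attacker, not a measured figure for this TPM flaw.

def search_space(alphabet_size: int, length: int) -> int:
    # Worst-case number of candidate passwords an attacker must try.
    return alphabet_size ** length

weak = search_space(26, 6)     # six lowercase letters
strong = search_space(72, 12)  # twelve mixed-case letters, digits, symbols

GUESSES_PER_SECOND = 1e10      # assumption, for illustration
SECONDS_PER_YEAR = 3.15e7

print(f"weak:   {weak:.3e} candidates, exhausted in well under a second")
print(f"strong: {strong:.3e} candidates, "
      f"~{strong / GUESSES_PER_SECOND / SECONDS_PER_YEAR:.0e} years to exhaust")
```

The point: for most users, password strength dominates the practical math long before this TPM flaw even enters the picture.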

It’s really in high security corporate environments and similar situations where the TPM flaw is of more concern, particularly where individual users may be reasonably expected to be targets of security attacks.

Where firms or other organizations are using their own crypto certificates via the TPM to allow corporate or other access (or use “Verified Access” for enterprise-managed authentication), the TPM bug definitely merits quite serious consideration.

Ordinary users can upgrade their TPM firmware if they wish (in enterprise-managed environments, you will likely need administrative permission to perform this). The procedure uses the “powerwash” function of the devices, as explained on the Google page.

But as also noted there, this is not a risk-free procedure. Powerwash wipes all user data from the device, and devices can fail to boot if things go wrong during the process. There are usually ways to recover even from that eventuality, but you probably don’t want to be in that position if you can reasonably avoid it.

For the record, I am personally not upgrading the TPM firmware on the Chrome OS devices that I use or manage at this time. They all have decent passwords, and especially for remote users I won’t risk the powerwash sequence for now.

I am of course monitoring the situation and will re-evaluate as necessary. Google is working on a way to update the TPM firmware without a powerwash — if that comes to pass it will significantly change the equation. And of course if I had to use any of these devices in an environment where TPM-based crypto certificates were required, I’d consider a powerwash for TPM firmware upgrade to be a mandatory prerequisite.

In the meantime, be aware of the situation, think about it, but once again, don’t panic!

–Lauren–

Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems


I’m really not in a good mood right now and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussions of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!” was, my caller reported, being highlighted as a discussion point for fabricating lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms that were surreptitiously financed by Putin’s minions, but also the abuse of “microtargeting” ad systems by USA-based operations — are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, physical mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to ever more precisely target individuals, to sell merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Since the level of interaction granularity is so deep on major search and social media sites, the precision ad targeting opportunities become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the ever more complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, they tended to be (but certainly were not always) comparatively transparent in terms of who was paying for those ads, and the ads themselves were often aimed at explicit campaign fundraising or pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but rather to literally disseminate ideas — where no actual conversion — no actual purchase per se — was involved at all. Showing ads to as many carefully targeted users as possible is the usual goal, though just blasting out an ad willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — content to differently targeted users, each of whom sees only what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.
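To illustrate the mechanism, here’s a deliberately toy sketch. The user attributes, targeting predicates, and ad copy are all invented for this example; real ad platforms are vastly more elaborate:

```python
from dataclasses import dataclass

@dataclass
class User:
    region: str
    interests: set

# One campaign, contradictory creatives: each user sees only the version
# matched to their profile, and never sees the other one.
CREATIVES = [
    (lambda u: "hunting" in u.interests,
     "Candidate X will defend your way of life!"),
    (lambda u: "environment" in u.interests,
     "Candidate X: the only green choice!"),
]

def select_ad(user: User):
    for matches, ad_text in CREATIVES:
        if matches(user):
            return ad_text
    return None  # users outside all targeted segments see nothing at all

print(select_ad(User("OH", {"hunting"})))      # one message...
print(select_ad(User("CA", {"environment"})))  # ...and its opposite
```

Neither user has any way to know that the other version even exists, which is exactly what makes this form of targeting so opaque to outside observers.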

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, these firms’ social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and left have engaged in this reprehensible behavior, but the right to a much greater and more effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

Like various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).

Specifically in the context of ads, an obvious necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should have maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate the same ads) can better judge their origins and the veracity of those ads’ contents.

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas — potentially including utterly false ones. This is where the vast majority of fake news, false propaganda, and outright lies have appeared in this context — a category that Russian government trolls apparently learned how to play like a concert violin.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities — for example, to learn that the fictional group in Fresno actually runs out of the Kremlin, or is really some shady racist, alt-right group.
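As a thought experiment, such a disclosure might take the form of a machine-readable record attached to every ad, along these lines (the field names and values here are entirely my invention, not any real platform API):

```python
import json

# Hypothetical per-ad disclosure record; every field here is illustrative.
disclosure = {
    "ad_id": "a1b2c3",
    "paid_by": "Pottsylvania Freedom Fighters LLC",
    "funding_chain": [                 # upstream payers, as far as traceable
        "Pottsylvania Freedom Fighters LLC",
        "Shell Holdings Ltd. (unverified)",
    ],
    "buyer_identity_verified": False,  # unverified buyers flagged prominently
    "targeting_criteria": ["region:US-CA-Fresno", "interest:politics"],
}

print(json.dumps(disclosure, indent=2))
```

The value of something like this is less in any single record than in letting third-party auditors correlate records across many ads and many buyers.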

On a parallel track, many of these ads should be blocked before they reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that they will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory, and we’ll be interested to see how well this works in practice.

Major firms’ online ad platforms will undoubtedly need significant, and in some cases fairly major, changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–

Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem


In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute mean that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is trying again to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.
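A minimal sketch of what that kind of source weighting might look like (my own speculation about the mechanics; Google has not published its actual algorithm, and these categories and weights are invented):

```python
# Hypothetical authority weights; the categories and values are illustrative.
AUTHORITY = {"established_news": 1.0, "verified_eyewitness": 0.8, "unknown": 0.2}

def trending_score(views_per_hour: float, source_type: str) -> float:
    # Engagement still matters, but unknown sources are heavily discounted.
    return views_per_hour * AUTHORITY.get(source_type, 0.2)

# The trade-off noted above: a legitimate eyewitness video with strong raw
# engagement can still lose out to a slower "authoritative" upload.
print(trending_score(50_000, "unknown"))           # 10000.0
print(trending_score(20_000, "established_news"))  # 20000.0
```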

In the months since last March when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly with respect to monetization and search visibility of such videos.

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) when major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics that are the most likely to suffer from relatively routine “contamination” of trending lists and results from fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing human review on a frequent basis (appropriate review intervals to be determined). 

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.
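Here’s a toy sketch of that freeze-and-review flow, entirely my own construction, just to make the proposal concrete:

```python
class TrendingTopic:
    def __init__(self, videos):
        self.videos = list(videos)  # current ranked results
        self.snapshot = None        # pre-event state, set when frozen

    def freeze(self):
        # Called automatically when a major news event is detected.
        self.snapshot = list(self.videos)

    def propose_update(self, new_videos, human_approved=False):
        if self.snapshot is not None and not human_approved:
            # Frozen: reject unvetted changes and roll back to the
            # pre-event ranking until an on-call human signs off.
            self.videos = list(self.snapshot)
            return False
        self.videos = list(new_videos)
        return True

topic = TrendingTopic(["legit report A", "legit report B"])
topic.freeze()  # major event detected: lock in the pre-event state

print(topic.propose_update(["fake news video", "legit report A"]))   # False
print(topic.propose_update(["vetted update"], human_approved=True))  # True
```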

By restricting this kind of human intervention to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems must already notify specified Googlers when hardware or software needs immediate attention.

Much the same kind of priority-based paradigm should apply to quickly bring humans into the loop when major news events otherwise could trigger rapid degeneration of trending lists and results.

–Lauren–

How to Fake a Sleep Timer on Google Home


UPDATE (October 17, 2017): Google Home, nearly a year after its initial release, finally has a real sleep timer! Some readers have speculated that this popular post that you’re viewing right here somehow “shamed” Google into final action on this. I wouldn’t go that far. But I’ll admit that it’s somewhat difficult to stop chuckling a bit right now. In any case, thanks to the Home team!

– – –

I’ve long been bitching about Google Home’s lack of a basic function that clock radios have had since at least the middle of the last century — the classic “sleep timer” for playing music until a specified time or until a specific interval has passed. I suspect my rants about this have become something of a chuckling point around Google by now.

Originally, sleep timer type commands weren’t recognized at all by GH, but eventually it started admitting that the concept at least exists.

A somewhat inconvenient but seemingly serviceable way to fake a sleep timer is now possible with Google Home. I plead guilty, it’s a hack. But here we go.

Officially, GH still responds with “Sleep timer is not yet supported” when you give commands like “Stop playing in an hour.”

BUT, a new “Night Mode” has appeared in GH firmware, at least since revision 99351 (I’m in the preview program, you may or may not have that revision yet, or it may have appeared earlier in some cases).

This new mode — in the device settings reachable through the Home app — permits you to specify a maximum volume level during specified days and hours. While the description doesn’t say this explicitly, it turns out that this affects music streams as well as announcements (except for alarms and timers). And, you can set the maximum volume for this mode to zero (or turn on the Night Mode “Do Not Disturb” setting, which appears to set the volume directly to zero).

This means that you can specify a Night Mode activation time — with volume set to minimum — when you want your fake “sleep timer” to shut down the audio. The stream will keep playing — using data of course — until the set Night Mode termination time or until you manually (e.g., by voice command) set a higher volume level (for example, in the morning). Then you can manually stop the stream if it’s still playing at that point.
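In other words, the hack works because Night Mode caps the output volume rather than stopping playback. Roughly, as a toy model (not any real Google Home API):

```python
# Toy model only: illustrates why muted playback keeps consuming data.

def effective_volume(user_volume: int, night_mode_on: bool, night_cap: int) -> int:
    # Night Mode caps the output level; it does not stop the stream.
    return min(user_volume, night_cap) if night_mode_on else user_volume

stream_playing = True  # the stream keeps running regardless of volume
print(effective_volume(7, night_mode_on=True, night_cap=0))  # 0 -> silence
print(stream_playing)                                        # True -> still streaming
```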

Yep, a hack, but it works. And it’s the closest we’ve gotten to a real sleep timer on Google Home so far.

Feel free to contact me if you need more information about this.

–Lauren–

Major Porn Site’s Accessibility Efforts Put Google to Shame


You just can’t make this stuff up. By now you’ve perhaps become somewhat weary of my frequent discussions of Google’s growing accessibility failures, as their site documentation, blogs, and user interfaces continue to devolve in ways that severely disadvantage persons with less than perfect vision or who have other special needs — a rapidly growing category of users that Google just doesn’t seem to consider worthy of their attention. Please see:  “How Google Risks Court Actions Under the ADA (Americans with Disabilities Act)” — https://lauren.vortex.com/2017/06/26/how-google-risks-court-actions-under-the-ada-americans-with-disabilities-act — and other posts linked therein.

Now comes word of a major site that really understands these issues — that really does appreciate these accessibility concerns and is moving in the correct direction, in sharp contrast (no pun intended) with Google.

Here’s the kicker — it’s a porn site — supposedly the planet’s largest “adult entertainment” site, in fact. While I’m not a user of the site myself, tech news publications have confirmed the details of the accessibility press release that Pornhub distributed a few days ago.

Pornhub has rolled out world-class accessibility options across its platform, including visual element changes, narrated videos, and a wide array of keyboard shortcuts. “Enhancing the ability to contrast colors or to flip the text color and the background color or things like that can be very helpful to people who have low vision, which means they’re legally and functionally blind but they have some vision left,” says Danielsen [of Pornhub]. “Maybe they’re not using text to speech or braille to read the site.”

Bingo. They get it. These are the kinds of options I’ve been urging Google to provide for ages for their desktop services, to no avail.

At first glance, one might wonder why the hell a pornography site would be able to figure this out while Google, comprising some of the smartest people on the planet, keeps moving in exactly the wrong direction when it comes to major accessibility concerns.

Perhaps the explanation is that Google is great at technology but not so great when it comes to understanding the needs of people who aren’t in their target demographics. 

On the other hand, a successful porn site must by definition understand what their users of all stripes want and need. Porn is very much a people-oriented product.

I’m still convinced that the great Googlers at Google can get this right, if they choose to do so and allocate sufficient resources to that end. 

You’re probably expecting some sort of pun to close this post. Accessibility is a serious issue, and when a porn site tops Google in this important area, that’s a matter for sober deliberation, not for screwing around. After all, sometimes a cigar is indeed just a cigar.

–Lauren–