A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly

This evening, a reader of my blog post from earlier this year (“YouTube’s Dangerous and Sickening Cesspool of ‘Prank’ and ‘Dare’ Videos” – https://lauren.vortex.com/2017/05/04/youtubes-dangerous-and-sickening-cesspool-of-prank-and-dare-videos), asked if I knew about YouTube’s “laxative” prank and dare videos. Mercifully, I didn’t know about them. Unfortunately, now I do. And while it’s all too easy to plow the fields of toilet humor when it comes to topics like this, it’s really not at all a funny subject.

In fact, it can be deadly.

Some months back I had heard about a boy who — on a dare — ate 25 laxative brownies in one hour. The result was near total heart and kidney failure. He survived, but just barely.

What I didn’t realize until today is that this was far from an isolated incident, and that there is a stunningly vast corpus of YouTube videos explicitly encouraging such dares — and even worse, subjecting innocent victims to “pranks” along very much the same lines.

Once I began to look into this category, I was shocked by its sheer scope.  For example, a YouTube search for:

laxative prank

currently yields some 132,000 results. Of those, over 2,000 were uploaded in the last month, over 300 in the last week, and 10 just today!

As usual, it’s difficult to know which of these videos are fake and which are real. But that hardly matters, because virtually all of them have the effect of encouraging impressionable viewers to duplicate these disgusting and dangerous feats.

Many of these YouTube videos are very professionally and slickly produced, and often are on YouTube channels with very high subscriber counts. It also appears common for these channels to specialize in producing a virtually endless array of other similar videos in an obvious effort to generate a continuing income stream — which of course is shared with Google itself.

Is there any possible ethical justification for these videos being hosted by Google, and in many cases also being directly monetized?

No, there is not.

And this is but the tip of the iceberg.

YouTube is saturated with an enormous range of similarly disgusting and often dangerous rot, and the fact that Google continues to host this material provides a key continuing incentive for ever larger quantities of such content to be produced, making Google directly culpable in its spread.

I spent enough time consulting internally with Google to realize that there are indeed many situations where making value judgments regarding YouTube content can be extremely difficult, to say the least.

But many of these prank and dare videos aren’t close calls at all — they are outright dangerous and yes, potentially deadly. And as we’ve seen they are typically extremely easy to find.

The longer that these categories are permitted to fester on YouTube, the greater the risks to Google of ham-fisted government regulatory actions that frankly are likely to do more harm than good.

Google can do so much better than this.

–Lauren–

Perhaps the Best Feature Ever Comes to Chrome: Per Site Audio Muting!

UPDATE (January 25, 2018): This feature is now available in the standard, stable, non-beta version of Chrome!

– – –

Tired of sites that blare obnoxious audio at you from autoplay ads or other videos, often from background tabs, sometimes starting long after you’ve moved other tabs to the foreground? Aren’t these among the most disgustingly annoying of sites? Want to put them in their place at last?

Of course you do.

And as promised by Google some months ago, the new Chrome browser beta — I’m using “Version 64.0.3282.24 (Official Build) beta (64-bit)” on Ubuntu Linux — provides the means to achieve this laudable goal.

There are a number of ways to use this truly delightful new feature.

If you click on the address bar padlock (or, for unencrypted pages, usually an “i” icon), you may see a sound “enable/disable” link on the settings tab that appears, or you may need to click on “site settings” from that tab. In the former case, you can choose “allow” or “block” directly; in the latter case, you can do this from the “sound” entry on the full site settings page that appears.

There’s an easier way, too. Right click on the offensive site’s tab. You can choose “Mute site” or “Unmute site” from there. 

These mute selections are “sticky” — they will persist between invocations of the browser — exactly the behavior that we want.

You can also manually enter a list of sites to mute (and delete existing selections) at the internal address: 

chrome://settings/content/sound

And as a special bonus, consider enabling the longstanding “Tab audio muting UI control” experiment in Chrome on the page at the internal address:

chrome://flags

This lets you mute or unmute a specific tab by clicking on the tab “speaker” icon, without changing the underlying site mute status — perfect if you want to hear the audio for a specific video at a site that you normally want to keep firmly gagged. 
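If you’re curious where these sticky per-site choices actually live, Chrome keeps them in your profile’s “Preferences” JSON file. Here is a minimal Python sketch that lists them; note that the “sound” content-settings key and the meaning of the setting values (1 = allow, 2 = block) are my own observations from poking at a local profile, not any documented or stable interface:

import json
from pathlib import Path

# Default profile location on Linux; adjust for your OS and profile.
prefs_path = Path.home() / ".config/google-chrome/Default/Preferences"

with open(prefs_path, encoding="utf-8") as f:
    prefs = json.load(f)

# Assumed key path, observed in a local profile -- not a stable API.
exceptions = (prefs.get("profile", {})
                   .get("content_settings", {})
                   .get("exceptions", {})
                   .get("sound", {}))

for pattern, entry in sorted(exceptions.items()):
    state = {1: "allow", 2: "block"}.get(entry.get("setting"), "unknown")
    print(f"{state:7} {pattern}")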

I have long been agitating for a site mute feature in Chrome — my great thanks to the Chrome team for this excellent implementation!

In due course we can expect this new capability to find its way from Chrome beta to stable, but if you’re running the latest beta version, you can start enjoying it right now.

You’re going to love it.

–Lauren–

Google Wisely Pauses Move to Impose Accessibility Restrictions

Last month, in “Google’s Extremely Shortsighted and Bizarre New Restrictions on Accessibility Services” — https://lauren.vortex.com/2017/11/13/googles-extremely-shortsighted-and-bizarre-new-restrictions-on-accessibility-services — I was highly critical of Google’s move to restrict Android app accessibility services only to apps that were specifically helping disabled persons.

Google’s actions were assumed to be aimed at preventing the security problems that can result when these accessibility services are abused — but these services also provide critical functionality to other well-behaved apps, functionality that cannot currently be offered to most Android users without the use of those services.

My summary statement in that post regarding this issue was:

“The determining factor shouldn’t be whether or not an app is using an accessibility service function within the specific definition of helping a particular class of users, but rather whether or not the app is behaving in an honest and trustworthy manner when it uses those functions.”

I’m pleased to report that Google is apparently now in the process of reevaluating their entire stance on this important matter. Developers have received a note from Google announcing that they are “pausing” their decision, and including this text:

“If you believe your app uses the Accessibility API for a responsible, innovative purpose that isn’t related to accessibility, please respond to this email and tell us more about how your app benefits users. This kind of feedback may be helpful to us as we complete our evaluation of accessibility services.”

Bingo. This is exactly the approach that Google should be taking to this situation, and I’m very glad to see that the negative public reactions to their earlier announcement have been taken to heart.

We’ll have to wait and see what Google’s final determinations are regarding this area, but my thanks to the Google teams involved for giving the feedback the serious consideration that it deserves.

–Lauren–

Risks of Google Home and Amazon Echo as 24/7 Bugs

One of the most frequent questions that I receive these days relates to the privacy of “smart speaker” devices such as Google Home, Amazon Echo, and similar devices appearing from other firms.

As these devices proliferate around us — driven by broad music libraries, powerful AI assistants, and a rapidly growing pantheon of additional capabilities — should we have privacy concerns?

Or more succinctly, should we worry about these “always on” microphones being subverted into 24/7 bugging devices?

The short and quick answer is yes. We do need to be concerned.

The full and more complete answer is decidedly more complicated and nuanced.

The foundational truth is fairly obvious — if you have microphones around, whether they’re in phones, voice-controlled televisions, webcams, or the rising category of smart speaker devices, the potential for bugging exists, with an obvious focus on Internet-connected devices.

Indeed, many years ago I began writing about the risks of cellphones being used as bugs, quite some time before it became known that law enforcement was using such techniques, and well before smartphone apps made some forms of cellphone bugging trivially simple.

And while I’m an enthusiastic user of Google Home devices (I try to avoid participating in the Amazon ecosystem in any way), the potential privacy issues with smart speakers have always been present — and how we deal with them going forward is crucial.

For more background, please see:

“Why Google Home Will Change the World” – https://lauren.vortex.com/2016/11/10/why-google-home-will-change-the-world

Since I’m most familiar with Google’s devices in this context, I will be using them for my discussion here, but the same sorts of issues apply to all microphone-enabled smart speaker products regardless of manufacturer.

There are essentially two categories of privacy concerns in this context.

The first is “accidental” bugging. That is, unintended collection of voice data, due to hardware and/or firmware errors or defects.

An example of this occurred with the recent release of Google’s Home Mini device. Some early units could potentially send a continuous stream of audio data to Google, rather than the intended behavior of only sending audio after the “hot word” phrase was detected locally on the unit (e.g. “Hey Google”).  The cause related to an infrequently used manual switch on the Mini, which Google quickly disabled with a firmware update.

Importantly, the Mini gave “clues” that something was wrong. The activity lights reportedly stayed on — indicating voice data being processed — and the recorded data showed up in user “My Activity” for users’ inspection (and/or deletion). For more regarding Google’s excellent My Activity system, please see:

“The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

Cognizant of the privacy sensitivities surrounding microphones, smart speaker firms have taken proactive steps to try to avoid problems. As I noted above, the normal model is to only send audio data to the cloud for processing after hearing the “hot word” phrase locally on the device.

Also, these devices typically include a button or switch that users can employ to manually disable the microphones.

I’ll note here that Google lately took a step backwards in this specific respect. Until recently, you could mute the microphone by voice command, e.g., “OK Google, mute the microphone.” But now Google has disabled this voice command, with the devices replying that you must use the switch or button to disable the mic.

This is not a pro-privacy move. While I can understand Google wanting to avoid unintended microphone muting that would then require users to manually operate the control on the device to re-enable the mic, there are many situations where you need to quickly disable the mic (e.g. during phone calls, television programs, or other situations where Google Home is being discussed) to avoid false triggering when the hotword phrase happens to be mentioned. 

The correct way of dealing with this situation would be to make voice-operated microphone muting capability an option in the Google Home app. It can default to off, but users who prefer the ability to quickly mute the microphone by voice should be able to enable such an option.

So far we’ve been talking about accidental bugging. What about “purposeful” bugging?

Now it really starts to get complicated. 

My explicit assumption is that the major firms producing these devices and their supporting infrastructures would never willingly engage in purposeful bugging of their own accord. 

Unfortunately, in today’s world, that’s only one aspect of the equation.

Could these categories of devices (from any manufacturers) be hacked into being full-time bugs by third-parties unaffiliated with these firms? We have to assume that the answer in theory (and based on some early evidence) is yes, but we can also assume that these firms have made this possibility as unlikely as possible, and will continually work to make such attacks impractical.

Sad to say, of much more looming concern is governments going to these firms and ordering/threatening them into pushing special firmware to targeted devices (or perhaps to devices en masse) to enable bugging capabilities. In an age where an admitted racist, Nazi-sympathizing, criminal serial sexual predator resides in the White House and controls the USA law enforcement and intelligence agencies, we can’t take any possibilities off of the table. Google for one has a long and admirable history of resisting government attempts at overreach, but — as just one example — we don’t know how far the vile, lying creature in the Oval Office would be willing to go to achieve his evil ends.

Further complicating this analysis is a lack of basic public information about the hardware/firmware structure of these devices.

For example, is it possible in Google Home devices for firmware to be installed that would enable audio monitoring without blinking those activity lights? Could firmware changes keep the microphone active even if the manual disable button or switch has been triggered by the user, causing the device mic to appear disabled when it was really still enabled?

These are largely specific hardware/firmware design questions, and so far my attempts to obtain information about these aspects from Google have been unsuccessful.

If you were hoping for a brilliant, clear-cut, “This will solve all of these problems!” recommendation here, I’m afraid that I must disappoint you.

Beyond the obvious suggestion that the hardware of these devices should be designed so that “invisible bugging” potentials are minimized, and the even more obvious (but not very practical) suggestion of unplugging the units if and when you’re concerned (’cause let’s face it, the whole point is for them to be on the ready when you need them!), I don’t have any magic wand solutions to offer here. 

Ultimately, all of us — firms like Google and Amazon, their users, and the community at large — need to figure out where to draw the lines to achieve a reasonable balance between the vast positive potential of these devices and the very real potential risks that come with them as well.

Nobody said that this stuff was going to be easy. 

Be seeing you.

–Lauren–

In the Amazon vs. YouTube War, Google is Right — and Wrong

You’ve probably heard that there’s an escalating “YouTube War” between Amazon and Google, one that has now led to Google cutting off users of Amazon’s Fire and Echo Show products from YouTube, leaving legions of confused and upset users in its wake.

I’m no fan of Amazon. I intensely dislike their predatory business practices and the way that they treat many of their workers. I studiously avoid buying from Amazon.

Google has a number of completely legitimate grievances with Amazon. The latter has refused to carry key Google products that compete with Amazon products, while still designing those Amazon devices to access Google services like YouTube. Amazon has also played fast and loose with the YouTube Terms of Service in a number of ways.

I can understand Google finally getting fed up with this kind of Amazon behavior. Google is absolutely right to be upset.

However, Google is wrong in the approach that they’ve taken to deal with these issues, and this may do them considerable ongoing damage, even long after the current dispute is settled.

Cutting those Amazon device users off from YouTube with essentially a “go access YouTube some other way” message is not buying any good will from those users — exactly the opposite, in fact.

These users aren’t concerned about Google’s marketing issues; they just want to see the programming that they bought their devices to access — and YouTube is a major part of that.

As the firm that’s cutting off these users from YouTube, it’s Google that will take the brunt of user anger, and the situation unnecessarily sows distrust about Google’s behavior in the future. This can impact users’ overall feelings about Google in negative ways that go far beyond YouTube.

Worse, this kind of situation provides long-term ammunition to Google haters who are looking for any excuse to try to bring antitrust or other unwarranted regulatory focus onto Google itself.

Essentially, Amazon laid a trap for Google in this instance, and Google walked right into it.

There is a much better approach available to Google for dealing with this.

Rather than cutting off those Amazon device users, permit them to continue accessing YouTube, but only after presentation of a brief interstitial very succinctly explaining Google’s grievances with Amazon. Rather than making enemies of those users, bring them around to an understanding of Google’s point of view.

But above all, don’t punish those Amazon users by cutting them off from YouTube as you’re doing now.

Your righteous battle is with Amazon. But those Amazon device users should be treated as your allies in this war, not as your enemies!

And that’s the truth.

–Lauren–

Google Agrees: It’s Time for More Humans Fighting YouTube Hate and Child Exploitation Videos

Regular readers of my missives have probably grown tired of my continuing series of posts relating to my concerns regarding particular categories of videos that have increasingly contaminated Google’s YouTube platform.

Very briefly: I’m one of YouTube’s biggest fans. I consider YT to be a wonder of the world, both technologically and in terms of vast swathes of its amazing entertainment and educational content. I would be horrified to see YouTube disappear from the face of the planet.

That said, you know that I’ve been expressing increasing concerns regarding extremist and other hate speech, child exploitation, and dangerous prank/dare videos that increasingly proliferate and persist on YouTube, often heavily monetized with ads.

I have never ascribed evil motives to Google in these regards. YouTube needs to bring in revenue both for its own operations and to pay creators — and the absolute scale of YouTube is almost unimaginably enormous.

At Google’s kind of scale, it’s completely understandable that Google has a strong institutional bias toward automated, algorithmic systems to deal with content of all sorts.

However, I have long argued that the changing shape of the Internet requires more humans to “ride herd” on those algorithms, to fill in the gaps where algorithms tend to stumble, and to provide critical sanity checking. This is of course an expensive proposition, but my view has been that Google has the resources to do this, given the will to do so.

I’m pleased to report that Google/YouTube has announced major moves in exactly these sorts of directions that I have long recommended:

https://youtube.googleblog.com/2017/12/expanding-our-work-against-abuse-of-our.html

YouTube will grow its ranks of *human* video reviewers to a total of over 10,000 in 2018, will expand liaisons with outside expert groups and individuals, and will tighten advertising parameters (including more human curation), among other very positive steps.

At YouTube scale, successful execution of these plans will be anything but trivial, but as I’ve said about various issues, Google *can* do this!

My thanks to the YouTube teams, and especially to YouTube CEO Susan Wojcicki, for these very welcome moves that should help to assure a great future both for YouTube and its many users!

–Lauren–

Easy Access to SSL Certificate Information Is Returning to Google’s Chrome Browser


You may recall that back early this year I expressed concerns that direct, obvious access to SSL encryption security certificate information had been removed from Google’s Chrome browser:

“Here’s Where Google Hid the SSL Certificate Information That You May Need” – https://lauren.vortex.com/2017/01/28/heres-where-google-hid-the-ssl-certificate-information-you-may-need

As I noted then, there are frequent situations where it’s extremely useful to inspect the SSL certificate info, because the use of SSL (https: — that is, the mere presence of a “green padlock” on a connection) indicates that the connection is encrypted, but that’s all. The padlock alone does not render any sort of judgment regarding the authenticity or legitimacy of the site itself — but the details in an SSL cert can often provide useful insight into these sorts of aspects.

After the change to Chrome that I reported last January, it was no longer possible to easily obtain the certificate data by simply doing the obvious thing — clicking the green padlock and then an additional click to see the cert details. It was still possible to access that data, but doing so required manipulation of the browser “developer tools” panels, which are (understandably) not obvious to most users.

I’m pleased to report that easy access to the SSL cert data via the green padlock icon is returning to Chrome. It is already present in the Linux beta version that I run, and would be expected to reach Chrome’s stable versions on all platforms in due course. 

With this feature in place, you simply click the green padlock icon and then click the obvious “Valid” link under the “Certificate” section at the top. The SSL cert data opens right up for quick and direct inspection. The version of Chrome that you’re running currently may not have this feature implemented quite yet, but it’s on the way.
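Incidentally, if your Chrome hasn’t received the feature yet, you can always pull a site’s cert outside the browser entirely. Here’s a quick sketch using nothing but the Python standard library (substitute whatever host you want to inspect):

import socket
import ssl

host = "www.example.com"  # the site you want to inspect

context = ssl.create_default_context()  # verifies against the system CAs
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("Subject:", dict(item[0] for item in cert["subject"]))
print("Issuer: ", dict(item[0] for item in cert["issuer"]))
print("Valid:  ", cert["notBefore"], "through", cert["notAfter"])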

My thanks to the Chrome team!

–Lauren–

Stupid Story Claiming Google Tracking — Plus the USA Healthcare Nightmare

I’ve been receiving many queries as to why this blog and my lists have been so quiet lately. I would have preferred to say nothing about this, but I don’t want anyone concerned that they’ve been dropped off a list or are otherwise being subjected to technical issues. The lists are OK, the servers are running for now, and there’s nothing associated amiss at your end.

The executive summary of what’s going on is that I’m not well — I’ll spare you the details — and it’s unclear what I can do about it, given the dismal, insane state of health insurance in this country, especially for persons like me who have to deal with the collapsed individual medical insurance market (that is, who don’t have an employer, and so don’t have employer provided medical insurance).

The GOP and Satan’s Sociopath in the Oval Office are working deliberately to destroy health insurance and ruin lives for the sake of enriching their uber-wealthy, vile brethren. But even without those deliberate efforts at sabotage, the healthcare system itself has already utterly collapsed for vast numbers of people without steady incomes and who are too young or don’t qualify for Medicare — which the GOP is also working to decimate. The holes in Obamacare/ACA are big enough to toss the moon through, creating horrific “Catch-22” nightmares for persons with very low income levels and who cannot reasonably see into the future to predict their next year’s income.

The upshot of all this is that I simply cannot physically keep up under these conditions, and these public venues will be very quiet until such a time, if ever, that the overall situation changes. Sorry about that, Chief.

Since I was sending this item out anyway, I wanted to mention one rather crazy tech story going around currently. Obviously there’s been any number of technology issues recently about which I’d ordinarily have said something — most of them depressing as usual.

But there’s one in the news now about Google that is just so stupid that it can make your head explode, a “Google is secretly tracking your phone” scare piece. 

And as usual, Google isn’t addressing it in ways that ordinary people can understand, so it’s continuing to spread, the haters are latching on, and folks have started calling me asking about it in panic. 

Sometimes I think that Google must have a sort of suicide complex, given the way that they watch again and again how these sorts of stories get out of control without Google providing explanations beyond quotes to the trade press. Newsflash! Most ordinary non-techies don’t read the trade press!

Yeah, I know, Google just hopes that by saying as little as possible that the stories will fade away. But it’s like picking up comic strips with Silly Putty (anyone else remember doing that?) — you can keep folding the images inward but eventually the entire mass of putty is a dark mass of ink.

You’d think that with so many opportunistic regulatory and political knives out to attack Google these days, Google would want to speak to these issues clearly in language that ordinary folks could understand, so that these persons aren’t continuously co-opted by the lies of Google haters. I’ve done what I could to explain these issues in writing and on radio, but as I’ve said before this should be Google’s job — providing authoritative and plain language explanations for these issues. It’s not something that Google should be relying on outsiders to do for them willy-nilly.

The latest story is a lot of ranting and gnashing of teeth over the fact that Android phones have apparently been sending mobile phone network cell IDs to Google. Not that Google did anything with them — they’ve been tossing them and are making changes so that they don’t get sent at all. The complaint seems to be that these were sent even if users opted out of Google’s location services.

But the whole point is that the cell IDs had nothing to do with Google location geo services; they relate to the basic network infrastructure required to get notifications to the phones. It’s basically the same situation as standard mobile text messages — the network needs to know where the phone is connected at the moment in order to deliver a text message, other notifications, or even an ordinary phone call!

In a response apparently aimed at the trade press, Google talked about MCC and MNC codes and related tech lingo that all mean pretty much NOTHING to most people who are hearing this “tracking” story. 

Let me put this into plain English.

If your cell phone is turned on, the cellular networks know where you are — usually to a good degree of accuracy these days even without GPS. That’s how they work. That’s how you receive calls and text messages. It’s a core functionality that has nothing to do with Google per se.

You know all those news stories you see about crooks who get caught through tracking of their cell phones via location info that authorities get from the cellular carriers? 

Have you ever thought to yourself, “Why don’t those morons just turn off their phones when they don’t want to be tracked?”

It’s not Google that you need to be worried about. They have powerful protections for user data, and are extremely, exceptionally strict about when authorities can obtain any of it. On the other hand, the cellular carriers have traditionally been glad to hand over largely any user data that authorities might request for virtually any reason, often on a “nod and a wink” basis. You want something to worry about? Don’t worry about Google, worry about those cellular carriers.

Nor do you need to be a crook to turn off your phone when you don’t even want the carriers to know where you are. You want to use local apps? Fine, instead of turning the phone off, disable the phone’s radios by activating the “Airplane Mode” that all smartphones have available. 
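For the technically inclined, the MCC and MNC codes in Google’s response are nothing exotic either. They’re simply numeric identifiers for the country and the carrier network — they say which network the phone is registered on, not where you are. A trivial Python illustration (the two carrier entries are well-known public identifiers, included purely for flavor):

# A PLMN code is simply MCC (country) followed by MNC (network).
KNOWN_NETWORKS = {
    "310260": "T-Mobile USA",
    "310410": "AT&T Mobility",
}

def parse_plmn(plmn):
    # US networks use 3-digit MNCs; many other countries use 2 digits.
    return plmn[:3], plmn[3:]

mcc, mnc = parse_plmn("310260")
print(f"MCC {mcc}, MNC {mnc}: {KNOWN_NETWORKS.get(mcc + mnc, 'unknown')}")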

This is all of the writing that I can manage right now and will probably be all that I have to say here for an indeterminate period. I can’t guarantee timely or even any responses to queries, but I’ll try to keep this machinery running the best that I can under the circumstances.

The best to you and yours for the holiday weekend and for the entire holiday season.

Please take care.

–Lauren–

How the Internet Broke the Planet

I am not an optimistic person by nature. I’ve tended — pretty much through my entire life — to always be wary of how things could go wrong. In some ways, I’ve found this to be a useful skill — when writing code it’s important to cover the range of possible outcomes and error states, and properly provide for their handling in a program or app.

Then again, I’ve never been much fun at parties. When I went to parties. Which has been very infrequently.

Mostly, I’ve spent my adult life in front of computer screens of all sorts (and before that, various forms of teletypes, other teleprinters, and even the occasional 029 keypunch machine).

I started writing publicly in the early 70s at the Internet’s ancestor ARPANET site #1 at UCLA, often on the very early mailing lists like Human-Nets, MsgGroup, or SF-Lovers (yes, and Network-Hackers, too). I even monitored the notorious Wine-Tasters list — though not being much of a drinker I uncharacteristically didn’t have much to say there.

Back then there were no domains, so originally I was LAUREN@UCLA-ATS (the first host on ARPANET) and later LAUREN@UCLA-SECURITY as well.

Much of my writing from those days is still online or has been brought back online. Looking it over now, I find that while there are minor points I might change today, overall I’m still willing to stand by everything I’ve written, even from that distant past.

My pessimism was already coming through in some of those early texts. While many in the ARPANET community were convinced that The Network would bring about the demise of nationalities and the grand rising up of a borderless global world of peace and tranquility, I worried that once governments and politicians really started paying attention to what we were doing, they’d find ways to warp it to their own personal and political advantages, perhaps using our technology for new forms of mass censorship.

And I feared that if the kind of networking tech we had created ever found its way into the broader world, evil would ultimately be more effective at leveraging its power than good would be.

Years and decades went by, as I stared at a seemingly endless array of screens and no doubt typed millions of words.

So we come to today, and I’m still sitting here in L.A. — the city where I’ve always lived — and I see how the Internet has been fundamentally broken by evil forces only some of which I foresaw years ago.

Our wonderful technology has been hijacked by liars, Nazis, pedophiles and other sexual abusing politicians, and an array of other despicable persons who could only gladden the hearts of civilization’s worst tyrants.

Our work has been turned into tools for mass spying, mass censorship, political oppression, and the spreading of hateful lies and propaganda without end.

I have never claimed to be evenhanded or dispassionate when it came to my contributions to — and observations of — the Internet and its impact on the world at large.

Indeed the Net is a wonder of civilization, on par with the great inventions like the wheel, like the printing press, like penicillin. But much as nuclear fission can be used to kill cancer or decimate cities, the Internet has proven to be a quintessential tool that can be used for both good and evil, for glories of education and communications and the availability of information, but also for the depths of theft and extortion and hate.

The dark side seems to be winning out, so I won’t pull any punches here. 

I have enormous respect for Google. I have pretty much nothing but disdain for Facebook. My feelings about Twitter are somewhere in between. It’s difficult these days to feel much emotion at all about Microsoft one way or another.

None of these firms — or the other large Internet companies — are all good or all bad. But it doesn’t take rocket science (or computer science for that matter) to perceive how Google is about making honest information available, Facebook is about controlling information and exploiting users, and Twitter doesn’t seem to really care anymore one way or another, so long as they can keep their wheels turning.

This is obviously something of an oversimplification. Perhaps you disagree with me — sometimes, now, or always — and of course that’s OK too.

But I do want you to know that I’ve always strived to offer my honest views, and to never arbitrarily nor irrationally take sides on an issue. If the result has been that at one time or another pretty much everyone has disagreed with something I’ve said — so be it. I make no apologies for the opinions that I’ve expressed, and I’ve expected no apologies in return.

In the scheme of things, the Internet is still a child, with a lifetime to date even shorter than that of us frail individual human animals.

The future will with time reveal whether our work in this sphere is seen as a blessing or curse — or most likely as some complex brew of both — by generations yet to come. Some of you will see that future for yourselves, many of us will not.

Such is the way of the world — not only when it comes to technology, but in terms of virtually all human endeavors.

Take care, all.

–Lauren–

Google Maps’ New Buddhist “Swastika”

I’m already getting comments — including from Buddhists — suggesting that Google Maps’ new iconography tagging Buddhist temples with the ancient symbol that is perceived by most people today as a Nazi swastika is problematic at best, and is likely to be widely misinterpreted. I agree. I’m wondering if Google consulted with the Buddhist community before making this choice. If not, now is definitely the time to do so.

–Lauren–

UPDATE (November 16, 2017): Google tells me that they are restricting use of this symbol to areas like Japan “where it is understood” and are using a different symbol for localization in most other areas. I follow this reasoning, but it’s unclear that it avoids the problems with such a widely misunderstood symbol. For example, I’ve received concerns about this from Buddhists in Japan, who fear that the symbol will be “latched onto” by haters in other areas. And indeed, I’ve already been informed of “Nazi Japan” posts from the alt-right that cite this symbol. The underlying question is whether or not such a “hot button” symbol can really be restricted by localization into not being misunderstood in other areas and causing associated problems. That’s a call for Google to make, of course.

Google’s Extremely Shortsighted and Bizarre New Restrictions on Accessibility Services

UPDATE (December 8, 2017): Google Wisely Pauses Move to Impose Accessibility Restrictions

UPDATE (November 17, 2017): Thanks Google for working with LastPass on this issue! – Google details Autofill plans in Oreo as LastPass gets reprieve from accessibility removals

– – –

My inbox has been filling today with questions regarding Google’s new warning to Android application developers that they will no longer be able to access Android accessibility service functions in their apps, unless they can demonstrate that those functions are specifically being used to help users with “disabilities” (a term not defined by Google in the warning).

Beyond the overall vagueness when it comes to what is meant by disabilities, this entire approach by Google seems utterly wrongheaded and misguided.

My assumption is that Google wants to try to limit the use of accessibility functions on the theory that some of them might represent security risks of one sort or another in specific situations.

If that’s actually the case — and we can have that discussion separately — then of course Google should disable those functions entirely — for all apps. After all, “preferentially” exposing disabled persons to security risks doesn’t make any sense.

But more to the point, these accessibility functions are frequently employed by widely used and completely legitimate apps that use these functionalities to provide key features that are not otherwise available under various versions of Android still in widespread deployment.

Google’s approach to this situation just doesn’t make sense. 

Let’s be logical about this.

If accessibility functions are too dangerous from security or other standpoints to potentially be used in all legitimate apps — including going beyond helping disabled persons per se — then they should not be permitted in any apps.

Conversely, if accessibility functions are safe enough to use for helping disabled persons using apps, then they should be safe enough to be used in any legitimate apps for any honest purposes.

The determining factor shouldn’t be whether or not an app is using an accessibility service function within the specific definition of helping a particular class of users, but rather whether or not the app is behaving in an honest and trustworthy manner when it uses those functions.

If a well-behaved app needs to use an accessibility service to provide an important function that doesn’t directly help disabled users, so what? There’s nothing magical about the term accessibility.

Apps functioning honestly that provide useful features should be encouraged. Bad apps should be blown out of the Google Play Store. It’s that simple, and Google is unnecessarily muddying up this distinction with their new restrictions.

I encourage Google to rethink their stance on this issue.

–Lauren–

T-Mobile’s Scammy New Online Payment System


Traditionally, one of the aspects of T-Mobile that subscribers have really liked is how quickly and easily they could pay their bills online. A few seconds was usually all that was needed, and it could always be done in a security-positive manner.

No more. T-Mobile has now taken their online payment system over to the dark side, using several well-known methods to try to trick subscribers into taking actions that they probably don’t really want to take in most instances.

First, their fancy new JavaScript payment window completely breaks the Chrome browser autofill functions for providing credit card data securely. All credit card data must now be entered manually on that T-Mobile payment page.

One assumes that T-Mobile site designers are smart enough to test such major changes against the major browsers, so perhaps they’re doing this deliberately. But why?

There are clues.

For example, they’ve pre-checked the box for “saving this payment method.” That’s always a terrible policy — many users explicitly avoid saving payment data on individual sites subject to individual security lapses, and prefer to save that data securely in their browsers to be entered onto sites via autofill.

But if a firm’s goal is to encourage people to accept a default of saving a payment method on the site, breaking autofill is one way to do it, since filling out all of the credit card data every time is indeed a hassle.

There’s more. After you make your payment, T-Mobile now pushes you very hard to make it a recurring autopay payment from that payment method. The “accept” box is big and bright. The option to decline is small and lonely. Yeah, they really want you to turn on autopay, even if it means tricking you into doing it.

Wait! There’s still more! If you don’t have autopay turned on, T-Mobile shows an alert, warning you that a line has been “suspended” from autopay and urging you to click and turn it back on. They say this even if you never had autopay turned on for that line in the first place.

No, T-Mobile hasn’t broken any laws with any of this. But it’s all scammy at best and really the sort of behavior we’d expect from AT&T or Verizon, not from T-Mobile.

And that’s the truth.

–Lauren–

Facebook’s Staggeringly Stupid and Dangerous Plan to Fight Revenge Porn

I’m old enough to have seen a lot of seriously stupid ideas involving the Internet. But no matter how incredibly asinine, shortsighted, and nonsensical any given concept may be, there’s always room for somebody to come up with something new that drives the needle even further into the red zone of utterly moronic senselessness. And the happy gang over at Facebook has now pushed that poor needle so hard that it’s bent and quivering in total despair. 

Facebook’s new plan to fight the serious scourge of revenge porn is arguably the single most stupid — and dangerous — idea relating to the Internet that has ever spewed forth from a major commercial firm. 

It’s so insanely bad that at first I couldn’t believe that it was real — I assumed it was a satire or parody of some sort. Unfortunately, it’s all too real, and the sort of stuff that triggers an urge to bash your head into the wall in utter disbelief.

The major Internet firms typically now have mechanisms in place for individuals to report revenge porn photos for takedown from postings and search results. Google for example has a carefully thought out and completely appropriate procedure that targeted parties can follow in this regard to get such photos removed from search results. 

So what’s Facebook’s new plan? They want you to send Facebook your own naked photos even before they’ve been abused by anyone — even though they might never be abused by anyone!

No, I’m not kidding. Facebook’s twisted idea is to collect your personal naked and otherwise compromising sexually-related photos ahead of time, so just in case they’re used for revenge porn later, they can be prevented from showing up on Facebook. Whether or not it’s a great idea to have photos like that around in the first place is a different topic, but note that by definition we’re talking about photos already in your possession, not secret photos surreptitiously shot by your ex — which are much more likely to be the fodder for revenge porn.

Now, you don’t need to be a security or privacy expert, or a computer scientist, to see the gaping flaws in this creepy concept. 

No matter what the purported “promises” of privacy and security for the transmission of these photos and how they’d be handled at Facebook, they would create an enormous risk to the persons sending them if anything happened to go wrong. I won’t even list the voluminous possibilities for disaster in Facebook’s approach — most of them should be painfully obvious to pretty much everyone.

Facebook appears to be trying to expand into this realm from a methodology already used against child abuse photos, where such abuse photos already in circulation are “hashed” into digital “signatures” that can be matched if new attempts are made to post them. The major search and social media firms already use this mechanism quite successfully. 

But again, that involves child images that are typically already in public circulation and have already done significant damage.

In contrast, Facebook’s new plan involves soliciting nude photos that typically have never been in public circulation at all — well, at least before being sent in to Facebook for this plan, that is. 

Yes, Facebook will put photos at risk of abuse that otherwise likely would never have been abused!

Facebook wants your naked photos on the theory that holy smokes, maybe someday those photos might be abused and isn’t it grand that Facebook will take care of them for us in advance!

Is anybody with half a brain buying their spiel so far? 

Would there be technically practical ways to send photo-related data to Facebook that would avoid the obvious pitfalls of their plan? Yep, but Facebook has already shot them down.

For example, users could hash the photos using software on their own computers, then submit only those hashes to Facebook for potential signature matching — Facebook would never have the actual photos.

Or, users could submit “censored” versions of those photos to Facebook. In fact, when individuals request that Google remove revenge porn photos, Google explicitly urges them to use photo editing tools to black out the sensitive areas of the photos, before sending them to Google as part of the removal request — an utterly rational approach.
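To be clear about what that first alternative — local hashing — actually means: the photo itself never leaves your machine; only a digest does. The sketch below uses a plain SHA-256 purely as a stand-in. Real matching systems use perceptual hashes (PhotoDNA-style) that survive resizing and recompression, which an ordinary cryptographic hash does not — that difference is part of Facebook’s stated objection:

import hashlib
import sys

def photo_digest(path):
    # Stream the file so large photos don't need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Only this hex string would ever be submitted -- never the photo.
    print(photo_digest(sys.argv[1]))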

Facebook will have none of this. Facebook says that you must send them the uncensored photos with all the goodies intact. They claim that local hashing won’t work, because they need to have humans verify the original uncensored photos before they’re “blurred” for long-term storage. And they fear that allowing individuals to hash photos locally would subject the hashing algorithms to reverse engineering and exploitation.

Yeah, Facebook has an explanation for everything, but taken as a whole it makes no difference — the entire plan is garbage from the word go.

I don’t care how trusted and angelic the human reviewers of those uncensored submitted nude photos are supposed to be or what “protections” Facebook claims would be in place for those photos. Tiny cameras capable of copying photos from internal Facebook display screens could be anywhere. If human beings at Facebook ever have access to those original photos, you can bet your life that some of those photos are eventually going to leak from Facebook one way or another. You’ll always lose your money betting against human nature in this regard.

Facebook should immediately deep-six, bury, terminate, and otherwise cancel this ridiculous plan before someone gets hurt. And next time Facebook bros, how about doing some serious thinking about the collateral risks of your grand schemes before announcing them and ending up looking like such out-of-touch fools.

–Lauren–

3D Printed Wall Mount for the Full-Sized Google Home

Since the 3D printed wall mount for my Google Home Mini worked out quite nicely (details here), I went ahead yesterday and printed a different type of wall mount for my original Google Home (which is more suited for music listening given its larger and more elaborate speaker system — it even has decent bass response.)

Performance of the Google Home when mounted on the wall seems exemplary, both in terms of audio reproduction and the performance of its integral microphones. 

The surface of the mount meshes with the contours on the bottom of the Google Home unit, providing additional stability.

At the end of this post, I’ve included photos of the printed mount itself, the mount on the wall with Google Home installed, and a very brief video excerpt of the printing process. 

The model for this mount is from “westlow” at: https://www.thingiverse.com/thing:2426589 (I used the “V2” version).

As always, if you have any questions, please let me know. 

Be seeing you.

–Lauren–

(Please click images to enlarge.)

Some Background on 3D Printing Gadgets for the Google Home Mini

UPDATE (October 30, 2017): 3D Printed Wall Mount for the Full-Sized Google Home

– – –

Over on Google+ I recently posted several short items regarding a tiny plastic mount that I 3D printed a couple of days ago to hang my new Google Home Mini on my wall (see 2nd and 3rd photos below, for the actual model file please see: https://www.thingiverse.com/thing:2576121 by “Jakewk13”).

This virtually invisible wall mount is perfectly designed for the Mini and couldn’t be simpler. Technically, the Mini is upside down when you use this mount, but of course it works just fine. Thanks Google for sending me a Mini for my ongoing experiments!

I’ve since received quite a few queries about my printing facilities, such as they are.

So the 1st photo below shows my 3D printer setup. Yes, it looks like industrial gear from one of the “SAW” torture movies, but I like it that way. This is an extremely inexpensive arrangement, where I make up for the lack of expensive features with a fair degree of careful ongoing calibration and operational skill, but it serves me pretty well. I can’t emphasize enough how critical accurate calibration is with 3D printing, and there’s a significant learning curve involved.

The basic unit started as a very cheap Chinese clone printer kit that I built and mounted on that heavy board for stability. Then, hardware guy that I’ve always been, I started modifying. As is traditional, many of the additions and modifications were themselves printed on that printer. This includes the filament reel support brackets, calibration rods, filament guide, inductive sensor mount, and more. I installed an industrial inductive sensor at the forward left of the black extruder unit, to provide more precise Z-axis homing and to enable automatically adjusted print extrusion leveling.

I replaced the original cruddy firmware with a relatively recent Repetier dev build, which also enabled the various inductive sensor functions. I had to compile out the SD card support to make room for this build in my printer controller — but I never used the SD card on the printer (intended for standalone printing) anyway.

On the build platform, I use ordinary masking tape that gets a thin coat of glue stick immediately after I put the tape down. The tape and glue can last for quite a few prints before needing replacement.

I mainly print PLA filament. I never touch ABS — it warps, and its fumes smell awful and are highly toxic.

I almost always print at an extruder temperature of 205C and a bed temperature of 55C.
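For the curious, those two numbers just end up as a few lines of start G-code emitted by the slicer. M140/M104 set the bed and extruder temperatures, and M190/M109 are their “wait until reached” variants; these are the standard RepRap/Marlin commands. A trivial Python generator:

def start_gcode(extruder_c=205, bed_c=55):
    # Standard RepRap/Marlin temperature commands.
    return "\n".join([
        f"M140 S{bed_c}",       # start heating the bed
        f"M104 S{extruder_c}",  # start heating the extruder
        f"M190 S{bed_c}",       # wait for the bed to reach temperature
        f"M109 S{extruder_c}",  # wait for the extruder to reach temperature
    ])

print(start_gcode())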

The printer is driven by Repetier Server, which runs on Ubuntu 14.04 via Crouton on an older Chrome OS Chromebook. I typically use Linux Cura for model slicing.

I know, it’s all laughably inexpensive and not at all fancy by most people’s standards, but it does the job for me when I want to hang a Google gadget on the wall or need the odd matter-antimatter injector guide servo nozzle in a hurry.

Yep, it really is the 21st century.

–Lauren–

(Please click images to enlarge.)

Understanding Google’s New Advanced Protection Program for Google Accounts


I’ve written many times about the importance of enabling 2-factor authentication on your Google accounts (and other accounts, where available) as a basic security measure, e.g. in “Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good” — https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi — and in other posts too numerous to list here.  

Given this history, I’ve now begun getting queries from readers regarding Google’s newly announced and very important “Advanced Protection Program” (APP) for Google accounts — most queries being variations on “Should I sign up for it?”

The APP description and “getting started” page is at:

https://landing.google.com/advancedprotection/

It’s a well designed page (except for the now usual atrocious low contrast Google text font) with lots of good information about this program. It really is a significant increase in security that ordinary users can choose to activate, and yes, it’s free (except for the cost of purchasing the required physical security keys, which are available from a variety of vendors).

But back to that question. Should you actually sign up for APP?

That depends.

For the vast majority of Google users, the answer is likely no, you probably don’t actually need it, given the additional operational restrictions that it imposes.

However, especially for high-profile users who are most likely to be subjected to specifically targeted account attacks, APP is pretty much exactly what you need, and will provide you with a level of account security typically unavailable to most (if any) users at other commercial sites.

Essentially, APP takes Google’s existing 2-factor paradigm and restricts it to only its highest security components. While USB/Bluetooth security keys are the most secure option for conventional 2-factor use on Google accounts, the standard system also continues to accept other 2-factor options, such as SMS text messages and authenticator app codes. That flexibility suits most users, and minimizes the chances of their accidentally locking themselves out of their Google accounts.
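As an aside, those authenticator app codes are nothing mysterious. They’re time-based HMAC values per RFC 6238, along these lines (a bare-bones sketch for illustration only, not a substitute for a real authenticator):

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, interval=30):
    # RFC 6238: HMAC-SHA1 over the current 30-second counter value.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation" per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # a throwaway example secret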

APP requires the use of physical security keys — the other options are no longer available. If you lose the keys, or can’t use them for some reason, you’ll need to use a special Google account recovery procedure that could take up to several days to complete — a rigorous process to assure that it’s really you trying to regain access to the account.

There are other security-conscious restrictions to your account as well if you enable APP. For example, third-party apps’ access to your account will be significantly restricted, preventing a range of situations where users might otherwise accidentally grant overly broad permissions from outside apps to Google accounts.

It’s important to remember that there do exist situations where you are likely to not be able to use security keys. Public computers (and ironically, computers in high security environments) often have unusable USB ports and have Bluetooth locked in a disabled mode. These can be important considerations for some users.

Cutting to the chase, Google’s standard 2-factor systems are usually going to be quite good enough for most users and offer maximum flexibility — of course only if you enable them — which, yeah, you really should have done by now!

But in special cases for particularly high-profile or otherwise vulnerable Google users, the Advanced Protection Program could be the proverbial godsend that’s exactly what you’ve been hoping for.

As always, feel free to contact me if you have any additional questions about this.

Be seeing you.

–Lauren–

Explaining the Chromebook Security Scare in Plain English: Don’t Panic!

Yesterday I pushed out to various of my venues a Google notice regarding a security vulnerability relating to a long list of Chrome OS based devices (that is, “CrOS” on Chromebooks and Chromeboxes). That notice (which is titled more like a firmware upgrade advisory than a security warning per se) is at:

https://sites.google.com/a/chromium.org/dev/chromium-os/tpm_firmware_update

While that page is generally very well written, it is still quite technical in its language. Unfortunately, while I thought it was important yesterday to disseminate it as quickly as possible, I was not in a position to write any significant additional commentary to accompany those postings at that time. 

Today my inbox is filled with concerned queries from Chromebook and Chromebox users regarding this issue, who found that Google page to be relatively opaque.

Does this bug apply to us? Should we rush to upgrade? What happens if something goes wrong? Should our school be concerned — we’ve got lots of students using Chromebooks, what should we do? Help!

Here’s the executive summary — perhaps the way that Google should have said it: DON’T PANIC! — especially if you have strong passwords. Most of you don’t really have to worry much about this one. But please do keep reading, especially and definitely if you’re a corporate user or someone else in a particularly high security environment.

This is not a large-scale attack vulnerability, where millions of devices can be easily compromised. In fact, even in worst case scenarios, the attack is computationally “expensive” — meaning that much more “targeted” attacks, e.g., against perceived “high-value” individuals, would be the focus.

Google has already taken steps in their routine Chrome OS updates to mitigate some aspects of this problem and to make it an even less practical attack from the standpoint of most individual users, though the vulnerability cannot be completely closed via this approach for everyone.

The underlying problem is a flaw in the firmware (the programming) of a specific chip in these devices, called a TPM. Google didn’t expand that acronym in their notice, so I will — it stands for Trusted Platform Module.

The TPM is a crucial part of the cryptographic system that protects the data on Chrome OS devices. It’s sort of the “roach motel” of security chips — certain important crypto key data gets in there but can’t get out (yet can still be utilized appropriately by the system).

The TPM firmware flaw in question makes the possibility of “brute force” guessing of internal crypto keys more practical in a targeted sense, but again, not at large scale. And in fact, if you have a weak password, that’s a far greater vulnerability for most users than this TPM bug ever would have been. Google’s mitigations of this problem already provide good protection for most individual users with strong passwords.

C’mon, switch to a strong password already! You’ll sleep better.
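Some back-of-the-envelope arithmetic shows why. The guess rate below is a made-up round number for a well-resourced offline attacker; it’s the contrast between the two lines that matters:

GUESSES_PER_SECOND = 1e10  # assumed attacker capability, purely illustrative

def years_to_exhaust(alphabet_size, length):
    # Worst case: the attacker must try the entire keyspace.
    return alphabet_size ** length / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"8 lowercase letters:    {years_to_exhaust(26, 8):.6f} years")
print(f"12 mixed-case + digits: {years_to_exhaust(62, 12):,.0f} years")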

It’s really in high security corporate environments and similar situations where the TPM flaw is of more concern, particularly where individual users may be reasonably expected to be targets of security attacks.

Where firms or other organizations are using their own crypto certificates via the TPM to allow corporate or other access (or use “Verified Access” for enterprise-managed authentication), the TPM bug is definitely worthy of serious consideration.

Ordinary users can upgrade their TPM firmware if they wish (in enterprise-managed environments, you will likely need administrative permission to perform this). The procedure uses the “powerwash” function of the devices, as explained on the Google page.

But as also noted there, this is not a risk-free procedure. Powerwash wipes all user data from the device, and devices can fail to boot if things go wrong during the process. There are usually ways to recover even from that eventuality, but you probably don’t want to be in that position if you can reasonably avoid it.

For the record, I am personally not upgrading the TPM firmware on the Chrome OS devices that I use or manage at this time. They all have decent passwords, and especially for remote users I won’t risk the powerwash sequence for now.

I am of course monitoring the situation and will re-evaluate as necessary. Google is working on a way to update the TPM firmware without a powerwash — if that comes to pass it will significantly change the equation. And of course if I had to use any of these devices in an environment where TPM-based crypto certificates were required, I’d consider a powerwash for TPM firmware upgrade to be a mandatory prerequisite.

In the meantime, be aware of the situation, think about it, but once again, don’t panic!

–Lauren–

Solving Google’s, Facebook’s, and Twitter’s Russian (and other) Ad Problems

I’m really not in a good mood right now and I didn’t need the phone call. But someone I know who monitors right-wing loonies called yesterday to tell me about plotting going on among those morons. The highlight was their apparent discussion of ways to falsely claim that the secretive Russian ad buys on major USA social media and search firms — so much in the news right now and on the “mind” of Congress — were actually somehow orchestrated by Russian expatriate engineers and Russian-born executives now in this country. “Remember, Google co-founder Sergey Brin was born in Russia — there’s your proof!” was one item my caller reported seeing highlighted as a discussion point for fabricating these lying “false flag” conspiracy tales.

I thanked him, hung up, and went to lie down with a throbbing headache.

The realities of this situation — not just ad buys on these platforms surreptitiously financed by Putin’s minions, but also abuse of “microtargeting” ad systems by USA-based operations — are complicated enough without layering on globs of completely fabricated nonsense.

Well before the rise of online social media or search engines, physical mail advertisers and phone-based telemarketers had already become adept at using vast troves of data to ever more precisely target individuals, to sell merchandise, services, and ideas (“Vote for Proposition G!” — “Elect Harold Hill!”). There have long been firms that specialize in providing these kinds of targeted lists and associated data.

Internet-based systems supercharged these concepts with a massive dose of steroids.

Since the level of interaction granularity is so deep on major search and social media sites, the precision ad targeting opportunities become vastly greater, and potentially much more opaque to outside observers.

Historically, I believe it’s fair to assert that the increasingly complex ad systems on these sites were initially built with selling “stuff” in mind — where stuff was usually physical objects or defined services.

Over time, the fraud prevention and other protections that evolved in these systems were quite reasonably oriented toward those kinds of traditional user “conversions” — e.g., did the user click the ad and ultimately buy the product or service?

Even as political ads began to appear on these systems, those ads tended to be (but certainly were not always) comparatively transparent in terms of who was paying for them, and they were often aimed at explicit campaign fundraising or at pushing specific candidates and votes.

The game changer came when political campaigns (and yes, the Russian government) realized that these same search and social media ad systems could be leveraged not only to sell services or products, or even specific votes, but to literally disseminate ideas — where no actual conversion, no actual purchase per se, is involved at all. Getting those ads in front of as many carefully targeted users as possible is the usual goal, though just blasting an ad out willy-nilly to as many users as possible is another (generally less effective) paradigm.

And this is where we intersect the morass of fake news, fake ad buyers, fake stories, and the rest of this mess. The situation is made all the worse when you gain the technical ability to send completely different — even contradictory — contents to differently targeted users, who each only see what is “meant” for them. While traditional telemarketing and direct mail had already largely mastered this process within their own spheres of operations, it can be vastly more effective in modern online environments.
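
For the technically inclined, here is a purely hypothetical Python sketch of that core mechanism. Every name in it is invented; the point is simply how trivially an ad server can key contradictory creatives to disjoint audience segments, so that each user sees only the version “meant” for them.

    # Hypothetical illustration only -- not any real platform's system.
    CREATIVES = {
        # Contradictory messages, each visible only to its own segment.
        "segment_pro_issue_x":  "Candidate Smith has always supported X!",
        "segment_anti_issue_x": "Candidate Smith will finally stop X!",
    }

    def select_creative(user_segment):
        # Each user sees only the ad keyed to their segment; nobody
        # short of the platform itself easily sees the full set.
        return CREATIVES.get(user_segment)

    print(select_creative("segment_pro_issue_x"))
    print(select_creative("segment_anti_issue_x"))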

When simply displaying information is your immediate goal, when you’re willing to present content that’s misleading or outright lies, and when you’re willing to misrepresent your sources or even who you are, a perfect storm of evil intent is created.

To be clear, the social media and search ad platforms that have been gamed by evil are not themselves evil. Essentially, they’ve been hijacked by the Russians and by some domestic political players (studies suggest that both the right and the left have engaged in this reprehensible behavior, but the right to a much greater and more effective extent).

That these firms were slow to recognize the scope of these problems, and were initially rather naive in their understanding of these kinds of attacks, seems indisputable. 

But it’s ludicrous to suggest that these firms were knowing partners with the evil forces behind the onslaught of lying advertising that appeared via their platforms.

So where do we go from here?

Like various other difficult problems on the Web, my sense is that a combination of algorithms and human beings must be the way forward.

At the scale that these firms operate, ever-evolving algorithmic, machine-learning systems will always be required to do the heavy lifting.

But humans need a role as well, to act as the final arbiters in complex situations, and to provide “sanity checking” where required. (I discussed some aspects of this in: “Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem” – https://lauren.vortex.com/2017/10/05/vegas-horror-fixing-youtube-fake-news).

Specifically in the context of ads, one obvious and necessary step would be to bring Internet political advertising (this will need to be carefully defined) into conformance with much the same kind of formal transparency rules under which various other forms of media already operate. This does not guarantee accurate self-identification by advertisers, but would be a significant step toward accountability.

But search and social media firms will need to go further. Essentially all ads on their platforms should have maximally practical transparency regarding who is paying to display them, so that users who see these ads (and third parties trying to evaluate the same ads) can better judge their origins and the veracity of those ads’ contents.

This is particularly crucial for “idea” advertising — as I discussed above — the ads that aren’t trying to “sell” a product or service, but that are purchased to try to spread ideas, potentially including utterly false ones. This is where the vast majority of fake news, false propaganda, and outright lies have appeared — a category that Russian government trolls apparently learned how to play like a concert violin.

This means more than simply saying “Ad paid for by Pottsylvania Freedom Fighters LLC.” It means providing tools — and firms like Google, Facebook, and Twitter should be working together on at least this aspect — to make it more practical to track down fake entities: for example, to learn that the fictional group in Fresno actually operates out of the Kremlin, or is really some shady racist, alt-right group.
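
As a thought experiment, a shared machine-readable disclosure record might look something like the following Python sketch. Every field name here is invented for illustration; no platform currently publishes such a format, and that is precisely the problem.

    import json

    # Invented example record -- all field names are hypothetical.
    disclosure = {
        "ad_id": "example-12345",
        "paid_for_by": "Pottsylvania Freedom Fighters LLC",
        "payment_origin_country": "unknown",  # follow the money
        "registered_agent": "unknown",        # who actually signed?
        "targeting_criteria": ["region:Fresno", "interest:politics"],
        "topic_category": "political/idea",
    }

    print(json.dumps(disclosure, indent=2))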

On a parallel track, many of these ads should be blocked before they reach the eyeballs of platform users, and that’s where the mix of algorithms and human brains really comes into play. Facebook has recently announced that they will be manually reviewing submitted targeted ads that involve specific highly controversial topics. This seems like a good first step in theory, and it will be interesting to see how well it works in practice.

Major firms’ online ad platforms will undoubtedly need significant and in some cases fairly major changes in order to flush out — and keep out — the evil contamination of our political process that has occurred.

But as the saying goes, forewarned is forearmed. We now know the nature of the disease. The path forward toward ad platforms resistant to such malevolent manipulations — and these platforms are crucial to the availability of services on which we all depend — is becoming clearer every day.

–Lauren–

Vegas Shooting Horror: Fixing YouTube’s Continuing Fake News Problem

In the wake of the horrific mass shooting in Las Vegas last Sunday, survivors, relatives, and observers in general were additionally horrified to see disgusting, evil, fake news videos quickly trending on YouTube, some rapidly accumulating vast numbers of views.

Falling squarely into the category of lying hate speech, these videos presented preposterous and hurtful allegations, including false claims of responsibility, faked video imagery, declarations that the attack was a “false flag” conspiracy, and similar disgusting nonsense.

At a time when the world was looking for accurate information, YouTube was trending this kind of bile to the top of related search results. I’ve received emails from Google users who report YouTube pushing links to some of those trending fake videos directly to their phones as notifications.

YouTube’s scale is enormous, and the vast rivers of video being uploaded into its systems every minute mean that a reliance on automated algorithms is an absolute necessity in most cases. Public rumors now circulating suggest that Google is again trying to tune these mechanisms to help avoid pushing fake news into high trending visibility, perhaps by giving additional weight to generally authoritative news sources. This of course can present its own problems, since it might tend to exclude, for example, perfectly legitimate personal “eyewitness” videos of events that could be extremely useful if widely viewed as quickly as possible.

In the months since last March, when I posted “What Google Needs to Do About YouTube Hate Speech” (https://lauren.vortex.com/2017/03/23/what-google-needs-to-do-about-youtube-hate-speech), Google has wisely taken steps to more strictly enforce its YouTube Terms of Service, particularly with respect to monetization and search visibility of such videos.

However, it’s clear that there’s still much work for Google to do in this area, especially when it comes to trending videos (both generally and in specific search results) when major news events have occurred.

Despite Google’s admirable “machine learning” acumen, it’s difficult to see how the most serious of these situations can be appropriately handled without some human intervention.

It doesn’t take much deep thought or imagination to jot down a list of, let’s say, the top 50 controversial topics most likely to suffer relatively routine “contamination” of trending lists and results by fake news videos and other hate speech.

My own sense is that under normal circumstances, the “churn” at and near the top of some trending lists and results is relatively low. I’ve noted in past posts various instances of hate speech videos that have long lingered at the top of such lists and gathered very large view counts as a result.

I believe that the most highly ranked trending YouTube topics should be subject to ongoing human review (appropriate review intervals to be determined).

In the case of major news stories such as the Vegas massacre, related trending topics should be immediately and automatically frozen. No related changes to the high trending video results that preceded the event should be permitted in the immediate aftermath (and for some additional period as well) without human “sanity checking” and human authorization. If necessary, those trending lists and results should be immediately rolled back to remove any “fake news” videos that had quickly snuck in before “on-call” humans were notified to take charge.
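
In rough pseudocode terms, the gating logic might look something like the Python sketch below. This is my own hypothetical model, not a description of any actual YouTube system, and every name in it is invented.

    # Hypothetical sketch of "freeze on major events" -- invented names.
    frozen_snapshots = {}  # topic -> trending list as of the event

    def freeze_topic(topic, current_trending):
        # Snapshot the trending list as it stood before the event.
        frozen_snapshots[topic] = list(current_trending)

    def propose_trending_update(topic, new_list, human_approved=False):
        if topic in frozen_snapshots and not human_approved:
            # Hold all changes until an on-call human signs off,
            # serving the pre-event snapshot in the meantime.
            return frozen_snapshots[topic]
        return new_list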

If this kind of human intervention is restricted to the most serious cases, scaling issues that might otherwise seem prohibitive should be manageable. We can assume that Google systems already notify specified Googlers when hardware or software needs immediate attention.

Much the same kind of priority-based paradigm should apply to quickly bring humans into the loop when major news events could otherwise trigger rapid degeneration of trending lists and results.

–Lauren–

How to Fake a Sleep Timer on Google Home

UPDATE (October 17, 2017): Google Home, nearly a year after its initial release, finally has a real sleep timer! Some readers have speculated that this popular post (the one you’re viewing right here) somehow “shamed” Google into finally taking action. I wouldn’t go that far. But I’ll admit that it’s difficult to stop chuckling a bit right now. In any case, thanks to the Home team!

– – –

I’ve long been bitching about Google Home’s lack of a basic function that clock radios have had since at least the middle of the last century — the classic “sleep timer” for playing music until a specified time or until a specific interval has passed. I suspect my rants about this have become something of a chuckling point around Google by now.

Originally, sleep timer type commands weren’t recognized at all by GH, but eventually it started admitting that the concept at least exists.

A somewhat inconvenient but seemingly serviceable way to fake a sleep timer is now possible with Google Home. I plead guilty: it’s a hack. But here we go.

Officially, GH still responds with “Sleep timer is not yet supported” when you give commands like “Stop playing in an hour.”

BUT, a new “Night Mode” has appeared in GH firmware, at least since revision 99351 (I’m in the preview program; you may or may not have that revision yet, or it may have appeared earlier in some cases).

This new mode — found in the device settings reachable through the Home app — permits you to specify a maximum volume level during specified days and hours. While the description doesn’t say so explicitly, it turns out that this affects music streams as well as announcements (except for alarms and timers). And you can set the maximum volume for this mode to zero (or turn on the Night Mode “Do Not Disturb” setting, which appears to set the volume directly to zero).

This means that you can specify a Night Mode activation time — with volume set to minimum — when you want your fake “sleep timer” to shut down the audio. The stream will keep playing — using data of course — until the set Night Mode termination time or until you manually (e.g., by voice command) set a higher volume level (for example, in the morning). Then you can manually stop the stream if it’s still playing at that point.

Yep, a hack, but it works. And it’s the closest we’ve gotten to a real sleep timer on Google Home so far.
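
Incidentally, if you’d rather script a similar effect from a computer on your network instead of using Night Mode, the unofficial third-party “pychromecast” Python library can set a Cast device’s volume on a schedule. The sketch below is assumption-laden (the device name is made up, and the library’s exact API varies across versions), so treat it as illustrative only.

    import time
    import pychromecast  # unofficial third-party library

    result = pychromecast.get_chromecasts()
    # Recent pychromecast versions return (devices, browser); older
    # versions return a plain list of devices.
    devices = result[0] if isinstance(result, tuple) else result

    # "Bedroom speaker" is a made-up name -- substitute your device's.
    home = next(cc for cc in devices if cc.name == "Bedroom speaker")
    home.wait()           # block until the device connection is ready

    time.sleep(60 * 60)   # crude one-hour "sleep timer"
    home.set_volume(0.0)  # silence playback; the stream keeps running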

Feel free to contact me if you need more information about this.

–Lauren–