Using Google’s Daydream VR Headset for Augmented Reality and Positional Tracking Applications

UPDATE (September 21, 2018): You can experiment with these capabilities using the procedures and apps available here: https://github.com/ChristophGeske/ARCoreInsideOutTrackingGearVr

– – –

When paired with suitable higher-end Google, Samsung, or various other brands of smartphones, the Google Daydream VR headset (currently in its second generation “2017” version, which is the version I’m discussing in this post) offers an extremely inexpensive path for “virtual reality” and other related experiences and experiments (the headset sometimes goes on sale for as little as $50).

In addition to of course being able to display Daydream-compatible VR apps, when a suitable Samsung phone is used it is also possible (via an interesting sequence of actions) to use many Oculus/Samsung Gear VR headset apps with the Daydream headset as well (feel free to contact me if you’re interested in the details on this).

At first glance (no pun intended) one would assume that Daydream headsets are unsuitable for “augmented reality” VR applications that require use of the phone camera, since the Daydream flap that holds the phone in place completely blocks the back of the phone and the camera lens.

This also seemingly eliminates the possibility of Daydream headset experimentation with “inside-out” 6DOF (six degrees of freedom) positional tracking applications, which could otherwise leverage the phone’s camera and Google’s “ARCore” platform to provide these capabilities that conventionally have only been available with far more expensive VR headsets.

We might consider cutting a hole through the rather thick flap of the headset (which also includes an integral heat sink — important when the flap is closed), but that’s messy at best, risks accidentally damaging embedded NFC tags, and is dependent on the exact position of the camera lens for any specific phone.

So here’s my alternative that requires zero modification of the Daydream headset itself, and only a few simple parts to achieve — an elastic strap to hold the phone in place with the flap of the headset left open and the phone camera lens exposed for use. The completed strap is simple to install or remove from the headset at any time, since the flap can be pulled outward to create a gap for this purpose.

To view a set of photos showing the assembly sequence and the finished design, please visit:

https://lauren.vortex.com/daydream-mods

I used a piece of elastic that already had a plastic catch of suitable size on the end to hold the elastic in place under the flap hinge. Alternatively, almost anything of similar dimensions could be attached to a strip of elastic to achieve the same result.

You simply slide the completed assembly between the flap of the headset and the main part of the headset, strap in the phone, and you’re ready to go. I originally tested this using a metal washer, but decided that even wrapped in tape there was some risk of scratching the phone. A better protected metal washer would probably be fine. I printed up a custom-sized plastic washer to use instead.

The elastic holds the phone in place quite snugly, though with enough violent head motion it might be possible to force the phone to slide out from under the elastic. It should be straightforward to slip little barriers on the sides to avoid this, or simply avoid violent head motions! Also keep in mind that you don’t want to apply significant downward pressure to the open flap, since that would risk potentially breaking the plastic supports that keep it from falling further open.

Anyway, it’s really just the elastic, the washer, and several small cable ties!

OK, it’s a hack. No apologies.

If you have any questions, please let me know!

And of course, be seeing you.

–Lauren–

Google’s Lightning-Fast Response to My “Trusted Contacts” Concerns


Very recently I discussed my concerns regarding several issues related to Google’s “Trusted Contacts” service. Trusted Contacts permits users to send their current location data to other users as notifications.

The situation was triggered when I suddenly began receiving such location data notifications from somebody I’d never heard of in Africa. Address typos? Trying to attract my attention for some other reason? I dunno.

But stopping those emailed notifications was easier said than done: it turned out that there was no way to do so from a web page, and the only available mechanism to block them was the Trusted Contacts smartphone app itself, which needed to be installed and required enabling Location History, which I don’t ordinarily use. After installing the app (which I had no personal interest in installing otherwise, and which of course a person without a suitable smartphone would not have been able to do) I was ultimately able to stop the notifications. Not a good user experience.

Since then, I’ve already been contacted directly by Google’s privacy and maps teams about these issues, and they’ve now implemented the means for users to easily unsubscribe from such notifications via a web page — without the need for installing an app. Other very useful changes related to the issues that I identified are apparently in the pipeline for availability.

My great thanks to the Google teams involved for so rapidly reaching out to me regarding these matters, and especially for the ultra-quick implementation of the web-based Trusted Contacts notifications unsubscribe tool that is now available to desktop users!

–Lauren–

Google Predictably Makes a Confused Jumble Out of New YouTube, Music Offerings

An old saying suggests that the only inevitabilities are death and taxes.  When it comes to Google, there are a couple more that we can add. Google will likely always have an array of often incompatible and overlapping “chat” programs and systems — and their paid video and music offerings will be a maze of twisty passages, all different.

Google hasn’t disappointed in that respect with the manner in which word has gotten out about their latest paid content changes. The one thing that seems clear is that the brand “YouTube Red” is apparently going away. But after that, everything is about as easy to understand as hieroglyphics prior to the discovery of the Rosetta Stone. 

YouTube Premium. YouTube Music. YouTube Music Premium. And what of Google Play Music (for free, purchased, and uploaded music), which Google in their tweets (trying to calm down confused onlookers on Twitter) says is continuing for now?

I tried to figure it all out last night and got a terrible headache that forced me to quit. This morning, it’s all as clear as mud.

There are a couple of things that I’m fairly sure about. At the moment I’m in Google’s “family plan” for $15/month that gives me both YouTube Red and Google Play Music paid services for up to six accounts. I use it mainly for ad-free YouTube viewing and to be able to simultaneously stream different music to different Google Home devices without conflict, from music sources on Play Music and YouTube.

I’ve been led to believe that for existing subscribers of these services under their new names, there are no immediate price changes — though likely that’s coming down the line. It appears that obtaining the same mix of content under Google’s new plans will cost new subscribers more (though they may be able to lock in current prices for a time if they subscribe to the existing plans before the new plans launch reportedly next week).

But how much more will the new services cost going forward? Perhaps the Sphinx could figure it all out. I’ve seen so many different numbers and combinations of services now — not to mention that the future and form of Play Music still seems up in the air — that the only thing seeming certain is uncertainty itself.

I do know that for essentially the same paid mix of video and music content that I receive now from Google, I’d personally probably be willing to pay a wee bit more. But not much more, and not for a more limited set of content. In either of those eventualities, I’d be tempted to drop all of these Google paid content services entirely.

For the moment though, I think that I will sit tight for a bit, and wait for some sort of clarity to hopefully eventually shine its light on this current but predictable Google communications confusion.

Isn’t it nice to have a hobby?

–Lauren–

The Amazing 360 VR “Scoring the Last Jedi” Video


I just watched “Scoring The Last Jedi: a 360 VR Experience” – https://youtu.be/b32Q1nTgHM0 – for the first time, via a Google Daydream VR headset. It’s absolutely stunning, especially if you’re a lifetime fan of film scoring as I am.

Frankly, I was smiling like an idiot through the entire video. Put aside the flight simulators and the games for a moment — if you have the VR hardware to watch this baby (Google Cardboard will work fine too, if your smartphone has a good gyro), it demonstrates — better than anything else I’ve seen so far — what the potential is for VR to transport you almost physically to a different time and place.

Be warned, just watching this on YouTube without a VR headset is like an ant compared with a skyscraper. You really must see it in VR to properly experience this video.

As I’ve said previously, this kind of tech will ultimately either save civilization — or destroy it. It’s really that important.

Trust me on this.

–Lauren–

Warning: Dangerous Fake Emails About Google Privacy Changes

If you use much of anything Google, by now you’ve likely gotten at least one email from Google noting various privacy-related changes. They typically have the Subject:

Improvements to our Privacy Policy and Privacy Controls

and tend to arrive not from the expected simple “google.com” domain, but often from unusual-appearing Google subdomains, for example with addresses like:

privacy-noreply@www3.l.google.com

The notice also includes a bunch of links to various relevant privacy pages and/or systems at Google.
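Since legitimate notices can arrive from subdomains like the one above, one quick sanity check is whether the sending domain actually falls under google.com at all. Here’s a minimal sketch in Python (the addresses are illustrative; a domain-suffix test alone proves nothing, since From headers can be forged, but it instantly flags obvious lookalike domains):

```python
# Sketch: check whether an email address's domain is google.com itself
# or a genuine subdomain of google.com. This is only a first-pass
# heuristic -- real verification requires SPF/DKIM/DMARC checks too.

def is_google_sender(address: str) -> bool:
    """Return True only if the address's domain is google.com
    or a subdomain of google.com."""
    try:
        domain = address.rsplit("@", 1)[1].lower()
    except IndexError:
        return False  # no "@" present; not a valid address
    return domain == "google.com" or domain.endswith(".google.com")

# The unusual-looking subdomain from this post is legitimate:
print(is_google_sender("privacy-noreply@www3.l.google.com"))        # True
# A deceptive lookalike domain is not:
print(is_google_sender("privacy-noreply@google.com.evil.example"))  # False
```

Note the leading dot in the suffix test: it prevents a domain like notgoogle.com from slipping through.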

All of this is in advance of the effective date for the European Union’s “GDPR” laws. If you’re not familiar with the GDPR, it’s basically the latest hypocritical move by the EU on their relentless march toward dictating the control of personal data globally and to further their demands to become a global censorship czar — with the ability to demand the deletion of any search engine results around the world that they find inconvenient. Joseph Stalin would heartily approve.

One can assume that Google’s privacy team has been putting in yeoman’s service to meet the EU’s dictatorial demands, and it’s logical that Google decided to make other changes in their privacy ecosystem at the same time, and now is informing users about those changes.

Unfortunately, phishing crooks are apparently already taking advantage of this situation — in particular several aspects of these Google notification emails.

First, the legitimate Google privacy emails going out recently and currently are a veritable flood. It appears that Google is sending these out to virtually every email address ever associated with any Google account since perhaps the dawn of time. I’ve already received approximately 1.3E9 of them. OK, not really that many, but it FEELS like that many.

Some of these are coming in to addresses that I don’t even recognize. This morning one showed up to such a strange address that I had to go digging in my alias databases to figure out what it actually was. It turned out to be so ancient that cobwebs flew out of my screen at me when I accessed its database entry.

Seriously, these are one hell of a lot of emails, and the fact that they may come from somewhat unusual-looking Google subdomains plus include links has made them fodder for the crooks.

You can guess what’s happening. Phishing and other criminal types are sending out fraudulent emails that superficially appear to be the same as these legit Google privacy policy notification emails. Of course, some or all of the links in the phishing emails lead not to Google but to various evil traps and personal data stealing tricks.

So please, be extraordinarily careful when you receive what appear to be these privacy notices from Google. With so many real ones going out — with multiples often ending up at the same individual via various redirects and forwarding addresses — it’s easy for fake versions to slip in among the real ones, and clicking on the links in the crooked ones or opening attachments that they include can seriously ruin your day, to say the very least.

Take care, all.

–Lauren–

Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

We find ourselves at a “teachable moment” in the history of Artificial Intelligence — we should not squander this opportunity. The global controversy that erupted over the last few days regarding Google’s AI-based  “Duplex” phone calling system can be viewed as a harbinger of things to come if a holistic approach to AI is not a fundamental design factor from the ground up.

The Duplex controversy should be calmed down at least for the moment. Google has now announced that calls made by Duplex will be identified as such to the called party, exactly what I had urged in: “Calls From Google’s ‘Duplex’ System Should Include Initial Warning Announcements” (https://lauren.vortex.com/2018/05/09/calls-from-googles-duplex-system-should-include-initial-warning-announcements).

While there are some observers asserting that Duplex-type technology should be banned — or required to use a “robotic” sounding voice not easily confused with a human — I consider both of those suggestions to be extreme, unnecessary, and counterproductive. This kind of technology can have a range of positive applications. I am very much a supporter of AI research (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all).

We want the voices to be as humanlike as possible so that they’ll be as understandable as possible. Full disclosure that the calls are from AI-based Assistants is completely adequate to assuage most related concerns, though how this tech might potentially be abused by users in ways that make the robocalling flood even worse is still an open question.

How did we get here? Basically, while Google’s blog post regarding Duplex made a vague mention of transparency, their demos of the system played at Google I/O appeared to show called parties with absolutely no idea that they were talking to an AI. That’s mightily impressive as a showcase for Google’s AI advances.

But it was also immediately obvious to me — as soon as I heard those demos — that this was going to quickly blow up into a big, public mess that could have been easily avoided. Because there was bound to be an emotional, visceral, negative reaction by many observers to hearing a human “manipulated” in that manner by an AI system. It strikes at the very heart of many persons’ fears of “intelligent” machines — ranging from sci-fi fantasies to legitimate real-world concerns.

All Google needed to do to avoid this entire PR nightmare was to announce at I/O that the system would disclose itself to called parties, and play demos that included such disclosures.

Why didn’t Google do this? I don’t have any inside scoop, but I can make some fairly informed guesses.

Google still considers Duplex to be an experiment. That’s a valid point, but it only takes us so far. If you’re only showing the tech internally, that’s one thing. But as soon as you make a public demo, you’ve lit the fuse, so to speak.

Several reporters who called me about this story asked me variations of the same question: “How come Google didn’t realize how much negative reaction there would be to those demos?” 

And my answer is that while Google’s heart is almost always in the right place when it comes to technologies, sometimes they get a bit, shall we say, “overenthusiastic” — which can make it more difficult to anticipate the non-engineering aspects of a system and how it will be perceived.

This is not just a Google issue. It’s an endemic problem at many engineering-oriented firms, and as a techie myself I’ve had to push back sometimes against my own overly enthusiastic feelings about some new technologies.

Over the years, I’ve been in the position numerous times of trying to point out these kinds of problems on the Net and elsewhere. It’s a tough sell. We engineering types can be tightly focused on solving engineering problems, and we don’t like being told that perhaps we should broaden our focus a bit. And then there’s the classic Silicon Valley attitude of pushing ahead whenever possible and just apologizing later when things go wrong.

I’m concerned that these are no longer viable strategies — or at least that they’re ever more risky ones in our toxic political environment, where regulators and politicians increasingly see reason and technology as enemies to attack for their own political gains.

Firms like Google and so many others should be subjecting their product launches to more than just engineering and privacy reviews. We’ve reached a stage where some sort of what we might call “ethical” reviews are needed as well and just as routinely. There are many different ways that these could be accomplished, and I won’t detail them here right now.

But without these kinds of formal review mechanisms to consider the ethical impacts of these technologies, the probability of public and political blowback against both potentially very beneficial AI and other socially positive tech projects will be dramatically increasing.

–Lauren–

Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

UPDATE (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 10, 2018): Reacting to the controversies surrounding Duplex, Google today announced that the system will identify itself to called parties. I applaud this decision.

– – –

I’ve already received many reactions to my posting yesterday of “Google’s New ‘Duplex’ Voice System Creates Some Troublesome Questions” (https://lauren.vortex.com/2018/05/08/googles-new-duplex-voice-system-creates-some-troublesome-questions) — discussing Google’s new system capable of conducting telephone conversations that may be indistinguishable from human callers. This is fascinating work on Google’s part and a potentially very positive use of their AI assets and expertise, but as I noted also carries significant risks for confusion and potential abuse.

With no exceptions so far, the sense of these reactions has confirmed what I suspected — that people are just fine with talking to automated systems so long as they are aware of the fact that they are not talking to another person. They react viscerally and negatively to the concept of machine-based systems that have the effect (whether intended or not) of fooling them into believing that a human is at the other end of the line.  To use the vernacular: “Don’t try to con me, bro!”

Luckily, there’s a relatively simple way to fix this problem at this early stage — well before it becomes a big issue impacting many lives.

I believe that all production environment calls (essentially, calls not being made for internal test purposes) from Google’s Duplex system should be required by Google to include an initial verbal warning to the called party that they have been called by an automated system, not by a human being — the exact wording of that announcement to be determined.

I would also assert that any similar systems from other firms that can be easily confused with human callers provide similar start of call announcements.
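To make the proposal concrete, here’s a minimal sketch in Python of how such a system could guarantee that the warning is always the first thing a called party hears. All of the names here (the function, the disclosure wording, the speak callback) are hypothetical illustrations for the sake of the argument, not any real Google API:

```python
# Sketch of the proposed policy: an automated calling pipeline in which
# the disclosure announcement is structurally guaranteed to be spoken
# before any other dialogue. Names and wording are hypothetical.

DISCLOSURE = ("This call is being made by an automated assistant "
              "on behalf of a customer. It is not a human caller.")

def place_automated_call(dialogue_turns, speak):
    """Run an automated call, always speaking the disclosure first.

    dialogue_turns: the assistant's planned utterances, in order.
    speak: a callback that delivers one utterance to the called party.
    """
    speak(DISCLOSURE)  # mandatory initial warning announcement
    for turn in dialogue_turns:
        speak(turn)

# Example: collect everything the called party would hear.
heard = []
place_automated_call(["Hi, I'd like to book a table for two."], heard.append)
print(heard[0] == DISCLOSURE)  # True: the warning always comes first
```

The design point is that the disclosure lives inside the call-placing routine itself, so no individual application built on top of it can skip the announcement.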

I’d enormously prefer that all affected firms agree to provide these warnings voluntarily, but would support carefully drafted legislation or other regulatory actions to require their use if a voluntary approach proves to be insufficient.

Getting ahead of the potential pitfalls in this technology is crucial to help assure that these systems are widely accepted and able to achieve their intended beneficial results, rather than turning into a painful policy flashpoint down the line.

–Lauren–

Google’s New “Duplex” Voice System Creates Some Troublesome Questions

UPDATE (May 11, 2018): Teachable Moment: How Outrage Over Google’s AI “Duplex” Could Have Been Avoided

UPDATE (May 9, 2018): Calls From Google’s “Duplex” System Should Include Initial Warning Announcements

– – –

Google today demonstrated a project of theirs under development called “Duplex” — which permits their Google Assistant infrastructure to conduct unassisted two-way voice phone calls for tasks like making appointments. In their demos, employees at the firms being called apparently had no idea that they were talking to a machine. Very impressive, indeed.

But though I’m personally a fan of Assistant — I have several Assistant “Home” devices myself, plus Assistant on my phones — something about those demos today made me immediately uneasy. When I mentioned this on my mailing lists during the day, I was surprised by how many people responded with variations of “I’d be upset if I was conned into thinking that a real person was calling me when it was really a computer.”

And yeah, it would bug me too. I’m not completely sure why. I suspect it’s an aspect of human nature — and Google tends to occasionally have something of a blind spot in that respect. My guess is that most people don’t much mind talking to machines so long as they know that they’re talking to machines. But Duplex even throws in natural sounding “uh” utterances and such — technically perhaps sometimes to cover processing delays, but there’s no denying that humans would tend to perceive these as direct attempts to convince them that a human was at the other end of the line.

There was something else. I knew that I had seen something like this before, in an old sci-fi movie. I couldn’t find references on Google that matched, so I had to use my own unaided brain for the lookup. I finally got it.

In the 1977 film “Demon Seed” a woman is held captive in her own home by an advanced AI system created by her estranged husband, and in control of all the smart windows, doors, and other appliances of the house. Her husband isn’t present. He has no idea that this is happening. At various points in the plot, this system makes video phone calls and interacts with persons at the front door via an intercom, in all cases convincing them that they were actually speaking with the woman herself and that all was well.

Leaving aside the sci-fi melodrama, there are some interesting questions here. Is it ethical for computerized systems to interact with humans in a manner that pretends to be human? Even if it’s ethical, is it desirable? What are the possible blowbacks and undesirable possibilities?

Perhaps we can ask the same sort of question traditionally asked of undercover cops: “Are you a police officer?” In our case, we wonder how the system would respond if the called party asked Duplex “Are you a human being?”

Of course it’s relatively early in the development of this tech. But perhaps not too early to begin thinking about these questions. Google itself suggested that an appropriate level of “transparency” would be needed for these systems — I wholeheartedly agree. But what does that actually mean? Perhaps an announcement at the start of each call informing the called party that they were talking to an automated system? The various implications are decidedly nontrivial.

And we must also be thinking about how such systems could be purposely misused. Obviously not in the dramatic manner of that film I mentioned above, but in other ways that involve scams and frauds of various kinds, perhaps as a sort of evil evolution of the current robocalling scourge.

Yet technology itself is never actually good or evil — it’s how we choose to use tech that sets these parameters. And there are all sorts of ways that Duplex could do a lot of good — if it is deployed in manners that help people without trying to fool them, and if it has sufficient safeguards to minimize the risks of abusive applications.

Google has their work cut out for them on this one, because Duplex is a quintessential example of an AI system where getting it working is only half the battle. The other half is assuring to the greatest degree possible that it’s only used for good purposes, and not in evil ways.

–Lauren–

Phishing Risks in Gmail’s New “Confidential Mode”


In my previous post — “New Gmail Features That May Be ILLEGAL for You to Use!” (https://lauren.vortex.com/2018/05/05/new-gmail-features-that-may-be-illegal-for-you-to-use), I noted that I’m very pleased overall with Google’s new Gmail, but also explained how using specific features of new Gmail — in particular “confidential mode” — may have negative legal implications for some users, especially those with legally-mandated data retention requirements.

But there’s another potential issue with “confidential mode” that could be a much broader risk to many more persons — the possibility that this mode will be leveraged by crooks for a new wave of phishing emails designed to spread malware and steal personal information.

The problem arises since non-Gmail users cannot directly receive Gmail confidential mode messages. Instead — as reported in the article that I linked to in my previous post — when a Gmail user wants to send a non-Gmail user such a message, the non-Gmail user is sent a link that, when clicked, takes them to Google’s servers where they can read the confidential mode message in their browser.

The potential risks for any service that operates in this way are obvious.

Those of us working on Internet security and privacy have literally spent many years attempting to train users to avoid clicking on “to read the message, click here” links in emails that they receive. Criminals have simply become too adept at creating fraudulent emails that lead to phishing and malware sites.

Our efforts have had only limited success. Millions of busy users click on such fake links even in messages that are not particularly “well crafted” — messages that seem “obviously” fake to those of us who have long been dealing with such issues.

Any email system that depends on users clicking on emailed links to access remotely hosted email messages is likely to be subject to such criminal abuses. No matter how much diligence Google employs to try to avoid users being fooled by phishing messages masquerading as links to Gmail confidential mode messages, we can be reasonably sure that this could ultimately represent a significant new vector of attack for phishing operations that will successfully ensnare large numbers of Internet users over time.
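One modest defensive habit worth encouraging is inspecting where a “click here” link actually points before trusting it. A brief sketch in Python follows; the URLs below are invented examples for illustration, not real Gmail confidential mode links:

```python
# Sketch: extract the hostname a URL actually resolves to, since the
# visible link text in a phishing email can say anything at all.
from urllib.parse import urlparse

def link_host(url: str) -> str:
    """Return the lowercase hostname a URL actually points to,
    or an empty string if none can be parsed."""
    return (urlparse(url).hostname or "").lower()

# The link text may *say* Google, but the URL reveals the true host:
print(link_host("https://mail.google.com/confidential/abc123"))
# mail.google.com
print(link_host("https://mail.google.com.read-message.example/abc123"))
# mail.google.com.read-message.example
```

The second example is the classic trick: the genuine-looking hostname is merely a subdomain prefix of an attacker-controlled domain, which is exactly what makes these links so effective against hurried readers.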

Given the relatively poor success that educational and message authentication efforts have had to date in these respects, there are no obvious solutions immediately apparent if Google operates their Gmail confidential mode in the manner described.

–Lauren–

New Gmail Features That May Be ILLEGAL for You to Use!

UPDATE (May 7, 2018): Phishing Risks in Gmail’s New “Confidential Mode”

– – –

I’m a fan of Google’s new Gmail — after switching over I’ve never felt the need to switch back to classic Gmail, not even once ( “My Initial Impressions of Google’s New Gmail User Interface” – https://lauren.vortex.com/2018/04/25/my-initial-impressions-of-googles-new-gmail-user-interface).

An aspect of the new Gmail that has been getting a lot of attention relates to the various features associated with Gmail’s new “confidential” mode, which includes mechanisms that can enable some user-specified limits (with various provisos regarding their effectiveness in assorted scenarios) on how Gmail messages are handled locally and by recipients, including a kind of “email expiration” system, among other features. A good summary of confidential mode functions is at: https://mashable.com/2018/04/27/new-gmail-expiring-emails-confidential-mode on the “Mashable” site.

The features of confidential mode come with various positive attributes and a number of limitations, but for some users may also be something else — they may be ILLEGAL for them to actually use!

It doesn’t take rocket science to see why.

Many people who use Gmail (and/or correspond with persons who use Gmail) work in fields or for organizations that have legally-mandated data retention requirements — and these often include all email communications.

And it’s important to note that we’re not talking only about the Gmail users themselves. A non-Gmail user receiving a confidential mode email from a Gmail user would apparently only receive a link to the actual message on Google’s servers — and that message could vanish later depending on the Gmail user’s settings (the recipients would have no control over this). Yeah, users could always screenshot or photograph these messages, but the legal implications of suddenly vanishing messages are not necessarily particularly subtle ones.

While Gmail users in most instances have always had the ability to delete their own copies of emails, the ability to suddenly withdraw access to a sent email by a recipient on both Gmail or non-Gmail systems will be new to most users, and the complex ramifications of using such features may not be obvious to many of them.

It is imperative that both users of regular Gmail and Google’s paid G Suite offerings are adequately warned by Google regarding the potential legal pitfalls of using confidential mode. I’d suggest “in your face” warnings at least the first time that users attempt to use associated features.

Many independent professionals who may be subject to legal restrictions in these contexts use regular Gmail. The situation is even more complicated with G Suite. Its users include both businesses large and small, and all manner of other organizations including significant numbers of government users — any of whom may be using their own domain names for G Suite emails.

It will be critical that G Suite users and account administrators aren’t only directly warned about the implications of using these confidential mode features, but that administrators also have the means to restrict the use of these features by any or all of the users associated with those accounts.

This is definitely an area where proactive educational and technical steps by Google now could potentially avoid a lot of hassle — or much worse — for a significant number of their users down the line.

–Lauren–

Confused Google Announcements Regarding Retirement of Google+ Apps?


I’m a heavy user of Google+. I’ve never been a Facebook user, and I’ve been on Google+ since the first day of beta availability back in 2011. So naturally many G+ users contact me when they have relevant concerns.

I’m now getting emails and other messages from concerned, alarmed, and possibly confused G+ users — and I’m seeing public posts and articles that may be similarly confused — about a series of emailed warnings that Google is sending out about G+ apps. I’ve also received those emailed warnings from Google directly. There’s a lot of chatter that these messages are signaling a step toward killing off G+ (which given recent updates to G+ overall, seems unlikely to me at this time). The relevant portion of the emails is worded like this:

Effective May 20, 2018, Google+ for Android version 7.5 and older and Google+ for iOS version 5.0 and older will no longer work. Note that if you chose to have the photo backup setting enabled in Google+, your photos will no longer be backed up.

Many people are assuming that this means the G+ app will stop working for Android versions 7.5 and below. But this doesn’t make any sense to me. I’ve never even seen an Android version 7.5 system. And as of February this year, the breakdown of Android usage reportedly shows around 81.2% of devices running Android versions 5.0 through 7.1.x.

I’ve personally been assuming that these messages actually are referring to old versions of the G+ app, not to old Android versions, but the wording is definitely unclear. I can’t believe that Google actually would kill the G+ app for the vast majority of Android users.

There’s definitely growing public confusion about this, and clarification from Google — one way or another — would be greatly appreciated.

–Lauren–

Warning: New European Privacy Law Has Become a Jackpot for Internet Crooks


A quick but important warning. As you may know, a new misguided European Union privacy-related regulation — the GDPR (General Data Protection Regulation) — is coming into force.

This has prompted many websites to send out confusing notices about new GDPR rules, demands that users click through URLs to accept the new regulations, and a range of other complicated and confusing notifications.

I’m now receiving reports of highly deceptive scam phishing emails that falsely purport to be these kinds of notifications, often including URLs “requiring” you to “accept” these regulations or make other changes on your accounts to supposedly avoid disruption of services.

The crooks have had plenty of time to prepare for this, so many of these scamming emails are extremely well designed. It can be nearly impossible to determine if some of them are legitimate or not, even with careful inspection of the email headers available to most users.

I strongly recommend that unless you are certain an email asking or demanding that you take particular actions relating to new privacy rules is legitimate, you do not click on any contained links, nor open any included attachments.

If the message appears to have come from an entity with whom you regularly deal — and remember that just because an email claims to come from a firm or other organization that you happen to work with, doesn’t mean that the message is actually from them! — you should consider contacting that firm or organization for verification via a known email address or known URL from your own bookmarks — or a URL that you type in yourself. Or call them on the phone, using their public phone numbers (not a phone number from the email). Again, do not click any links in those messages unless you are entirely convinced that they are not fakes!

What a shame that European regulators apparently didn’t feel it necessary to notify users about how easily the GDPR could become an entirely new vector for crime.

–Lauren–

How AI Could Save Us All

“And pray that there’s intelligent life somewhere up in space, ’cause there’s bugger all down here on Earth.” — “The Galaxy Song” (“Monty Python’s The Meaning of Life” – 1983)

– – –

It’s very popular to trash “artificial intelligence” (AI) these days.

While reasoned warnings regarding how AI-based systems could be abused (and/or generate inappropriately “biased” decisions) are appropriate, various folks in the public eye — some of whom really should know better — have been proclaiming nightmare scenarios of AI relegating us mere humans to the status of pets, slaves, or worse — perhaps “batteries” as in “The Matrix” (1999). Or maybe just fertilizer for decorative displays.

There’s certainly a long history of cinematic representations of “intelligent” systems run amok. Earlier than “The Matrix” we saw a computer lethally hijack a space mission (“HAL” in “2001: A Space Odyssey” – 1968); another computer imprison, rape, and impregnate a woman (“Proteus” in “Demon Seed” – 1977); and a pair of computing systems take over the world (“Colossus” and “Guardian” in “Colossus: The Forbin Project” – 1970). And of course there’s the scorched Earth world of “Skynet” in “The Terminator” (1984), and hybrid threats that may be even scarier, like the “Borg” from the “Star Trek” universe. And more, many more.

All of these cultural references have a real impact on how we think about AI today. We’re predisposed to be fearful of systems that we believe might be “smarter” in some ways than we are.

Monty Python may have been partly correct all along. In a world where a moronic creature like Donald Trump can be elected to the most powerful role on the planet, we should probably be seeking out intelligence to augment our own, wherever we can find it.

Seriously, since it could be a long, long time (if ever) before we hear from interstellar civilizations (and Stephen Hawking’s prediction that this might be a seriously losing proposition for humanity could indeed be accurate), we need to concentrate on intelligence augmentation systems that we can build ourselves.

The word “augmentation” is crucial here. The human brain is a marvel in so many creative and imaginative ways. But it’s easily overwhelmed by data, subject to disruptive distractions, and is ill-suited to solving critical planetary-scale problems on its own.

The key to a happy coexistence between humans and AI systems — even advanced AI systems — is to keep in sharp focus where we excel and where the AI systems that we develop can be most effectively and successfully deployed.

Two ways that we can get into trouble are by trying to use AI and “expert systems” as shortcuts to solve problems for which they aren’t actually suited, or by assuming that the data that we provide to these systems is always accurate and fair, when in some cases it’s actually biased and unfair (we’ve seen this problem already occur in some systems that attempt to predict criminal recidivism, for example). The computing adage “garbage in, garbage out” is still true today, just as it was in the ancient era of punched card computing.

Obviously, we don’t want to screw this up. There are real challenges and significant (but fascinating!) issues to be solved in the ongoing development and deployment of AI systems going forward, and in helping non-technical persons better understand what these systems are really about and how they could actually improve their lives.

And to the extent that we can concentrate on the real world of AI — and less on dramatic “doom and gloom” scenarios straight out of the movies — I believe that we’ll all be better off.

–Lauren–

Where I Stand on the Proposed Merger of T-Mobile and Sprint

UPDATED (May 26, 2018): REVOKING MY SUPPORT FOR THIS MERGER: With word yesterday that T-Mobile is paying duplicitous, lying fascists like former Trump campaign manager and current confidant Corey Lewandowski — and other members of the same consulting firm — for “how to kiss up to sociopathic, racist Donald Trump” advice, I hereby revoke my support for this merger. On its own terms, in an isolated universe, it makes sense. But if the cost of success for the merger is this kind of disgusting kowtowing and feeding of the beast, then the price is far too high. T-Mobile CEO John Legere has one hell of a lot to answer for on this one. ANYTHING for the merger, right John? The road to hell is paved with attitudes like yours.

– – –

Some proposed mergers are disasters for consumers. Back in 2011, AT&T tried to merge with T-Mobile, sending a chill down the spine of longtime T-Mobile subscribers like me (I’ve been with T-Mobile since the first day of Google Android availability with the original “G1” phone — now nearly 10 years ago). Twice before, I’d been unwillingly dragged into AT&T mobile services by mergers.

The proposed merger of AT&T and T-Mobile was abandoned when the Obama Justice Department wisely filed to block it.

In the years since, T-Mobile and Sprint have had an on-again, off-again courtship regarding a potential merger. Today they announced a definitive agreement to actually merge. Even under Trump, regulatory approval of the merger (which could take at least a year) is by no means guaranteed, since it would reduce the number of major mobile carriers in the USA from four to three.

I am, however, fairly sanguine about this merger proposal based on the descriptions I’ve seen this morning. The combined company will be firmly under T-Mobile’s control, with T-Mobile’s current CEO and COO retaining their positions, and the combined entity reportedly named — you guessed it — T-Mobile. Magenta for the win!

And frankly, at this stage of the game, I see this combined firm as being the most effective practical competition against the serious telecom bullies like AT&T, Verizon, Comcast, and Charter.

The devil is always in the details, but at least the potential for this merger ultimately being significantly consumer-positive seems to be in the cards.

We shall see.

–Lauren–

My Initial Impressions of Google’s New Gmail User Interface

Today, Google launched general access to its first significant Gmail user interface (UI) redesign in many years. It’s rolling out gradually — when it hits your account you’ll see a “Try the new Gmail” choice under the settings (“gear”) icon on the upper right of the page (you can also revert to the “classic” interface for now, via the same menu).

But you probably won’t need to revert. Google clearly didn’t want to screw up Gmail, and my initial impression is that they’ve succeeded by avoiding radical changes in the UI. I’ll bet that some casual Gmail users might not even immediately notice the differences.

This will all come as a great relief to many Gmail users, who have watched with increasing alarm the recent march of Google UIs toward low contrast designs that are difficult for many persons to read (e.g. as discussed in “Does Google Hate Old People?” – https://lauren.vortex.com/2017/02/06/does-google-hate-old-people).

I certainly won’t take credit for Gmail not falling into that kind of design trap, but perhaps Google has indeed been taking some previously stated concerns to heart.

The new Gmail UI is what we could call a “minimally disruptive” redesign of the now “classic” version. The overall design is not altered in major respects. So far I haven’t found any notable missing features, options, or settings. My impression is that the back end systems serving Gmail are largely unchanged. Additionally, there are a number of new features (some of which are familiar in design from Google’s “Inbox” email interface) that are now surfaced in the new Gmail.

Crucially, overall readability and usability (including contrast, font choices, UI selection elements, etc.) seem so close to classic Gmail (at least in my limited testing so far) as to make any differences essentially inconsequential. And it’s still possible to select a dark theme from settings if you wish, which results in even higher contrast.

So overall, my sense is that Google has done an excellent job with this interface refresh, and I’m hoping that the philosophy leading to this design — particularly in terms of user interface readability and ease of use — will carry over to other Google products and services going forward.

My kudos to the Gmail team!

–Lauren–

Google Reportedly Plans New Protections for YouTube Kids — Let’s Get Them Right!


Reports are circulating that Google plans to implement some important new protections for their YouTube Kids offering, in particular providing a means for parents to ensure that their children only see videos that have been human-curated and/or are from appropriately trusted YouTube channels.

The goal would be to avoid children being exposed to the kinds of sick garbage that currently still manages to seep into the YouTube Kids recommendation engine’s suggested videos.

I have been calling for exactly this kind of approach for YouTube Kids, and I applaud such efforts by the YouTube team.

However, if some details of these reports are accurate, there are a couple of important provisos that I must mention.

First, the “curated/trusted” YouTube Kids video mode will supposedly be an opt-in feature — needing to be explicitly enabled (e.g., by parents).

By default, children would reportedly continue to see the algorithmic recommendations complete with the creepy contamination.

Since we’re dealing with kids viewing videos, not adults, this new human-curated mode should absolutely be the default, which could optionally be disabled by parents if they really wanted their children to see the full algorithmic flow.

The calculus when determining appropriate defaults is entirely different for children, and depending on busy parents to pay attention to these kinds of settings is problematic at best, so this is a situation where the most ethical and responsible action on Google’s part would be for the “safest” settings to prevail as defaults.

Secondly, it’s crucial in the long run that the same YouTube Kids features and content options are ultimately available not only as mobile apps but on ordinary browser platforms as well.  Most children don’t limit their video viewing only to phones!

All that said, if Google is indeed moving ahead toward human-curated and approved YouTube Kids video suggestions, this is a notably positive step, and would be an important acknowledgment by Google that in some cases, algorithms alone are insufficient to adequately deal with our complex online content ecosystems.

–Lauren–

How YouTube’s Ad Restrictions Have Gone Very Wrong

In the wake of the horrific shooting attack at YouTube headquarters, global attention has been drawn to Google’s content and monetization policies for YT, since the shooter apparently had a number of public grievances against YT in these regards (“Tragically, the YouTube Shooting Attack Is Not a Complete Surprise” – https://lauren.vortex.com/2018/04/04/tragically-the-youtube-shooting-attack-is-not-a-complete-surprise).

Part of what makes this all confusing is that Google’s recent series of YT policy changes — popularly called “Adpocalypse” — has included a number of different elements, some of which appear to have been much more appropriate than others.

The result is that many YT users who’ve been playing by the rules have been unfairly tossed into the dumpster along with the real abusers.

For example, I support Google’s moves to crack down (via demonetization and/or removal) on YT videos/channels that contain hate speech or other content that is clearly in violation of YT Terms of Service or Community Standards. In fact, I feel that Google has not gone far enough in some respects to deal with specific categories of violating, potentially dangerous content (“On YouTube, What Potentially Deadly Challenge Will Be Next?” – https://lauren.vortex.com/2018/04/02/on-youtube-what-potentially-deadly-challenge-will-be-next). I’ve also proposed techniques to help quickly detect truly abusive content (“Solving YouTube’s Abusive Content Problems — via Crowdsourcing” – https://lauren.vortex.com/2018/03/11/solving-youtubes-abusive-content-problems-via-crowdsourcing).

But along the way, Google made the misguided decision to drastically curtail which YT users could run ads to monetize their videos, essentially slapping “the little guys” in their faces. These users’ ads never brought in much money by Google standards, but every dollar counts to ordinary folks like you and me!

Why did Google do this? I suspect that they felt this to be a convenient time to shed the large number of small uploaders who didn’t bring in much revenue to Google. And conveniently, Google could argue (largely disingenuously, I believe)  that this was actually part of their broader anti-abuse efforts as well.

One can understand why Google would prefer not to bother evaluating small YT channels for terms compliance. But the reality is that the worst abusers often have among the largest YT followings — sometimes with millions of subscribers and/or large numbers of video views. 

By virtue of these very non-Googley and significantly draconian monetization restrictions applied to small, completely non-abusing YT channels and users, vast numbers of innocents are being condemned as if they were guilty. 

–Lauren–

Tragically, the YouTube Shooting Attack Is Not a Complete Surprise

I didn’t get much sleep last night. For many years I’ve feared the kind of attack that occurred at YouTube headquarters yesterday. Employees severely injured — the shooter dead by her own hand.

I’ve spent time looking over the attacker’s online materials — her website and available videos.

What’s immediately clear is that she had smoldering grievances against Google’s YouTube that exploded yesterday in a rampage of innocent blood and her own self-destruction. Her father apparently knew that she “hated YouTube” — and had warned police that she might be headed there.

Google will no doubt bolster its physical security in the wake of this tragedy, but of course that merely pushes the zone of risk out to the perimeters of their secure areas.

What haunts me about the shooter’s online statements is that, one way or another, I’ve seen or heard so much like them, so many times before.

For many years, Google and YouTube users have come to me in desperation when they felt that their problems or grievances were being ignored by Google. If you’ve been reading my posts for any significant length of time, you’ve seen me discussing these matters on numerous occasions.

The common thread in the stories that I hear from these users — usually by email, sometimes by phone — is a feeling of frustration, of desperation, of an inability to communicate with Google — to get what they consider to be at least a “fair shake” from the firm when they have Google-related problems.

I’ve not infrequently pondered the possibility that one day, an upset, desperate Google user would become violent, potentially with deadly results especially given the flood of easily available firearms in this country.

YouTube related issues have typically been a big chunk of these user concerns brought to me, as have been Google account access issues generally. I’ve tried to help these users when I could, e.g., please see: “The Google Account  ‘Please Help Me!’ Flood” – https://lauren.vortex.com/2017/09/12/the-google-account-please-help-me-flood – and many other posts.

For well over a decade (most recently late last month) — both publicly and directly to Google — I’ve repeatedly urged the creation of Google “ombudsman” or similar roles, to provide more empowered escalation and internal policy analysis paths, and to help provide an “escape valve” for better dealing with the more serious user issues that arise. Just a couple of my related posts include:

“Why Big Tech Needs Big Ethics — Right Now!” – https://lauren.vortex.com/2018/03/24/why-big-tech-needs-big-ethics-right-now

“‘Google Needs an Ombudsman’ Posts from 2009 — Still Relevant Today” – https://lauren.vortex.com/2017/04/03/google-needs-an-ombudsman-posts-from-2009-still-relevant-today

Google has always rejected such calls for ombudsmen or similar roles. Google has said that ombudsmen might have too much power (this definitely need not be the case — these roles can be defined in a wide variety of ways). Google has insisted that ombudsman concepts couldn’t scale adequately to their ecosystem (yet other firms with very large numbers of customers have managed to employ these concepts successfully for many decades).

The reality is that Google — filled to the brim with some of the smartest and most capable people on the planet — COULD make this work if they were willing to devote sufficient time and resources to structuring such roles appropriately.

Google’s communications with their users — along with related support and policy issues — have always collectively been Google’s Achilles’ heel.

While it would be reasonable to assume that the number of aggrieved Google users inclined to physically attack Google and Googlers is extremely limited, the fact remains that desperate people driven over the edge can be expected to sometimes take desperate actions. This is not by any means to excuse such horrific actions — but these are the facts.

Google and its services have become integral parts of people’s lives — in some cases more so than even their own families.

Google turns 20 this year. It’s time for Google to truly take responsibility for these issues and to grow up.

–Lauren–

On YouTube, What Potentially Deadly Challenge Will Be Next?

Sometimes I just can’t figure out Google. A great company. Great people. But on some issues they’re just so incredibly, even dangerously “tone deaf” to serious risks that persist on their platforms.

You’ve probably already gotten tired of my discussions regarding the dangerous prank-dare-challenge videos on YouTube, e.g. “A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly” – https://lauren.vortex.com/2017/12/17/youtube-prank-dare-vast-disgusting-potentially-deadly — and related posts.

So as if the dangerous “laxative prank” and Tide Pods Challenge and an array of other nightmarish YouTube-based efforts to achieve social media fame weren’t bad enough, we now are seeing a resurgence of the even more potentially disastrous “condom snorting” videos. If you haven’t heard of this one before, you probably shouldn’t investigate the topic shortly after eating.

The usual monsters of the Internet are already proclaiming this to be much ado about nothing, pointing out that it’s not a new phenomenon, even though it has suddenly achieved viral visibility again thanks mainly to YouTube. The usual sick statements like “let natural selection take its course” and “Darwin Award at work!” are also spewing from these child-hating trolls. 

I wonder how many impressionable youths seduced into sickness or even death by these categories of videos would be viewed as too many by these sick minds?  Five? Five hundred? 

NO! One is too many!

Because by and large, these videos shouldn’t exist on YouTube at all. 

But the trolls are right about one thing — many of these videos have been on YouTube for quite some time, gradually accumulating large numbers of views along the way. And when they suddenly “pop” and go viral, they’re like landmines that have finally exploded.

These videos clearly and absolutely violate Google’s YouTube Terms of Service by demonstrating unquestionably dangerous acts.

And they’re usually trivial to find via simple YouTube searches — often in vast quantities using obvious keywords.  Since I can find them — since kids can find them — Google could certainly find them, if it really wanted to.

Google has made significant strides toward demonetizing or eliminating various forms of hate speech from YouTube. But for some reason, they seem to continue dragging their collective feet in the category of dangerous challenge, dare, and prank videos.

Google can fix this. Google MUST fix this. There simply aren’t valid excuses for this continuing, dangerous pestilence that is contaminating YouTube — one of my favorite sites on the Net — and in the process providing governments around the planet with more excuses to push knee-jerk censorship that will harm us all.

C’mon Google, please get your ass in gear, and get that crap off of YouTube. 

No more excuses. Enough is enough.

–Lauren–

EU to Domain Owners in the UK: Drop Dead!


If there were ever any remaining questions about the cruel pettiness of European Union bureaucrats and politicians — as if their use of extortionist tactics against firms like Google, and the implementation of horrific global censorship regimes like “Right To Be Forgotten” weren’t enough — the latest chapter in EU infamy should eliminate any lingering doubts.

The European Commission has now issued an edict that the over 300,000 UK-based businesses and other UK owners of dot-EU (.eu) domain names will be kicked off of their domains — and in many cases have their websites and businesses wrecked as a result — due to Brexit.

One might readily acknowledge that the UK’s pursuit of Brexit was a historically daft and self-destructive idea, but it took the EU to treat UK businesses caught in the middle as if they were victims from one of the torture-porn “Saw” movies. The more blood and pain the merrier, right gents?

The EU pronouncement is loaded with legalistic mumbo-jumbo, but is being widely interpreted as not only saying that UK entities can’t register or even renew existing dot-EU domains after about a year from now, but that perhaps even existing registrations might be terminated as of that date as well — apparently with no right of appeal.

There’s talk that there might be a small chance of negotiations to avert some of this. But the mere fact that the EC would issue such a statement — completely at odds with the way that domain transition issues have been routinely handled on the Internet for decades — gives us vast insight into the cosmic train wreck represented by increased European Union influence over Internet policies and operations.

Just when you begin to think that the EU can’t come up with an even worse way of wrecking the Net, they fool us once again with ever more awful new lows.

Congratulations!

–Lauren–