Explaining YouTube’s VERY Cool New Aspect Ratio Changes

YouTube quietly made a very cool and rather major improvement to its desktop video player today. I noticed it immediately this morning, and I now have confirmation both from testing with my own YT videos (for which I know all the native metadata) and via informal statements from Google.

YouTube is now adjusting the YT player size to match videos’ native aspect ratios. This is a big deal, and very much welcome.

Despite the fact that I’m publicly critical from time to time regarding various elements of YouTube content-related policies, this does not detract from the fact that I’m one of YT’s biggest fans. I spend a lot of time in YT, and I consider it to be a news, information, education, and entertainment wonder of the world. Its scale is staggeringly large — so we can’t reasonably expect perfection — and frankly I don’t even want to think about life without YT.

Excuse me while I put on my video engineering hat for a moment …

One of the more complicated facets of video — played out continuously on YouTube — is aspect ratios. A modern high definition TV (HDTV) video is normally displayed at a 16 (horizontal) by 9 (vertical) aspect ratio, significantly wider than it is high. The older standard definition TV ratio is 4:3, just a bit wider than it is high, and visually very close to the traditional 35mm film aspect ratio of 3:2.

The typical techniques for displaying video of one aspect ratio on a fixed-ratio display system have been either to modify the actual contents of the visible video frames themselves, or to fit more of each original frame into the display area by reducing its overall relative size and providing “fillers” for any remaining areas of the display.

The “modification of contents” technique usually has the worst results. Techniques such as “pan and scan” were traditionally used to show only portions of widescreen movie frames on standard 4:3 TVs, simply cutting off much of the action. Ugh. 

But eventually, especially as 4:3 television screens became larger in many homes, the much superior “letterboxing” technique came into play, displaying black bars at the top and bottom of the screen to permit all (or at least most) of each widescreen film frame to be displayed on a 4:3 cathode ray tube. In the early days of this process, it was common to see squiggles and such in those bars: networks and local stations were concerned that viewers would assume something was wrong with their televisions if empty black bars appeared without some sort of explanation. Sometimes broadcasters would even provide such explanations at the start of the film — sometimes they still do, even with HDTV! Very widescreen films shown on 16:9 displays today may still use letterboxing to permit viewing more of a frame that would otherwise exceed the 16:9 ratio.

When 16:9 HDTV arrived, the opposite of the standard definition TV problem appeared. Now to properly display a traditional 4:3 standard TV image, you needed to put black bars on the right and left side of the screen — “pillarboxing” is the name usually given to this technique, and it’s widely used on many broadcast, satellite, streaming, and other video channels. It is in fact by far the preferred technique to display 4:3 content on a fixed aspect ratio 16:9 physical display.
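To make the geometry concrete, here’s a minimal sketch of the letterbox/pillarbox arithmetic (the function and variable names are my own illustrative choices, not any player’s actual code): scale the video to fit the display while preserving its aspect ratio, and whatever display area remains becomes the black bars.

```python
from fractions import Fraction

def fit_video(display_w, display_h, video_w, video_h):
    """Scale a video to fit a fixed-size display while preserving the
    video's aspect ratio; report the bar technique required and the
    thickness of each black bar in pixels."""
    display_ar = Fraction(display_w, display_h)
    video_ar = Fraction(video_w, video_h)
    if video_ar > display_ar:
        # Video is wider than the display: scale to full width,
        # leaving horizontal bars top and bottom (letterboxing).
        scaled_h = round(display_w / video_ar)
        return {"mode": "letterbox", "video": (display_w, scaled_h),
                "bar": (display_h - scaled_h) // 2}
    elif video_ar < display_ar:
        # Video is narrower than the display: scale to full height,
        # leaving vertical bars left and right (pillarboxing).
        scaled_w = round(display_h * video_ar)
        return {"mode": "pillarbox", "video": (scaled_w, display_h),
                "bar": (display_w - scaled_w) // 2}
    return {"mode": "exact", "video": (display_w, display_h), "bar": 0}
```

For example, a 4:3 video shown on a 1920x1080 (16:9) display scales to 1440x1080 with 240-pixel pillars on each side: that’s the screen real estate a fixed 16:9 player gives up to the black bars.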

After YouTube switched from a 4:3 video player to their standard 16:9 player years ago, you started seeing some YT uploaders zooming in on 4:3 videos to make them “fake” 16:9 videos before uploading, to fill the 16:9 player — resulting in grainy and noisy images, with significant portions of the original video chopped off. The same thing is done by some TV broadcasters and other video channels, documentary creators, and others who have this uncontrollable urge to fill the entire screen, no matter what! These drive me nuts.

Up until today, YouTube handled the display of native 4:3 videos by using the pillarbox technique within their 16:9 player. Completely reasonable, but of necessity wasting significant areas of the screen taken up by the black pillarbox bars.

This all changed this morning. The YouTube player now adapts to the native aspect ratio of the video being played, instead of always being a fixed 16:9. This means, for example, that a native 4:3 video now displays in a 4:3 player, with no pillarboxing required — and with significant viewable screen real estate recovered to actually display video rather than pillarbox bars. In effect, these videos now display as they did in the early days of YT, fully filling the video display area, as was the case before YT switched to the fixed aspect ratio 16:9 player. Excellent!

The same goes for other aspect ratios, in particular the 16:9 used by most recent videos, which will continue to display in a 16:9 player.

One aspect (no pun intended) to keep in mind: the player apparently adapts to the aspect ratio of the video as uploaded. So if a video was uploaded as 4:3, you’ll get a 4:3 player. But if (for example) a 4:3 video was converted to 16:9 by pillarboxing before being uploaded, YouTube’s encoding pipeline will consider it to be a native 16:9 video and display it in a 16:9 player with the black pillar bars intact. Bottom line: If you have 4:3 material to upload, don’t change its aspect ratio, just upload it as native 4:3, pretty please!

Since I watch a fair bit of older videos on YouTube that tend to be in 4:3 aspect ratio, the changes YT made today are great for me. But having the YT player adjust to various native aspect ratios is going to be super for all YT users in the long run. It may take a little time for you to adapt to seeing the player size and shape vary from video to video, but you’ll get used to it. And trust me, you’ll come to love it.

Great work by YouTube. My thanks to the entire YouTube team!

–Lauren–

Uber and Lyft Must Immediately Ban “Peeping Tom” Drivers

In response to a news story revealing that an Uber driver has been (usually surreptitiously) live streaming video and most audio of his passengers without their knowledge or explicit consent — exposing them to ridicule and potentially much worse by his streaming audience — both Uber and Lyft have reportedly argued simply that the practice is legal in that particular (one-party recording consent) state.

That kind of response is of course absolutely unacceptable and beneath contempt, demonstrating the utter lack of ethics of these ride sharing firms. They argue that this doesn’t even violate any of their driver terms.

That needs to change — IMMEDIATELY!

Ride sharing firms must ban their drivers from such behavior, and violators should be immediately excised from these platforms.

That a vile behavior is legal does not mean that these firms — entrusted with the lives of millions of passengers — must permit drivers to engage in such activities. In fact, these firms already lay out specific “don’t do this!” rules that can prohibit a variety of legal activities by drivers — for the protection of their riders.

If these firms do not act immediately to end such practices by their drivers, they risk not only massive loss of rider trust, but are just begging for this kind of activity to eventually result in a horrific incident involving their passengers — perhaps physical abuse because identity information often leaks on these streams — at the hands of unscrupulous members of the live stream viewing public.

If these firms refuse to ban these practices, their rights to operate in any states where such behavior continues to occur must be withdrawn, and if necessary, legislation passed to force these firms to do the right thing and protect their riders from such abuses.

–Lauren–

EU’s Latest Massive Fine Against Google Will Hurt Europeans Most of All

Have you ever heard anyone seriously say: “Man, there just aren’t enough shopping choices on the Net!” or, “I’d really like my smartphone to be more complicated and less secure!” or … well, you get the idea — nobody actually means stuff like that.

But sadly, this means nothing to the politicians and bureaucrats of the European Union, who are constantly trying to enrich themselves with massive fines against firms like Google, while simultaneously making Internet life ever more “mommy state” government micromanaged for Europeans.

The latest giant fine (which Google quite righteously will appeal) announced by the EU extortion machine is five billion dollars, for claimed offenses by Google related to the Android operating system, all actually aspects of Android that are designed to help users and to provide a secure and thriving foundation for a wide range of applications and user choice.

In fact, in the final analysis, the changes in Android that the EU is demanding would result in much more complicated phones, less secure phones, and ultimately LESS choice for users resulting from alterations that will make life much more difficult (and expensive!) for application developers and users alike.

Why do the EU politicos keep behaving as if they want to destroy the Internet?

It’s because in significant ways that’s exactly what they have in mind. They don’t like an Internet that the government doesn’t tightly control, where they don’t dictate all aspects of how consumers interact with the Net and what these users are permitted to see and do. Even now, they’re still pushing horrific legislation to create a Chinese-style firewall to vastly limit what kind of content Europeans can upload to the Net, and to destroy businesses that depend on free inbound linking. And these hypocritical EU officials are desperately trying to prop up failing businesses whose business models are stuck in the 20th (or even 19th) centuries, while passing all the costs on to ordinary Europeans — who by and large seem to be quite happy with how the Internet is already working.

And of course, there’s the money. Need more money? Hell, the EU always needs more money. Gin up another set of fake violations against Google, then show up in Mountain View with sticky fingers extended for another multi-billion dollar check!

The EU has become a bigger threat to the Internet than even China or Russia, neither of which has attempted (so far) to extend globally their highly restrictive views of Internet freedoms. 

And the saddest part is that these kinds of abuses by the EU are hurting EU consumers most of all. Over time, fewer and fewer Internet firms will even want to deal with this kind of EU, and Europeans will find their actual choices more and more limited and government controlled as a result.

That’s a terrible shame for Europe — and for the entire world.

–Lauren–

Network Solutions and Cloudflare Supporting Horrific Racist Site

Today a concerned reader brought to my attention a horrifically racist site — apparently operating for a decade and currently registered via Network Solutions (NSI) — with DNS and other services through Cloudflare — called “n*ggermania.com” (and “n*ggermania.net”) — I have purposely not linked to them here, and you know why I have the asterisks there.

To call the site — complete with a discussion forum — a massive pile of horrific, dangerous, racist garbage of the worst kind would be treating it far too gently.

We already know that Cloudflare reportedly has an ethical sense that makes diseased maggots and cockroaches seem warm and friendly by comparison — Cloudflare apparently touts a content acceptance policy that Dr. Josef Mengele would have likely considered too extreme in its acceptance of monstrously evil content.

But Network Solutions claims to have higher standards (though it wouldn’t take much effort to beat Cloudflare in this regard) and I’m attempting to contact NSI officials now to determine if such racist sites are within their official policy standards. 

Oh, and by the way, guess what happens whenever you call the official Network Solutions listed phone number that they designate for “reporting abuse” — you get a recording (that doesn’t take a message) that says “they’re having difficulties — try again at a later time.”

Why are we not at all surprised?

–Lauren–

Chrome Is Hiding URL Details — and It’s Confusing People Already!

UPDATE (September 17, 2018): Google Backs Off on Unwise URL Hiding Scheme, but Only Temporarily

UPDATE (September 7, 2018): Here’s How to Disable Google Chrome’s Confusing New URL Hiding Scheme

– – –

Here we go again. I’m already getting upset queries from confused Chrome users about this one. 

In Google’s continuing efforts to “dumb down” the Internet, their Chrome browser (beta version, currently) is now hiding what it considers to be “irrelevant” parts of site address URLs.

This means for example that if you enter “vortex.com” and get redirected to “www.vortex.com” as is my policy (and a typical type of policy at a vast number of sites), Chrome will only display “vortex.com” as the current URL, confusing anyone and everyone who might have a need to quickly note the actual full address URL. Also removed are http: and https: prefixes, leaving even fewer indications when sites are secure — exactly the WRONG approach these days when users need more help in these respects, not less!

And of course, if you’re manually writing down a URL based on the shortened version, there’s no guarantee that it will actually work if entered directly back into Chrome without passing through possible site redirect sequences.

But wait! You say you want additional confusion? By golly, you’ve got it! If you click up in the address bar and copy the shortened URL, it will appear that you’re copying the short version, but you’re actually copying the invisible original: the full site URL, including the http: or https: prefix. If you double click up there, Chrome visibly replaces its mangled version with the full version.

I can just imagine how this “feature” pushed through Google — “Hell, our users don’t really need to see all that URL detail stuff, so we’ll just hide it all from them! They’ll never know the difference!”

But the truth is that from the standpoint of everyday users who glance quickly at addresses and greatly benefit from multiple signals to help them establish that they’ve reached the exact and correct sites in a secure manner, the new Chrome URL mangling feature is an abomination, and I’ll bet you dollars to donuts that crooked site operators will find some ways to leverage this change for their own benefits as well.

As I said, this is currently in Chrome Beta, which means it’s likely to “graduate” to Chrome Stable — the one that most people run — sometime fairly soon.

Google is a great company, but their ability to churn out unforced errors like this — that especially disadvantage busy, non-techie users — remains a particularly bizarre aspect of their culture.

–Lauren–

Third Parties Reading Your Gmail? Yeah, If You’ve Asked Them To!

Looks like the “Wall Street Journal” — pretty reliably anti-Google most of the time — is at it again. My inbox is flooded with messages from Google users concerned about the WSJ’s new article (being widely quoted across the Net) “exposing” the fact that third parties may have access to your Gmail.

Ooooh, scary! The horror! Well, actually not!

This one’s basically a nothingburger.

The breathless reporting on this topic is the “revelation” that if you’ve signed up with third-party apps and given them permission to access your Gmail, they — well, you know — have access to your Gmail! 

C’mon boys and girls, this isn’t rocket science. If you hire a secretary to go through your mail and list the important stuff for ya’, they’re going to be reading your mail. The same goes for these third-party apps that provide various value-added Gmail services to notify you about this, that, or the other. They have to read your Gmail to do what you want them to do! If you don’t want them reading your email, don’t sign up for them and don’t give them permission to access your Google account and Gmail! 

Part of the feigned outrage in this saga is the concern that in some cases actual human beings at these third-party firms may have been reading your email rather than only machines. Well golly, if they didn’t explicitly say that humans wouldn’t read them — remember that secretary? — why would one make such an assumption?

In fact, while it’s typical for the vast majority of such third-party systems to be fully automated, it wouldn’t be considered unusual for humans to read some emails for training purposes and/or to deal with exception conditions that the algorithms couldn’t handle. 

Seriously, if you’re going to sign up for third-party services like these — even though Google does carefully vet them — you should familiarize yourself with their Terms of Service if you’re going to be concerned about these kinds of issues.

Personally, I don’t give any third parties access to my Gmail. This simplifies my Gmail life considerably. Google has excellent internal controls on user data, and I fully trust Google to handle my data with care. Q.E.D.

And by the way, if you’ve lost track of third-party systems to which you may have granted access to your Gmail or other aspects of your Google account, there’s a simple way to check (and revoke access as desired) at the Google link:

https://myaccount.google.com/permissions

But really, if you don’t want third parties reading your Gmail, just don’t sign up with those third parties in the first place!

Be seeing you.

–Lauren–

Why Google Needs a “User Advocate at Large”

For many years I’ve been promoting the concept of an “ombudsman” to help act as an interface between Google and its user community. I won’t even bother listing the multitude of my related links here, they’re easy enough to find by — yeah, that’s right — Googling for them.

The idea has been to find a way for users — Google’s customers who are increasingly dependent on the firm’s services for an array of purposes (irrespective of whether or not they are “paying” users) — to have a genuine “seat at the table” when it comes to Google-related issues that affect them.

My ombudsman concepts have consistently hit a figurative brick wall at the Googleplex. A concave outline of the top of my skull is probably nearly visible on the side of Building 43 by now.

Who speaks for Google’s ordinary users? That’s the perennial question as we approach Google’s 20th birthday, almost exactly two months from now.

Google’s communications division speaks mainly to the press. Google developer and design advocates help to make sure that relevant developer-related parties are heard by Google’s engineering teams. 

But beyond these specific scopes, there really aren’t user advocates per se at Google. In fact, a relevant Google search yields entries for Google design and developer advocates, and for user advocates at other firms. But there’s no obvious evidence of dedicated user advocate roles at Google itself.

Words matter. Precision of word choices matters. And in thinking about this recently, I’ve realized that my traditional use of the term “ombudsman” to address these concerns has been less than optimal.

Part of the reason for this is that the concept of “ombudsman” (which can be a male or female role, of course) carries with it a great deal of baggage. I realized this all along and attempted to explain that such roles were subject to definition within any given firm or other organization. 

But ombudsman is a rather formal term and is frequently associated with a person or persons who mainly deal with escalated consumer complaints, and so the term tends to carry an adversarial implication of sorts. The word really does not encompass the broader meanings of advocacy — and other associated communications between firms and users — that I’ve been thinking about over the years — but that I’ve not been adequately describing. I plead guilty.

“User advocacy” seems like a much more accurate term to approach the concepts that I’ve been so long discussing about Google and its everyday users.

Advocacy, not contentiousness. Participation, not confrontation. 

While it would certainly be possible to have user advocates focused on specific Google products and services, the multidisciplinary nature of Google suggests that an “at large” user advocate, or a group of such advocates working to foster user communications across a wide range of Google’s teams, might be more advantageous all around.

Google and Googlers create excellent services and products. But communications with users continues to be Google’s own Achilles’ heel, with many Google-related controversies based much more on misunderstandings than on anything else.

A genuine devotion to user advocacy, fostered by Googlers dedicated to this important task, could be great for Google’s users and for Google itself.

–Lauren–

Google’s New Security Warning Is Terrifying Many Users

I’ve been getting email from people all over the world who are suddenly scared of accessing particular websites that they’ve routinely used. It was quickly obvious what is going on — the first clue was that they were all users running Chrome Beta. 

The problem: Google’s new “Not Secure” warning on sites not using https security is terrifying many people. Users are incorrectly (but understandably) interpreting “Not Secure” to mean “Dangerous and Hacked! Close this page now!”

And this is squarely Google’s fault.

Years ago, I predicted this outcome. 

Though I’ve long promoted the migration to secure Web connections via https, I’ve also repeatedly warned that there are vast numbers of widely referenced sites that provide enormous amounts of important information to users, often from archives and systems that have been running for many, many years — sometimes since before the beginnings of Google 20 years ago.

The vast majority of these sites don’t require login. They don’t request information from users. They are utterly read-only.

While non-encrypted connections to them are theoretically subject to man-in-the-middle attacks, the real world likelihood of their being subjected to such attacks is extraordinarily low.

Another common factor with many of these sites is that they are operating on a shoestring, often on donated resources, without the expertise, money, or time to convert to https. Many of these systems are running very old code, conversion of which to support https would be a major effort — even if someone were available to do the work.

Despite ongoing efforts by “Let’s Encrypt” and others to provide tools to help automate the transition to https, the reality is that it’s still usually a massive amount of work requiring serious expertise, for all but the smallest and simplest of sites — and even that’s for sites running relatively current code.

Let’s be utterly clear about this. “Not Secure” does not mean that a site is actually hacked or dangerous in any way, nor that its data has been tampered with in transit. 

But to many users — not all of whom are well versed on the fine points of Internet security, eh? — that kind of warning displayed in that manner guarantees more unnecessary confusion and angst. This is especially true for the many users who already feel disadvantaged by other aspects of the Web, such as Google’s continuing accessibility failures in terms of readability and other user interface aspects, failures that disproportionately affect these growing classes of users.

With Google about to promote their “Not Secure” warning from Chrome Beta to the standard Chrome Stable that most people run, these problems are about to grow by orders of magnitude.

Through their specific interface design decisions in this regard, Google is imposing an uncompensated cost on many sites with extremely limited resources, a cost that could effectively kill them.

Might doesn’t always make right, and Google needs to rethink the details of this particular approach.

–Lauren–

FedEx to Anyone With Less than Perfect Vision: GO TO HELL!

It appears that shipping giant FedEx has joined the “Google User Interface Club” and introduced a new package tracking user interface designed to tell anyone with less than very young, very excellent vision that they can just go take a giant leap and are not desirable as customers — either to send or receive packages via FedEx.

As you can see in the screenshot at the end of this post (if you can actually see FedEx’s incredibly low contrast fonts that is — trust me, they are actually there!), FedEx has transitioned from their traditional easy to read interface to the new “Google Standard” interface — low contrast fonts that are literally impossible for many people to read, and extremely difficult to read for many others without Superman-grade vision.

I’ve written about these kinds of accessibility failures many times in the past — I suspect that some of them may rise to the level of violations of the Americans with Disabilities Act (ADA). They are designed to look pretty — and technically may even meet some minimum visibility standards if you have a nice, new, perfectly adjusted screen. But it doesn’t take a computer scientist to realize that their real world readability is a sick joke, a kick in the teeth to anyone with aging eyes or otherwise imperfect vision.

The U.S. Postal Service recently moved their tracking interface in this same direction, and while theirs is bad, it’s not quite as much of an abomination as this new FedEx monstrosity.

Google pushed this trend along, with many of their relatively recent interfaces representing design nightmares in terms of readability and usability for users who are apparently not in Google’s “we really care about you” target demographics. Google’s recent refresh of Gmail has been a notable and welcome exception to this trend. I’m hoping that they will continue to move in a positive direction with other upcoming interfaces, though frankly I’m not holding my breath quite yet.

In the meantime, it’s FedEx who deserves an immediate kick in the teeth. Shame on you, FedEx. For shame.

–Lauren–

How the Pentagon Is Trying to Shame Google and Googlers

I hadn’t been planning to say much more right now about Google and “Project Maven” — the Defense Department project in which Google will wisely not be renewing participation when the existing contract ends next year (https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document).

But as usual, the Pentagon just doesn’t know when to leave well enough alone, and I am very angry today to see that a Pentagon-affiliated official is attempting to “death shame” Google and its employees regarding their appropriate decision not to renew with Maven. 

This particularly upsets me because I’ve been to this rodeo before. Over the years I’ve turned down potential work — that I really could have used! — because of its direct relationship to actual battlefield operations. And in various of those cases, there were attempts made to “death shame” me as well — to tell me that if I refused to participate in those aspects of the military-industrial complex, I would be morally complicit for any potential U.S. forces deaths that might theoretically occur due to lack of my supposed expertise.

This is a technique of the military that is as old as civilization. Various technologists reaching back to the days of Mesopotamia — and likely earlier — have been asked (or required, often under threat of death) to provide their services for ongoing military operations.

What makes this so difficult is that typically it’s impossible to clearly separate defensive from offensive projects. As I’ve previously noted, all too often what appears to be defensive work morphs into attack systems, and in the hands of some leaders (especially lying, sociopathic ones) can easily end up extinguishing vast numbers of innocent lives.

This was explicitly acknowledged in the infuriating words earlier today by a former top U.S. Defense Department official — former Deputy Defense Secretary Robert O. Work, who initiated Project Maven:

“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves.”

He also suggested that Google was being hypocritical, because in his view their AI research cooperation with China would benefit China’s military.

His statements are textbook Pentagon doublespeak, and his assertions are not only fundamentally disingenuous, but are blatant attempts at false equivalences.

Particularly galling is his “might wind up with us taking a shot” reference, as if to say that offensive operations were merely a minor footnote in the battle plan. But when you’re dealing with operational battle data, there are no minor footnotes in this context — that data analysis will be used for offensive operations — you can count on it.

To be clear, the righteous defense of the USA is an admirable pursuit. But if one chooses to go all in with the military-industrial complex to that end, it’s at the very least a decision to be made with “eyes wide open” — not with false assumptions that your work will be purely defensive.

And for those of us who refuse to work on military projects that will ultimately be used offensively — keeping in mind the horrific missteps of presidents far less twisted and bizarre than the one currently in the Oval Office — there is absolutely no valid shame associated with that ethical decision.

There’s a critical distinction to be made between basic research and operational battle projects. It’s much the same distinction as my willing work on the DoD ARPANET project decades ago — that led directly to the Internet that you’re using right now — vs. a range of ongoing, specifically battle-oriented projects with which I refused to become associated.

This is also what gives the lie to Robert Work’s attempt to discredit Google’s AI work with China. Open AI research is like Open Source software itself — usable for good or evil, but open to all and light years away from projects primarily with battle intents.

Google and other firms — including their managements and employees — will of course need to find their own paths forward in terms of what sorts of work and contracts they are willing to pursue that may involve the Department of Defense or other military-associated organizations. As we’ve seen with ARPANET, some basic research work funded by the military can indeed yield immense positive benefits to the country and the world.

Personally, I find the concept of a dividing line between such basic research — as opposed to clearly battle-oriented projects — to be a useful guide toward determining which sorts of projects meet my own ethical standards — and which ones do not. As the saying goes, your mileage may vary.

But in any case, we should all utterly ignore Robert Work’s repulsive attempt to shame Googlers and Google itself — and relegate his way of thinking to the dustbin of history where it truly belongs.

–Lauren–

How the Dominant ISPs Are Trying to Scare People Into Opposing California Net Neutrality

Are there any sordid depths to which the crooked, lying, dominant ISPs won’t go to try to terrify people into opposing Net Neutrality in California? Nope, let’s face it, these firms spout outright lies as if they were Donald Trump. Yep, seriously evil, as this robocall voicemail currently in circulation so clearly demonstrates! – https://lauren.vortex.com/crooked-isps-ca-822.mp3

–Lauren–

A Modest Proposal: Identifying Europeans on the Internet for Their Protection

With European politicians and regulators continuing to churn out proposed regulations to protect their citizens from the evils of the Internet, via “The Right To Be Forgotten” — and the currently under consideration Article 11 “link tax” and Article 13 content filtering censorship proposals — it is becoming more important than ever that Internet sites around the world be able to identify European users so that they may be afforded “appropriate” treatment at those sites, including blocking from all services as necessary.

Already, some Europeans are suggesting that they will attempt to evade the restrictions that have been implemented or proposed by their beneficent and magnificent leaders. The world must band together to prevent European users from pursuing such a tragic course of action.

Obviously, all VPN usage by Europeans that attempts to obscure the European geographic locations of their source IP addresses must be banned. In fact, it would be even safer for Europeans if all usage of VPNs by Europeans were prohibited by their governments, except under extraordinary circumstances requiring government licenses and monitoring for inappropriate usage.

All web browsers used by Europeans should be required to send a special “protected European resident” flag to server sites, so that those sites may determine the appropriate blocking or other disposition of those browser requests. Use of unapproved browsers or tampering with browsers to remove this protection flag would of course be a criminal act.

We must also solve the problem of Europeans traveling outside of Europe, where they might be tempted to use public Internet access systems that do not meet the high standards of protection required by European regulations.

One possible solution to this dilemma would be to require the permanent implantation of RFID identification capsules in all Europeans who travel beyond the protected confines of Europe. Don’t worry — these need not individually identify any given person, they need only identify them as European. Scanning equipment at public computers around the planet could detect these implants and automatically apply appropriate European protection rules. Europeans would be free to travel the world with no fears of accidentally using systems that did not apply their government’s protective regulations!

This modest proposal of course only scratches the surface of the sorts of solutions that will be needed to help assure that EU citizens fully and completely abide by their governments’ benevolent actions and requirements.

But the EU and its residents can feel confident that the rest of the world’s Internet will do its part to help keep Europeans safe, secure, and law-abiding at all times!

–Lauren–

Google’s New AI Principles Are a Model for the World

In the wake of Google’s announcement that they will not be renewing their controversial “Project Maven” military AI contract when it expires next year (“Google — and the Defense Department’s Disturbing ‘Maven’ A.I. Project Presentation Document” – https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document), Google has now published a post describing their policy positions regarding AI at Google going forward: “Artificial Intelligence at Google: Our Principles” (https://www.blog.google/topics/ai/ai-principles).

Since I was on balance critical of Google’s participation in Project Maven, but am very supportive of AI overall (“How AI Could Save Us All” – https://lauren.vortex.com/2018/05/01/how-ai-could-save-us-all), I’ve received a bunch of queries from readers asking how I feel about Google’s newly announced AI principles statement.

“Excellent” is my single word summary, especially in terms of the principles being balanced — and above all — realistic.

AI will be a critical tool going forward, both in terms of humanity and the global ecosystem itself. And like any tool — reaching all the way back to a chunk of rock on the ground in a prehistoric cave — AI can be used for good purposes, evil purposes, and in a range of “gray area” scenarios that are more difficult to cleanly categorize one way or the other.

It’s this last set of concerns, especially AI applications with multiple uses, that I’m particularly glad to see Google addressing specifically in their principles post.

For those of us who aren’t psychopaths or sociopaths, most fundamental characteristics of good and evil are usually fairly obvious. But as one grows older, it becomes apparent that the real world is not typically made up of black and white situations where one or another set of these characteristics exist in isolation — much more often we’re dealing with a complicated kaleidoscope of interrelating issues.

So — to address one point that I’ve been most asked about over the last couple of days regarding Google’s AI statement — it is entirely appropriate that Google explicitly notes that they will not be abandoning all aspects of government and military AI work, so long as that work is not likely to cause overall harm. 

In a “perfect” world we might not need the military — hell, we might not even need governments. But this is not a perfect world, and it’s one thing to use AI as a means to kill ever more people more efficiently, and something else entirely to use AI defensively to help protect against the genuine evils that still pervade this planet, as Google says it will do.

AI is still in its relative infancy, and attempts to accurately predict its development (beyond the very short term) are likely doomed to failure. AI principles such as Google’s will always by necessity be works in progress, and Google in fact explicitly acknowledges this fact.

But ethical firms and ethical governments around the world could today do themselves, their employees, and their citizens proud by accepting and living by AI principles such as those that Google has now announced.

–Lauren–

Why We May Have to Cut Europe Off from the Internet

UPDATE (March 28, 2019): Early this week, the EU passed this horrific legislation into law. How the individual member countries of the EU will implement it is anyone’s guess — utter chaos is certain, and drastic measures by the rest of the world to protect their own Internet services and users from such EU madness will indeed likely be necessary.

UPDATE (July 5, 2018): In a rare move, the EU Parliament voted today to block this current copyright legislation, which opens it to amendments by the entire membership of Parliament, leading to a new vote this coming September. So the war against this horrific legislation is by no means over, but this is an important battle won for now.

– – –

It’s no joke. It’s not hyperbole. If the European Union continues its current course, the rest of the world may well have to consider how to effectively “cut off” Europe from the rest of the Internet — to create an “Island Europe” in an Internet communications context. 

For those of us involved with the Net since its early origins, the specter of network fragmentation has long been an outcome that we’ve sorely hoped to avoid. But continuing EU actions could create an environment where mechanisms to tightly limit Europe’s interactions with the rest of the global Internet may be necessary — not imposed with pleasure, not with vindictiveness, but for the protection of free speech around the rest of the planet.

The EU will later this month be voting on a nightmarish copyright control scheme (“Article 13”) that would impose requirements for real-time “copyright filtering” of virtually all content uploaded to major and many minor Internet sites, with no protections against trolling, and the certainty of inappropriately blocking vast quantities of public domain and other materials, with no real protections against errors and no effective avenues for appeals. Please see:

“On June 20, an EU committee will vote on an apocalyptically stupid, internet-destroying copyright proposal that’ll censor everything from Tinder profiles to Wikipedia” (https://boingboing.net/2018/06/07/thanks-axel-voss.html).

Even if this specific horrific proposal is voted down, it’s important to review how we came to this juncture, as the EU has increasingly accelerated its program to become the Internet’s global censorship czar, in ways that even countries like China and Russia haven’t attempted to date.

As far back as 2012 and earlier, in “The ‘Right to Be Forgotten’: A Threat We Dare Not Forget” (https://lauren.vortex.com/archive/000938.html), I warned of the insidious nature of content censorship schemes flowing forth from Europe, and I’ve consistently warned that — like the proverbial camel’s nose under the tent — Europe would never be satisfied with any concessions offered by Internet firms. 

Time has borne out my predictions. In ensuing years, the EU has expanded its demands until now it considers itself in key respects to be the global arbiter of what should or should not be seen by Internet users around the world. 

As with civilization’s other information control tyrants, the EU’s taste of censorship powers has inevitably led to utter censorship gluttony. The sense that “we know best what those stupid little people should be allowed to see” is as old as human history, long predating modern communications systems.

European citizens are of course free to elect whatever sorts of governments that they choose. If that choice is for information control tyrants whose pleasure is to victimize their own citizens, so be it.

But if Europe continues to insist that its tyranny of censorship and information control must be honored by the rest of the world, then the rest of the world will be reluctantly forced to treat Europe as an Internet pariah, and use all possible technical means to isolate Europe in manners that best protect everyone else’s freedom of Internet speech. 

–Lauren– 

When Google Blames Users for Privacy Problems
One of my favorite user interface (UI) design adages is pretty much simplicity itself:

When you blame the users, you’ve already lost the argument.

I’m reminded of this by Google’s public reactions to a recent study revealing that almost a third of nearly 10,000 sampled Google G Suite commercial customers were unwittingly exposing sensitive corporate and/or customer data to the public Internet without access protections: “Widespread Google Groups Misconfiguration Exposes Sensitive Information” (https://www.kennasecurity.com/widespread-google-groups-misconfiguration-exposes-sensitive-information/).

Without getting into the technical details here, the underlying issues relate to the multiplicity of settings that control public access to Google Groups and their associated mailing lists. While Google defaults these to their most secure settings, the sheer quantity of misconfigured, potentially information-leaking sites represents an empirical proof that a very significant number of G Suite users and administrators are not adequately understanding these settings, with resulting privacy-negative impacts.

Google’s response — in essence — has been “RTFM” (Read The F‑‑‑‑‑‑ Manual): The settings are there, if you’re not using them correctly, that’s your problem, not ours!

And while Google has posted some additional related info (e.g. on their G Suite Updates Blog), those explanations mostly serve to emphasize the relative complexity of the interface, and no changes that I’m aware of have been made to the interface in response to these concerns.

The situation is a bit reminiscent of auto manufacturers who resisted redesigning key aspects of their vehicles, even as it became ever more obvious that significant numbers of drivers were having accidents due to existing design elements.

As far as I’m concerned, the scope of the reported G Suite privacy leakage problems indicates nothing less than a privacy design failure in this instance.

Rather than trying to make excuses for an existing user interface that is clearly failing significant numbers of customers (and with G Suite, we’re talking about paying customers!), Google needs to take an immediate and hard look at the specific design aspects that are enabling these misconfiguration-based confidential information exposures.

A practical fix might not even involve major changes to the UI, and might be adequately served by mechanisms as simple as more in-your-face “pop-up” warnings to users and administrators, appearing in conjunction with additional confirmation dialogues when associated privacy-sensitive settings are being altered.
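The kind of confirmation gate suggested above can be illustrated with a minimal sketch. All names here are hypothetical and purely illustrative — this is not Google’s actual settings API — but it shows the basic pattern: when a settings change would widen public exposure, require a separately worded, explicit confirmation before applying it.

```python
# Hypothetical sketch of a confirmation gate for privacy-sensitive
# settings changes. Setting names and values are invented for
# illustration; they are not Google's actual Groups API.

# Values (per setting) that widen exposure and so demand confirmation.
PRIVACY_SENSITIVE = {"who_can_view": {"ANYONE_ON_INTERNET"}}

def apply_setting(settings, key, value, confirm):
    """Return updated settings; privacy-widening values need confirmation.

    `confirm` is a callback that would display an in-your-face warning
    and return True only if the administrator explicitly accepts the
    resulting public exposure.
    """
    risky = value in PRIVACY_SENSITIVE.get(key, set())
    if risky and not confirm(
        f"Setting '{key}' to '{value}' exposes content to the public "
        "Internet. Are you sure?"
    ):
        return settings  # change refused; the safer value is kept
    new_settings = dict(settings)
    new_settings[key] = value
    return new_settings
```

For example, with a `confirm` callback that returns False, an attempt to set `who_can_view` to `ANYONE_ON_INTERNET` is simply refused, while ordinary changes pass through untouched — exactly the low-friction, high-visibility safeguard described above.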

But clearly, explanatory blog posts aren’t going to cut the mustard for these kinds of problems, and I urge Google’s world-class privacy team to effectively address this situation as soon as possible.

–Lauren–

Hate Speech — and Google’s Public Relations “Death Wish”

I’ve been writing publicly for a long time. Sometimes it feels like my earliest articles and posts were composed in runic alphabets inscribed on stone tablets. I’ve always had a rule that I’ve tried to abide by: “Never write when you’re angry!”

Today I’ll violate that self-imposed prohibition. I’m in a vile mood, and I’m here at the keyboard anyway.

Those of you who have followed my writings (and have still somehow managed to maintain a semblance of sanity) know that I frequently deal with Google-related issues. I started doing that shortly after Google first appeared on the Net, and here we are now almost 20 years later. 

I was pretty tough on Google back then. I was unhappy with their privacy practices at the time and some other related issues, and I was not reluctant to present my feelings about such matters. Similarly, as Google evolved over the years into a world-class example of privacy and security best practices and has done so much other good work, I’ve enthusiastically pointed out the efforts of the Google teams involved. And when I feel that Google has screwed up regarding something these days, I point that out directly as well.

My policy of always trying to honestly write about issues using a “call ’em as I see ’em” philosophy has left a lot of partisans unhappy on both sides of the political spectrum, who view any variance from “the party line” on any given matter to be both dangerous and intolerable.

This has been a reality to one extent or another since the earliest ARPANET days when I first began publicly posting, but has in recent years blown up into an orders of magnitude more vicious state of affairs.

For example, late last week I spoke about Google on a national radio venue where I very frequently guest, and pushed back against the false claims of some national GOP politicians, who were again parroting the Big Lie that Google purposely suppressed and undermined conservative viewpoints (the trigger this time was a search results Knowledge Panel error due to a defaced Wikipedia source page).

I’m usually happy to do this — I get paid nuthin’ for these appearances — I value the opportunity to speak some truth before these very large audiences that all too often are trapped in propagandistic, anti-technology filter bubbles where outright lies about firms like Google are common currency.

It’s gratifying to so frequently get emails the next day that say variations of “Thanks for that — nobody ever explained it to me that way before!”

But over time, and especially since the 2016 elections, the worst aspects of our toxic political environment have been contaminating more and more of these discussions, to the extent that my on-air comments supporting Google last week — perhaps because Donald Trump, Jr. was involved — have triggered a hate speech campaign that is rather sickening to behold. 

This has happened before — and I have a pretty thick skin.

Yet this time it feels different. I find myself wondering why the blazes I keep sticking my neck out this way. This isn’t my job. I don’t get paid for anything I write or say these days — I’m long term unemployed and try to get by with whatever sporadic and limited consulting I can dredge up from time to time.

More to the point, one wonders — especially with so much at stake — why Google isn’t taking a more proactive stance to protect the company, their employees, and the global community that depends on them — from the ongoing torrent of politically-motivated lies and attacks that are clearly designed to set the stage for broad censorship and government micromanagement of data for political purposes! Why doesn’t Google have employees out there doing what I’m doing? Why does Google continue to create a vacuum through their silence, a vacuum that haters fill with outright lies that most onlookers have no simple way to differentiate from the truth?

Of course we already know part of the answer. Google is famously terrified of the so-called “Streisand Effect” — the fear that even rebutting lies will lend credence and more attention to them.

20 or even arguably 10 years ago, this might on balance have been a reasonable philosophy for Google to practice as a cautious firm.

But today, I’m increasingly convinced that Google’s not fighting back against these lies in every possible legitimate way amounts to a kind of corporate “death wish” that is ultimately putting everything good that Google has built and stands for at terrible risk.

And if Google loses this war, we all lose.

Governments, politicians, and other entities (including not only the alt-right but also many elements of more conventional left and right-wing politics as well), are using Google’s reticence for battle as a green light for the acceleration of anti-Google efforts to push intolerant information-control agendas on national, transnational, and global scales.

If such forces succeed in decimating Google in the manners that are being postulated, the results could be catastrophic for free speech around the planet.

Knowing Googlers as I do, it seems certain that most of them see these dangers very clearly from the inside — yet the “death wish” in terms of how Google actually communicates with the outside world seems more encompassing than ever.

This makes me very sad — and as I said above very angry as well.

The deep, dank pit looms before us, and the razor sharp blade of the pendulum descends closer with every tick of the clock. Either we deal with these issues seriously and effectively now, or very soon we’ll find that our wonderful hoped-for tomorrows have turned into nothing but a putrid, rotting pile of wasted yesterdays.

And that’s the truth.

–Lauren–

Google — and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks Google for doing the right thing for Google, Googlers, and the community at large.

– – –

A few months ago, in “The Ethics of Google and the Pentagon Drones” –  https://lauren.vortex.com/2018/03/06/the-ethics-of-google-and-the-pentagon-drones – I discussed some of the complicated nuances that can come into play when firms like Google engage with military contracts that are ostensibly for defensive purposes, but potentially could lead to offensive use of artificial intelligence technologies as well. This is not a simple matter. I was myself involved with Defense Department projects many years ago (including the Internet’s ancestor ARPANET itself), as I explained in that post.

The focal point for concerns inside Google in this regard (triggering significant internal protests and some reported resignations) revolves around the U.S. Department of Defense (DoD) “Project Maven” — aimed at using A.I. technology for drone image analysis, among other possibilities.

Now a 27-page DoD presentation document regarding Maven is in circulation, and frankly it is discomforting and disturbing to view. It is officially titled:

“Disruption in UAS: The Algorithmic Warfare Cross-Functional Team (Project Maven)”

And it sends a chill down my spine precisely because it seems to treat the topic rather matter-of-factly, almost lightheartedly.

There are photos of happy surfers. The project patch features smiling, waving cartoon robots who would fit right into an old episode of “The Jetsons” — with a Latin slogan that roughly translates to “Our job is to help.” Obviously DoD has learned a lesson from that old NSA mission patch that showed an enormous octopus with its tentacles draped around the Earth.

You can see the entire document here:

https://www.vortex.com/dod-maven

I stand by my analysis in my post referenced above regarding the complicated dynamics of such projects and their interplay with technology firms such as Google.

However, after viewing this entire Project Maven document, I have a gut feeling that long-term participation in this project will not turn out well for Google overall.

To be sure, there will likely be financial gains related to resources provided to DoD for this project — but at the cost of how much good will inside the company among employees, and in terms of potentially negative impacts on the firm’s public image overall?

Certainly the argument could be made that it’s better that a firm with an excellent ethical track record like Google participate in such projects, rather than only traditional defense contractors — some of whom have a long history of profiting from wars with little or no regard for ethical considerations.

But over the years I’ve seen good guys get trapped by that kind of logic, and once deeply immersed in the battlefield military-industrial complex it can be difficult to ever extricate yourself, irrespective of good intentions.

Thankfully from my standpoint, this isn’t a decision that I have to make. But while I don’t claim to have a functional crystal ball, I’ve been around long enough that my gut impressions regarding situations like this have a pretty good track record.

I sincerely hope that Google can successfully find its way through this potential minefield. For a great company like Google with so many great employees, it would be a tragedy indeed if issues like those related to Project Maven did serious damage to Google and to relationships with Googlers going forward.

–Lauren–

Calls for a Google Ombudsman — from Nine Years Ago!

Back in 2009, “Techdirt” posted “Does Google Need An Ombudsman?” – https://www.techdirt.com/articles/20090302/0125093942.shtml — excerpted below. Here we are nine years later, and that need is demonstrably far greater now! “Techdirt” back then was referring to some of my earliest of what would ultimately be many posts about this topic.

– – – – – – – –

Lauren Weinstein has an interesting discussion going on his blog, noting a series of recent incidents where Google has done a spectacularly poor job in communicating with the public — something I’ve been critical of the company about, as well. The company can be notoriously secretive at times, even when being a lot more transparent would help. Even worse, the company is quite difficult to contact on many issues, unless you happen to know people there already. Its response times, if you go through the “official channels,” are absolutely ridiculous (if they respond at all). Weinstein’s suggestion, then, is that Google should set up a small team to play an ombudsman role — basically acting as the public’s “representative” within the company …
         —  Mike Masnick – “Techdirt” – March 3, 2009

 – – – – – – – –

–Lauren–

I Join EFF in Opposing the California SB 1001 “Bots Disclosure” Legislation

The Electronic Frontier Foundation recently announced their opposition to California Senate Bill SB 1001, which mandates explicit “I am not a human” disclosure notices relating to all manner of automated reply, response, and other computer-based systems.

While it’s certainly the case that considerable controversy was triggered by Google’s demonstration earlier this month of their AI-based “Duplex” phone calling system ( “Teachable Moment: How Outrage Over Google’s AI ‘Duplex’ Could Have Been Avoided” – https://lauren.vortex.com/2018/05/11/teachable-moment-how-outrage-over-googles-ai-duplex-could-have-been-avoided), Google reacted quickly and appropriately by announcing that production versions of the system would identify themselves to called parties.

Voluntary approaches like this are almost always preferable to legislative “fixes” — the latter all too often attempt to swat flies using nuclear bombs, with all manner of negative collateral damage.

Such is the case with the California Senate’s SB 1001, which would impose distracting, confusing, and disruptive labeling requirements on a vast range of online systems of all sorts, the overwhelming majority of which are obviously not pretending to be human beings in misleading ways.

Even worse, the legislation states that these systems are assumed to purposely be attempting to mislead unless they explicitly identify themselves as being non-human. This is a ludicrous assumption — the legislation would be at least a bit more palatable if it were restricted to situations where a genuine intent to mislead was present, such as automated telemarketing phone spam.

The labeling requirements imposed by SB 1001 would make the obnoxious scourge of “We use cookies! Click here if you understand!” banners (the result of misguided EU regulatory actions) look like a walk in the park by comparison.

While automated communications systems will not be immune to misuse, SB 1001 will not stop such abuse and will cause massive confusion for both site operators and users. It is not only premature, it is a textbook example of overly broad and badly written legislation that was not adequately thought through.

SB 1001 should not become law.

–Lauren–

Android In-App Payments Abuse Nightmares: Why Google Is Complicit

UPDATE (May 26, 2018): To be clear about this, I would so much prefer that Google had an Ombudsman, Ombudsman team, or similar set of roles internally, to deal with situations as described in this updated post. While I’m glad to try to help when I can, and I greatly thank Google for their quick response in this case and the issuing of a full refund to this Android user, it shouldn’t require public actions from someone on the outside of Google like me to drive the appropriate resolution of such cases.

UPDATE (May 25, 2018): I’ve just been informed that a full refund has now been issued in the case I discussed in my post below from yesterday. I hope that the general class of issues described therein, especially the presence of expensive in-app “virtual” purchases targeted at children — and the specific operations of Android parental control mechanisms — will still be addressed going forward. In the meantime, my great thanks to Google for quickly doing the right thing in this case of a (now very happy) Android user and her child. 

– – –

Should an Android app aimed at children include a $133 in-app purchase for worthless virtual merchandise? If you’re the kind of crook who runs fake real estate “universities” and stiffs your workers via multiple bankruptcies, you’d probably see nothing wrong with this. But most ethical people might wonder why Google would permit such an abomination. Is the fact that they take a substantial cut of each such transaction clouding their usually excellent ethical sensibilities in this case? Or is Google somehow just unaware, underestimating, or de-emphasizing the scope of these problems?

Complaints regarding in-app Android purchases arrive at my inbox with notable regularity. But one that arrived recently really grabbed my attention. Rather than attempt to summarize it, I’m including extended portions of it below (anonymized and with the permission of the authors).

Beyond the details of how parental locks and Google Play Store payment systems are designed and the ways in which they could be greatly improved, a much more fundamental problem is at the core of these issues. 

I have long considered in-app purchase models to be open to enormous abuse. Where they are used to “unlock” actual capabilities in non-gaming applications, they can play a useful role. But their use for the purchase of worthless “virtual” goods or points in games, especially when total purchases over the lifetimes of the games can add up to more than a few dollars, is difficult to justify. It is impossible to justify in games that are targeted at children.

Though apparently entirely legal, it is unconscionable that Google permits these sorts of apps to exploit children and their parents, and then refuses to offer full refunds to parents who have been victimized as a result, particularly when those parents have attempted to diligently use the payment control mechanisms that are currently available.

Not Googley at all. Shame on you, Google.

–Lauren–

 – – – – – –

Hi Lauren,

Thanks so much for considering this. is:  @gmail.com  – she’s fine with you sharing that with Google.

If it can happen to someone of her education what hope do the rest of us have… let alone a 4-year old who can’t read. She says also it’s fine to share her story, fully anonymised … It’s pretty horrible and I suspect also pretty widespread too….

On 05/23 09:16, wrote:

hi Lauren,

I’m sure you’ve heard lots of these kinda stories, so your indulgence is requested. Friend of mine – who holds a doctorate in business, no less – got a bill for around GBP 650 after her 4-year old daughter was able to buy in-game despite parental locks. Or, that’s what my friend thought: Google said that updating the unit could wipe out those locks. And no refund is thus forthcoming. She has contacted the app developers too but obviously they’re happy enough with her money so nothing doing there.

Two things:

(1) Why does an update clear locks? This is surely bad practice?

(2) How the hell can anyone justify a GBP 100 in-app purchase in a game directed at toddlers? This one can’t read yet and as we know, kids are experts on using touchscreen tech before any language skills develop.

P.S. any advice welcome – thanks loads

– – –

My 4 year old loves watching . On (Freeview) one of her favourite cartoons is . She loves this so much that she asked if she could download on my mobile phone to play. I obliged and made the usual checks; no ads, and parental locks engaged. She then asked to download another similar game; . She absolutely loves this game, and for a 4 year old, she’s got pretty good at it… certainly better than me and her big sis.

Again, I made sure parental lock and no ads were ticked within the app…. Last Friday I received a telephone call from the Fraud dept. at  Credit Card, they suspected fraudulent activity on my card – in fact one transaction of GBP 99.99 and another of GBP 1.99 had gone on my card that morning.. and I hadn’t even left my house. I was obviously shocked and concerned – they said the payee was Google Play.

They asked if I had an android phone and whether I let me kids play on the phone. I said yes, but all games are ‘locked down’ so to speak. She asked me to go into my phone to check… to my sheer horror, I saw a long list of  ‘in-app’ purchases made by my 4 year old within the space (mostly) of three weeks. Now I usually check my credit card spends at the end of every month, and I hadn’t got around to checking for this month. I quickly toted up the separate transactions and figured that she had burned GBP 498.88  buying ‘.. GBP 99.99 and ‘ 1.99/ 29.99’ within the game.

I was totally in shock and rightly upset. Of course this wasn’t her fault  – she can’t read.. but how can an app associated with a children’s cartoon think its OK to embed in app purchases within their game … Google have informed me that updating my android can wipe out all the parental locks etc, and I have to check/ re-engage all locks etc after EVERY software upgrade. I contacted Google, and they have disappointingly refunded only GBP 70.00 – stating that its outside their T&Cs and that I need to request a refund from ; the App developers.

I’ve emailed , they haven’t bothered to respond (I’ve waited 72 hours and counting now) . I’ve also contacted Credit Card, and they’ve said that they won’t help me… Surely this is ‘Soft Fraud’ and this is unethical and wrong… so parents please beware. This has and still is really upsetting for both me and my daughter. Please share and just be hyper careful on your phones. Here is most of her spending spree!! 

– – – – – –