Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

– – –

It’s already happening. Within a day of word that Google is reportedly planning to provide Chinese government-dictated censored search results and censored news aggregation inside China, the Google Haters are already salivating at the new ammunition that this could provide Congress to pillory Google and similarly castrate them around the world — for background, please see: “Censored Google Search for China Would Be Both Evil and Dangerous!”

While Google has not confirmed these reports, the mere prospect of their being correct has already brought the righteous condemnation of human rights advocates and organizations around the globe.

And already, in the discussion forums that I monitor where the Google Haters congregate, I’m seeing language like “Godsend!” – “Miracle!” — “We couldn’t have hoped for anything more!”

It’s obvious why there’s such rejoicing in those evil quarters. By willingly allying themselves with the censorship regimes of the Chinese government that are used to repress and torment the Chinese people, Google would put itself in the position of being perceived as the willing pawn of those repressive Chinese Internet policies that have been growing vastly more intense, fanatical, and encompassing over recent years, especially since the rise of “president for life” Xi Jinping.

Already embroiled in antitrust and content management/censorship controversies here in the U.S., the European Union, and elsewhere, the unforced error of “getting in bed” with the totalitarian Chinese government will provide Google’s political and other enemies a whole new line of attack to question Google’s motives and ethical pronouncements. You can already visualize the Google-hating congressmen saying, “Whose side are you on, Google? Why are you helping to support a Chinese government that massively suppresses its own people and continues to commit hacking attacks against us?” We’ll be hearing the word “hypocritical” numerous times during numerous hearings, you can be sure. 

We can pretty well predict Google’s responses, likely to be much the same as they made back in 2006 during their original attempt at “playing nice” with the Chinese censors, an effort Google abandoned in 2010, after escalating demands from China and escalating Chinese hacking attacks.

Google will assert that providing some services — even censored in deeply repressive ways — is better than nothing. They’ll suggest that the censored services that would be provided would help the Chinese citizenry, despite the fact that the very results being censored, while perhaps relatively small in terms of overall percentages, would likely be the very search results that the Chinese people most need to see to help protect themselves from their dictatorial leaders’ information control and massive human rights abuses. Google will note that they already censor some results in countries like France and Germany (for example, there are German laws relating to Nazi-oriented sites).

But narrow removal of search results in functional democracies is one thing. The much wider categories of censorship demanded by the Chinese government — a single-party dictatorship that operates vast secret prison and execution networks — is something else entirely. It’s like comparing a pimple with Mt. Everest.

And that’s before the Chinese start escalating their demands. More items to censor. Access to users’ identity and other private data. Localization of Google servers on Chinese soil for immediate access by authorities.

Worst of all, if Google is willing to bend over and kowtow to the Chinese dictators in these ways, every other country in the world with politicians unhappy with Google for one reason or another will use this as an example of why Google should provide similar governmental censorship services and user data access to their own regulators and politicians. After all, if you’re willing to do this for one of the world’s most oppressive regimes, why not for every country, everywhere?

As someone with enormous respect for Google and Googlers, I can’t view these reports regarding Google and China — if accurate — as anything short of disastrous. Disastrous for Google. Disastrous for their users. Disastrous for the global community of ordinary users at large, who depend on Google’s search honesty and corporate ethics as foundations of daily life.

Joining with China in providing Chinese government-censored search and news results would provide haters and other evil forces around the planet the very ammunition they’ve been waiting for toward crushing Google, towards putting Google under micromanaged government control, toward ultimately converting Google into an oppressive government propaganda machine.

It could frankly turn out much worse for the world than if Google had never been created at all, 20 years ago.

I’m still hoping that these reports are inaccurate in key respects or in their totality. But even if they are correct, Google still has time to choose not to go down this dark path, and I would strongly urge them not to move forward with any plans to participate in China’s repressive and dangerous totalitarian censorship regime.


Prediction: Unless Security Keys Are Free, Most Users Won’t Use Them

Various major Internet firms are currently engaged in a campaign to encourage the use of U2F/FIDO security keys (USB, NFC, and now even Bluetooth) to encourage their users to avoid use of other much more vulnerable forms of 2sv (2-factor) login authentication, especially the most common and illicitly exploitable form, SMS text messaging. In fact, Google has just introduced their own “Titan” security keys to further these efforts.

Without getting into technical details, let’s just say that these kinds of security keys essentially eliminate the vulnerabilities of other 2sv mechanisms, and given that most of these keys can support multiple services on a single physical key, you might assume that users would be snapping them up like candy.
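To see why these keys are so much stronger than SMS codes or one-time passwords, it helps to know the core property of the U2F/FIDO design: the key’s response is cryptographically bound to the web origin that the browser reports, so a response harvested by a phishing site is useless at the real site. Here’s a minimal conceptual sketch of that property — not the real protocol (which uses per-site asymmetric key pairs and attestation); a simple HMAC stands in for the key’s signature, and the hostnames are made up:

```python
import hashlib
import hmac
import os

def key_sign(secret: bytes, origin: str, challenge: bytes) -> bytes:
    # The hardware key binds its response to the origin the browser reports.
    return hmac.new(secret, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(secret: bytes, expected_origin: str, challenge: bytes,
                  response: bytes) -> bool:
    # The server only accepts responses computed over ITS origin.
    expected = hmac.new(secret, expected_origin.encode() + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = os.urandom(32)     # per-site credential established at enrollment
challenge = os.urandom(16)  # fresh challenge issued by the genuine site

# Normal login: the browser reports the genuine origin, so verification succeeds.
good = key_sign(secret, "https://accounts.example.com", challenge)
assert server_verify(secret, "https://accounts.example.com", challenge, good)

# Phishing: the browser reports the attacker's look-alike origin, so even a
# relayed response fails -- unlike an SMS code, which "works" anywhere it's typed.
phished = key_sign(secret, "https://accounts.examp1e.com", challenge)
assert not server_verify(secret, "https://accounts.example.com", challenge, phished)
```

The user never has to understand any of this, which is exactly the point — there’s no code to mistype into the wrong site.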

You’d be wrong in that assumption.

I’ve spent years urging ordinary users (e.g., of Google services) to use 2sv of any kind. It’s a very, very tough slog, as I noted in:

Google Users Who Want to Use 2-Factor Protections — But Don’t Understand How:

But even beyond that category of users, there’s a far larger group of users who simply don’t see the point of “hassling” with 2sv at all, resulting in what Google itself has publicly noted is a depressingly low percentage of users enabling 2sv protections.

Beyond logistical issues regarding 2sv that confuse many potential users, there’s a fundamental aspect of human nature involved.

Most users simply don’t believe that THEY are going to be hacked (at least, that’s their position until it actually happens to them and they come calling too late with desperate pleas for assistance).

Frankly, I don’t know of any “magic wand” solution for this dilemma. If you try to require 2sv, you’ll likely lose significant numbers of users who just can’t understand it or give up trying to make it work — bad for you and bad for them. They’re mostly not techies — they’re busy people who depend on your services, who simply do not see any reason why they should be jumping through what they perceive to be more unnecessary hoops — and this means that WE have not explained this all adequately and that OUR systems are not serving them well.

If you blame the users, you’ve already lost the argument.

Which brings us back to those security keys. Given how difficult it is to get most users to enable 2sv at all, how much harder will it be (even if the overall result is simpler and far more secure) to get users to go the security key route when they have to pay real money for the keys?

For many persons, the $20 or so typical for these keys is significant money indeed, especially when they don’t see the value of really having them in the first place (remember, they don’t expect to ever be hacked).

I strongly suspect that beyond “in the know” business/enterprise users, achieving major uptake of security keys among ordinary user populations will require that those keys be provided for free in some manner. Pricing them down to only a few dollars would help, but my gut feeling is that vast numbers of users wouldn’t pay for them at any price, perhaps often because they don’t want to set up payment methods in the first place.

That problem may be significantly reduced where users are already used to paying and have payment methods already in place — e.g., for the Google Play Store.

But even there, $20 — even $10 — is likely to be a very tough sell for a piece of hardware that most users simply don’t really believe that they need. And if they feel that this purchase is being “pushed” at them as a hard sell, the likely result will be resentment and all that follows from that.

On the other hand, if security keys were free, methodologies such as:

How to “Bribe” Our Way to Better Account Security:

might be combined with those free keys to dramatically increase the use of high quality 2sv by all manner of users — including techies and non-techies — which of course should be our ultimate goal in these security contexts.

Who knows? It just might work!

Be seeing you.


Censored Google Search for China Would Be Both Evil and Dangerous!

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

UPDATE (August 3, 2018): Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 2, 2018): New reports claim that Google is also now working on a news app for China, that would similarly be designed to enable censoring by Chinese authorities. Google has reportedly replied to queries about this with the same non-denial generic statement noted below.

– – –

A report is circulating widely today — apparently based on documents leaked from Google — suggesting that Google is secretly working on a search engine interface (probably initially an Android app) for China that would — by design — be heavily censored by the totalitarian Chinese government. Want to look at a Wikipedia page? Forget it! Search for human rights? No go, and the police are already at your door to drag you off to a secret “re-education” center.

Google has so far not denied the reports, and today has discouragingly only issued generic “we don’t comment on speculation regarding future plans” statements. Ironically, this is all occurring at the same time that Google has been increasing its efforts to promote honest journalism, and to fight against fake news that can sometimes pollute search results.

There’s no way to say this gently or diplomatically: Any move by Google to provide government censored search services to China would not only be evil, but also incredibly dangerous to the entire planet.

The Chinese are wonderful people, but their government is an absolute dictatorship — now with a likely president for life — whose abuse of its own citizens and hacking attempts against the rest of the world have been increasing over recent years. Not getting better, getting far, far worse.

Information control and censorship is at the heart of China’s human rights abuses that include a vast network of secret prisons and undocumented mass executions. Say the wrong thing. Try to look at the wrong webpage. You can just vanish, never to be seen again.

The key to how the Chinese tyrants control their population is the government’s incredibly massive Internet censorship regime, which carefully tailors the information that the Chinese population can see, creating a false view of the world among its citizens — incredibly dangerous for a country that has a vast military and expansionist goals.

Anybody — any firm — that voluntarily participates in the Chinese censorship regime becomes an equal partner in the Chinese government’s evil, regardless of any attempts to provide benign justifications or explanations.

If this all sounds a bit familiar, it’s because we’ve been over this road with Google before. Back in 2006, I happened to be giving a talk at Google’s L.A. offices the same day that Google announced its original partnership with the Chinese government to provide a censored version of Google. My relevant comments about that are here:

Later related discussion that same year followed, including:

“Google, China, and Ethics” –

And then in 2010 when Google wisely terminated their participation in the oppressive Chinese censorship regime:

Bulletin: Google Will No Longer Censor Chinese Search Results — May End China Operations –

In the ensuing eight years, much has changed with China. They’re even more of a technological powerhouse now, and they’re even more dictatorial and censorship-driven than before. 

All the fears about censored Google search for China that we had back in 2006, including a vast slippery slope of additional dangers to innocent persons both inside and outside of China, are still in force — only now magnified by orders of magnitude.

It obviously must be painful for Google to sit by and watch their less ethical competitors cozy up to Chinese human rights abusing leaders, as those firms suckle at the teats of the Chinese government and its money. 

And in fact, Google has already made some recent inroads with China — with a few harmless apps and shared AI research — all efforts that I generally support in the name of global progress.

But search is different. Very different. Search is how we learn about how the world really works. It’s how we separate reality from lies, how we put our lives and our countries in context with the entire Earth that we all must share. The censorship of search is a true Orwellian terror, since it helps not only to hide accurate information, but by extension promotes the dissemination of false information as well.

It’s bad enough that the European Union forces Google (via the “Right To Be Forgotten”) to remove valid and accurate search results pointing to information that some Europeans find to be personally inconvenient. 

But if reports are correct that Google plans to voluntarily ally itself with Chinese dictators and their wholesale censorship of entire vast categories of crucial information — inevitably in the furtherance of those leaders’ continuing tyrannies — then Google will not only have gone directly and catastrophically against its most fundamental purposes and ideals, but will have set the stage for similar demands for vast Google-enabled mass censorship from other countries around the world.

I’m sorry, but that’s just not the Google that I know and respect.


Explaining YouTube’s VERY Cool New Aspect Ratio Changes

YouTube very quietly made a very cool and rather major improvement in their desktop video players today. I noticed it immediately this morning and now have confirmation both from testing with my own YT videos (for which I know all the native metadata) and via informal statements from Google.

YouTube is now adjusting the YT player size to match videos’ native aspect ratios. This is a big deal, and very much welcome.

Despite the fact that I’m publicly critical from time to time regarding various elements of YouTube content-related policies, this does not detract from the fact that I’m one of YT’s biggest fans. I spend a lot of time in YT, and I consider it to be a news, information, education, and entertainment wonder of the world. Its scale is staggeringly large — so we can’t reasonably expect perfection — and frankly I don’t even want to think about life without YT.

Excuse me while I put on my video engineering hat for a moment …

One of the more complicated facets of video — played out continuously on YouTube — is aspect ratios. A modern high definition TV (HDTV) video is normally displayed at a 16 (horizontal) by 9 (vertical) aspect ratio — significantly wider than it is high. The older standard definition TV ratio is 4:3 — just a bit wider than it is high, and visually very close to the traditional 35mm film aspect ratio of 3:2.

When displaying video, the typical techniques to display different aspect ratios on different fixed ratio display systems have been to either modify the actual contents of the visible video frames themselves, or to fit more of the original frames into the display area by reducing their overall relative sizes and providing “fillers” for any remaining areas of the display. 

The “modification of contents” technique usually has the worst results. Techniques such as “pan and scan” were traditionally used to show only portions of widescreen movie frames on standard 4:3 TVs, simply cutting off much of the action. Ugh. 

But eventually, especially as 4:3 television screens became larger in many homes, the much superior “letterboxing” technique came into play, displaying black bars on the top and bottom of the screen to permit all (or at least most) of widescreen film frames to be displayed on a 4:3 cathode ray tube. In the early days of this process, it was common to see squiggles and such in those bars — networks and local stations were concerned that viewers would assume that something was wrong with their televisions if empty black bars were present without some sort of explanations — and sometimes broadcasters would even provide such explanations at the start of the film — sometimes they still do even with HDTV! Very widescreen films shown on 16:9 displays today may still use letterboxing to permit viewing more of the frame that would otherwise exceed the 16:9 ratio.

When 16:9 HDTV arrived, the opposite of the standard definition TV problem appeared. Now to properly display a traditional 4:3 standard TV image, you needed to put black bars on the right and left side of the screen — “pillarboxing” is the name usually given to this technique, and it’s widely used on many broadcast, satellite, streaming, and other video channels. It is in fact by far the preferred technique to display 4:3 content on a fixed aspect ratio 16:9 physical display.
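The arithmetic behind both letterboxing and pillarboxing is the same: scale the source to the largest size that still fits inside the display without distortion, then pad the leftover dimension with black bars. A small sketch of that computation (a hypothetical helper, not any actual player’s code):

```python
def fit_with_bars(src_w: int, src_h: int, disp_w: int, disp_h: int):
    """Fit a source frame into a display, preserving its aspect ratio.

    Returns (video_w, video_h, bar_style, bar_thickness_per_side).
    """
    # Largest uniform scale factor at which the source still fits.
    scale = min(disp_w / src_w, disp_h / src_h)
    vid_w, vid_h = round(src_w * scale), round(src_h * scale)
    if vid_w < disp_w:   # leftover width -> bars on left/right: pillarbox
        return vid_w, vid_h, "pillarbox", (disp_w - vid_w) // 2
    if vid_h < disp_h:   # leftover height -> bars on top/bottom: letterbox
        return vid_w, vid_h, "letterbox", (disp_h - vid_h) // 2
    return vid_w, vid_h, "none", 0

# 4:3 video on a 16:9 1920x1080 display: pillarboxed to 1440x1080,
# with 240-pixel black bars on each side.
print(fit_with_bars(640, 480, 1920, 1080))  # (1440, 1080, 'pillarbox', 240)

# 16:9 video on a 4:3 640x480 display: letterboxed to 640x360,
# with 60-pixel black bars top and bottom.
print(fit_with_bars(1280, 720, 640, 480))   # (640, 360, 'letterbox', 60)
```

Note how much screen area those bars consume — in the pillarbox case above, a quarter of the display is black — which is exactly the real estate YouTube’s new adaptive player reclaims.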

After YouTube switched from a 4:3 video player to their standard 16:9 player years ago, you started seeing some YT uploaders zooming in on 4:3 videos to make them “fake” 16:9 videos before uploading, to fill the 16:9 player — resulting in grainy and noisy images, with significant portions of the original video chopped off. The same thing is done by some TV broadcasters and other video channels, documentary creators, and others who have this uncontrollable urge to fill the entire screen, no matter what! These drive me nuts.

Up until today, YouTube handled the display of native 4:3 videos by using the pillarbox technique within their 16:9 player. Completely reasonable, but of necessity wasting significant areas of the screen taken up by the black pillarbox bars.

This all changed this morning. The YouTube player now adapts to the native aspect ratio of the video being played, instead of always being a fixed 16:9. This means for example that a native 4:3 video now displays in a 4:3 player, with no pillarboxing required — and with significant viewable screen real estate recovered to actually display video rather than pillarboxing bars. In effect, these videos today are displaying similarly to how they would have in the early days of YT — fully filling the video display area — as had been the case before YT switched to fixed aspect ratio 16:9 players. Excellent! 

The same goes for other aspect ratios — in particular 16:9, which is used by most recent videos — so 16:9 videos will continue to display in a 16:9 player.

One aspect (no pun intended) to keep in mind. The player will apparently adapt to the native video aspect ratio as uploaded. So if a video was uploaded as 4:3, you’ll get a 4:3 player. But if (for example) a 4:3 video was already converted to 16:9 by pillarboxing before being uploaded, YouTube’s encoding pipeline is going to consider this to be a native 16:9 video and display it in a 16:9 player with the black bar pillars intact. Bottom line: If you have 4:3 material to upload, don’t change its aspect ratio, just upload it as native 4:3, pretty please!

Since I watch a fair bit of older videos on YouTube that tend to be in 4:3 aspect ratio, the changes YT made today are great for me. But having the YT player adjust to various native aspect ratios is going to be super for all YT users in the long run. It may take a little time for you to adapt to seeing the player size and shape vary from video to video, but you’ll get used to it. And trust me, you’ll come to love it.

Great work by YouTube. My thanks to the entire YouTube team!


Uber and Lyft Must Immediately Ban “Peeping Tom” Drivers

A news story has revealed that an Uber driver has been (usually surreptitiously) live streaming video and most audio of his passengers without their knowledge or explicit consent, exposing them to ridicule and potentially much worse by his streaming audience. In response, both Uber and Lyft have reportedly argued simply that the practice is legal in that particular (one-party recording permission) state.

That kind of response is of course absolutely unacceptable and beneath contempt, demonstrating the utter lack of ethics of these ride sharing firms. They argue that this doesn’t even violate any of their driver terms.

That needs to change — IMMEDIATELY!

Ride sharing firms must ban their drivers from such behavior, and violators should be immediately excised from the platform.

That a vile behavior is legal does not mean that these firms — entrusted with the lives of millions of passengers — must permit drivers to engage in such activities. In fact, these firms already lay out specific “don’t do this!” rules that can prohibit a variety of legal activities by drivers — for the protection of their riders.

If these firms do not act immediately to end such practices by their drivers, they risk not only massive loss of rider trust, but are just begging for this kind of activity to eventually result in a horrific incident involving their passengers — perhaps physical abuse because identity information often leaks on these streams — at the hands of unscrupulous members of the live stream viewing public.

If these firms refuse to ban these practices, their rights to operate in any states where such behavior continues to occur must be withdrawn, and if necessary, legislation passed to force these firms to do the right thing and protect their riders from such abuses.


EU’s Latest Massive Fine Against Google Will Hurt Europeans Most of All

Have you ever heard anyone seriously say: “Man, there just aren’t enough shopping choices on the Net!” or, “I’d really like my smartphone to be more complicated and less secure!” or … well, you get the idea — nobody actually means stuff like that.

But sadly, this means nothing to the politicians and bureaucrats of the European Union, who are constantly trying to enrich themselves with massive fines against firms like Google, while simultaneously making Internet life ever more “mommy state” government micromanaged for Europeans.

The latest giant fine (which Google quite rightly will appeal) announced by the EU extortion machine is five billion dollars, for claimed offenses by Google related to the Android operating system, all actually aspects of Android that are designed to help users and to provide a secure and thriving foundation for a wide range of applications and user choice.

In fact, in the final analysis, the changes in Android that the EU is demanding would result in much more complicated phones, less secure phones, and ultimately LESS choice for users resulting from alterations that will make life much more difficult (and expensive!) for application developers and users alike.

Why do the EU politicos keep behaving as if they want to destroy the Internet?

It’s because in significant ways that’s exactly what they have in mind. They don’t like an Internet that the government doesn’t tightly control, where they don’t dictate all aspects of how consumers interact with the Net and what these users are permitted to see and do. Even now, they’re still pushing horrific legislation to create a Chinese-style firewall to vastly limit what kind of content Europeans can upload to the Net, and to destroy businesses that depend on free inbound linking. And these hypocritical EU officials are desperately trying to prop up failing businesses whose business models are stuck in the 20th (or even 19th) centuries, while passing all the costs on to ordinary Europeans — who by and large seem to be quite happy with how the Internet is already working.

And of course, there’s the money. Need more money? Hell, the EU always needs more money. Gin up another set of fake violations against Google, then show up in Mountain View with sticky fingers extended for another multi-billion dollar check!

The EU has become a bigger threat to the Internet than even China or Russia, neither of which has attempted (so far) to extend globally their highly restrictive views of Internet freedoms. 

And the saddest part is that these kinds of abuses by the EU are hurting EU consumers most of all. Over time, fewer and fewer Internet firms will even want to deal with this kind of EU, and Europeans will find their actual choices more and more limited and government controlled as a result.

That’s a terrible shame for Europe — and for the entire world.


Network Solutions and Cloudflare Supporting Horrific Racist Site

Today a concerned reader brought to my attention a horrifically racist site — apparently operating for a decade and currently registered via Network Solutions (NSI) — with DNS and other services through Cloudflare — called “n*” (and “n*”) — I have purposely not linked to them here, and you know why I have the asterisks there.

To call the site — complete with a discussion forum — a massive pile of horrific, dangerous, racist garbage of the worst kind would be treating it far too gently.

We already know that Cloudflare reportedly has an ethical sense that makes diseased maggots and cockroaches seem warm and friendly by comparison — Cloudflare apparently touts a content acceptance policy that Dr. Josef Mengele would have likely considered too extreme in its acceptance of monstrously evil content.

But Network Solutions claims to have higher standards (though it wouldn’t take much effort to beat Cloudflare in this regard) and I’m attempting to contact NSI officials now to determine if such racist sites are within their official policy standards. 

Oh, and by the way, guess what happens whenever you call the official Network Solutions listed phone number that they designate for “reporting abuse” — you get a recording (that doesn’t take a message) that says “they’re having difficulties — try again at a later time.”

Why are we not at all surprised?


Chrome Is Hiding URL Details — and It’s Confusing People Already!

UPDATE (September 17, 2018): Google Backs Off on Unwise URL Hiding Scheme, but Only Temporarily

UPDATE (September 7, 2018): Here’s How to Disable Google Chrome’s Confusing New URL Hiding Scheme

– – –

Here we go again. I’m already getting upset queries from confused Chrome users about this one. 

In Google’s continuing efforts to “dumb down” the Internet, their Chrome browser (beta version, currently) is now hiding what it considers to be “irrelevant” parts of site address URLs.

This means for example that if you enter “” and get redirected to “” as is my policy (and a typical type of policy at a vast number of sites), Chrome will only display “” as the current URL, confusing anyone and everyone who might need to quickly note the actual full address URL. Also removed are the http: and https: prefixes, leaving even fewer indications of when sites are secure — exactly the WRONG approach these days when users need more help in these respects, not less!
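To make concrete which pieces are at stake, here’s a small illustration (the hostname and path are made up) using Python’s standard URL parser. The scheme and the “www.” portion of the hostname are precisely the components that a browser display like Chrome’s new one trims away:

```python
from urllib.parse import urlsplit

# A full URL, as a site might actually serve it after redirects.
url = "https://www.example.com/path/page.html"
parts = urlsplit(url)

print(parts.scheme)    # 'https' -- the security-relevant prefix being hidden
print(parts.hostname)  # 'www.example.com' -- a trimmed display drops the 'www.'
print(parts.path)      # '/path/page.html'
```

Every one of those fields can matter: the scheme tells you whether the connection is secure, and the exact hostname (with or without “www.”) can be a different server entirely — which is why hiding them by default removes real signals from real users.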

And of course, if you’re manually writing down a URL based on the shortened version, there’s no guarantee that it will actually work if entered directly back into Chrome without passing through possible site redirect sequences.

But wait! You said that you want additional confusion? By golly you’ve got it! If you click up in the address bar and copy the Chrome shortened URL, it will appear that you’re copying the short version, but you’re actually copying the invisible original version with the full site URL — including the full address and the http: or https: prefixes. If you double click up there, Chrome visibly replaces its mangled version with the full version.

I can just imagine how this “feature” pushed through Google — “Hell, our users don’t really need to see all that URL detail stuff, so we’ll just hide it all from them! They’ll never know the difference!”

But the truth is that from the standpoint of everyday users who glance quickly at addresses and greatly benefit from multiple signals to help them establish that they’ve reached the exact and correct sites in a secure manner, the new Chrome URL mangling feature is an abomination, and I’ll bet you dollars to donuts that crooked site operators will find some ways to leverage this change for their own benefits as well.

As I said, this is currently in Chrome Beta, which means it’s likely to “graduate” to Chrome Stable — the one that most people run — sometime fairly soon.

Google is a great company, but their ability to churn out unforced errors like this — that especially disadvantage busy, non-techie users — remains a particularly bizarre aspect of their culture.


Third Parties Reading Your Gmail? Yeah, If You’ve Asked Them To!

Looks like the “Wall Street Journal” — pretty reliably anti-Google most of the time — is at it again. My inbox is flooded with messages from Google users concerned about the WSJ’s new article (being widely quoted across the Net) “exposing” the fact that third parties may have access to your Gmail.

Ooooh, scary! The horror! Well, actually not!

This one’s basically a nothingburger.

The breathless reporting on this topic is the “revelation” that if you’ve signed up with third-party apps and given them permission to access your Gmail, they — well, you know — have access to your Gmail! 

C’mon boys and girls, this isn’t rocket science. If you hire a secretary to go through your mail and list the important stuff for ya’, they’re going to be reading your mail. The same goes for these third-party apps that provide various value-added Gmail services to notify you about this, that, or the other. They have to read your Gmail to do what you want them to do! If you don’t want them reading your email, don’t sign up for them and don’t give them permission to access your Google account and Gmail! 

Part of the feigned outrage in this saga is the concern that in some cases actual human beings at these third-party firms may have been reading your email rather than only machines. Well golly, if they didn’t explicitly say that humans wouldn’t read them — remember that secretary? — why would one make such an assumption?

In fact, while it’s typical for the vast majority of such third-party systems to be fully automated, it wouldn’t be considered unusual for humans to read some emails for training purposes and/or to deal with exception conditions that the algorithms couldn’t handle. 

Seriously, if you’re going to sign up for third-party services like these — even though Google does carefully vet them — you should familiarize yourself with their Terms of Service if you’re going to be concerned about these kinds of issues.

Personally, I don’t give any third parties access to my Gmail. This simplifies my Gmail life considerably. Google has excellent internal controls on user data, and I fully trust Google to handle my data with care. Q.E.D.

And by the way, if you’ve lost track of third-party systems to which you may have granted access to your Gmail or other aspects of your Google account, there’s a simple way to check (and revoke access as desired) at the Google link:

But really, if you don’t want third parties reading your Gmail, just don’t sign up with those third parties in the first place!

Be seeing you.


Why Google Needs a “User Advocate at Large”

For many years I’ve been promoting the concept of an “ombudsman” to help act as an interface between Google and its user community. I won’t even bother listing the multitude of my related links here; they’re easy enough to find by — yeah, that’s right — Googling for them.

The idea has been to find a way for users — Google’s customers who are increasingly dependent on the firm’s services for an array of purposes (irrespective of whether or not they are “paying” users) — to have a genuine “seat at the table” when it comes to Google-related issues that affect them.

My ombudsmen concepts have consistently hit a figurative brick wall at the Googleplex. A concave outline of my skull top is probably nearly visible on the side of Building 43 by now.

Who speaks for Google’s ordinary users? That’s the perennial question as we approach Google’s 20th birthday, almost exactly two months from now.

Google’s communications division speaks mainly to the press. Google developer and design advocates help to make sure that relevant developer-related parties are heard by Google’s engineering teams. 

But beyond these specific scopes, there really aren’t user advocates per se at Google. In fact, a relevant Google search yields entries for Google design and developer advocates, and for user advocates at other firms. But there’s no obvious evidence of dedicated user advocate roles at Google itself.

Words matter. Precision of word choices matters. And in thinking about this recently, I’ve realized that my traditional use of the term “ombudsman” to address these concerns has been less than optimal.

Part of the reason for this is that the concept of “ombudsman” (which can be a male or female role, of course) carries with it a great deal of baggage. I realized this all along and attempted to explain that such roles were subject to definition within any given firm or other organization. 

But ombudsman is a rather formal term and is frequently associated with a person or persons who mainly deal with escalated consumer complaints, and so the term tends to carry an adversarial implication of sorts. The word really does not encompass the broader meanings of advocacy — and other associated communications between firms and users — that I’ve been thinking about over the years — but that I’ve not been adequately describing. I plead guilty.

“User advocacy” seems like a much more accurate term for the concepts that I’ve been discussing for so long regarding Google and its everyday users.

Advocacy, not contentiousness. Participation, not confrontation. 

While it would certainly be possible to have user advocates focused on specific Google products and services, the multidisciplinary nature of Google suggests that an “at large” user advocate, or a group of such advocates working to foster user communications across a wide range of Google’s teams, might be more advantageous all around.

Google and Googlers create excellent services and products. But communication with users continues to be Google’s own Achilles’ heel, with many Google-related controversies based much more on misunderstandings than on anything else.

A genuine devotion to user advocacy, fostered by Googlers dedicated to this important task, could be great for Google’s users and for Google itself.


Google’s New Security Warning Is Terrifying Many Users

I’ve been getting email from people all over the world who are suddenly scared of accessing particular websites that they’ve routinely used. It was quickly obvious what is going on — the first clue was that they were all users running Chrome Beta. 

The problem: Google’s new “Not Secure” warning on sites not using https security is terrifying many people. Users are incorrectly (but understandably) interpreting “Not Secure” to mean “Dangerous and Hacked! Close this page now!”

And this is squarely Google’s fault.

Years ago, I predicted this outcome. 

Though I’ve long promoted the migration to secure Web connections via https, I’ve also repeatedly warned that there are vast numbers of widely referenced sites that provide enormous amounts of important information to users, often from archives and systems that have been running for many, many years — sometimes since before the beginnings of Google 20 years ago.

The vast majority of these sites don’t require login. They don’t request information from users. They are utterly read-only.

While non-encrypted connections to them are theoretically subject to man-in-the-middle attacks, the real-world likelihood of their being subjected to such attacks is extraordinarily low.

Another common factor with many of these sites is that they are operating on a shoestring, often on donated resources, without the expertise, money, or time to convert to https. Many of these systems are running very old code, conversion of which to support https would be a major effort — even if someone were available to do the work.

Despite ongoing efforts by “Let’s Encrypt” and others to provide tools to help automate the transition to https, the reality is that it’s still usually a massive amount of work requiring serious expertise, for all but the smallest and simplest of sites — and even that’s for sites running relatively current code.
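To be fair, the easy case really is easy now. A sketch of the typical Let’s Encrypt flow via the certbot client on a current Debian-style host running nginx (the package names and the “example.org” domain below are illustrative assumptions, not details from this post):

```shell
# Install certbot and its nginx integration (Debian/Ubuntu packaging assumed):
sudo apt-get install certbot python3-certbot-nginx

# Request a certificate; certbot proves control of the domain, installs the
# certificate, and rewrites the nginx configuration to serve https:
sudo certbot --nginx -d example.org

# Let's Encrypt certificates expire after 90 days, so confirm that
# automatic renewal is working:
sudo certbot renew --dry-run
```

But that simple path presumes a modern, mainstream server stack. The hard cases are precisely the decades-old archives described above, where the server software itself may predate TLS support entirely and nobody is available to modify it.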

Let’s be utterly clear about this. “Not Secure” does not mean that a site is actually hacked or dangerous in any way, nor that its data has been tampered with in transit. 

But to many users — not all of whom are well versed in the fine points of Internet security, eh? — that kind of warning, displayed in that manner, is a guarantee of more unnecessary confusion and angst. Many of those users already feel disadvantaged by other aspects of the Web, such as Google’s continuing accessibility failures in readability and other user interface aspects, which disproportionately affect these growing classes of users.

With Google about to promote their “Not Secure” warning from Chrome Beta to the standard Chrome Stable channel that most people run, these problems are about to grow by orders of magnitude.

Through their specific interface design decisions in this regard, Google is imposing an uncompensated cost on many sites with extremely limited resources, a cost that could effectively kill them.

Might doesn’t always make right, and Google needs to rethink the details of this particular approach.


FedEx to Anyone With Less than Perfect Vision: GO TO HELL!

It appears that shipping giant FedEx has joined the “Google User Interface Club” and introduced a new package tracking user interface designed to tell anyone with less than very young, very excellent vision that they can just go take a giant leap and are not desirable as customers — either to send or receive packages via FedEx.

As you can see in the screenshot at the end of this post (if you can actually see FedEx’s incredibly low contrast fonts that is — trust me, they are actually there!), FedEx has transitioned from their traditional easy to read interface to the new “Google Standard” interface — low contrast fonts that are literally impossible for many people to read, and extremely difficult to read for many others without Superman-grade vision.

I’ve written about these kinds of accessibility failures many times in the past — I suspect that some of them may rise to the level of violations of the Americans with Disabilities Act (ADA). They are designed to look pretty — and technically may even meet some minimum visibility standards if you have a nice, new, perfectly adjusted screen. But it doesn’t take a computer scientist to realize that their real world readability is a sick joke, a kick in the teeth to anyone with aging eyes or otherwise imperfect vision.

The U.S. Postal Service recently moved their tracking interface in this same direction, and while theirs is bad, it’s not quite as much of an abomination as this new FedEx monstrosity.

Google pushed this trend along, with many of their relatively recent interfaces representing design nightmares in terms of readability and usability for users who are apparently not in Google’s “we really care about you” target demographics. Google’s recent refresh of Gmail has been a notable and welcome exception to this trend. I’m hoping that they will continue to move in a positive direction with other upcoming interfaces, though frankly I’m not holding my breath quite yet.

In the meantime, it’s FedEx who deserves an immediate kick in the teeth. Shame on you, FedEx. For shame.


How the Pentagon Is Trying to Shame Google and Googlers

I hadn’t been planning to say much more right now about Google and “Project Maven” — the Defense Department project in which Google will wisely not be renewing participation when the existing contract ends next year (

But as usual, the Pentagon just doesn’t know when to leave well enough alone, and I am very angry today to see that a Pentagon-affiliated official is attempting to “death shame” Google and its employees regarding their appropriate decision not to renew with Maven. 

This particularly upsets me because I’ve been to this rodeo before. Over the years I’ve turned down potential work — that I really could have used! — because of its direct relationship to actual battlefield operations. And in various of those cases, there were attempts made to “death shame” me as well — to tell me that if I refused to participate in those aspects of the military-industrial complex, I would be morally complicit for any potential U.S. forces deaths that might theoretically occur due to lack of my supposed expertise.

This is a technique of the military that is as old as civilization. Various technologists reaching back to the days of Mesopotamia — and likely earlier — have been asked (or required, often under threat of death) to provide their services for ongoing military operations.

What makes this so difficult is that typically it’s impossible to clearly separate defensive from offensive projects. As I’ve previously noted, all too often what appears to be defensive work morphs into attack systems, and in the hands of some leaders (especially lying, sociopathic ones) can easily end up extinguishing vast numbers of innocent lives.

This was explicitly acknowledged in the infuriating words earlier today by a former top U.S. Defense Department official — former Deputy Defense Secretary Robert O. Work, who initiated Project Maven:

“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves.”

He also suggested that Google was being hypocritical, because in his view their AI research cooperation with China would benefit China’s military.

His statements are textbook Pentagon doublespeak, and his assertions are not only fundamentally disingenuous, but are blatant attempts at false equivalences.

Particularly galling is his “might wind up with us taking a shot” reference, as if to say that offensive operations were merely a minor footnote in the battle plan. But when you’re dealing with operational battle data, there are no minor footnotes in this context — that data analysis will be used for offensive operations — you can count on it.

To be clear, the righteous defense of the USA is an admirable pursuit. But if one chooses to go all in with the military-industrial complex to that end, it’s at the very least a decision to be made with “eyes wide open” — not with false assumptions that your work will be purely defensive.

And for those of us who refuse to work on military projects that will ultimately be used offensively — keeping in mind the horrific missteps of presidents far less twisted and bizarre than the one currently in the Oval Office — there is absolutely no valid shame associated with that ethical decision.

There’s a critical distinction to be made between basic research and operational battle projects. It’s much the same distinction as my willing work on the DoD ARPANET project decades ago — that led directly to the Internet that you’re using right now — vs. a range of ongoing, specifically battle-oriented projects with which I refused to become associated.

This is also what gives the lie to Robert Work’s attempt to discredit Google’s AI work with China. Open AI research is like Open Source software itself — usable for good or evil, but open to all and light years away from projects primarily with battle intents.

Google and other firms — including their managements and employees — will of course need to find their own paths forward in terms of what sorts of work and contracts they are willing to pursue that may involve the Department of Defense or other military-associated organizations. As we’ve seen with ARPANET, some basic research work funded by the military can indeed yield immense positive benefits to the country and the world.

Personally, I find the concept of a dividing line between such basic research — as opposed to clearly battle-oriented projects — to be a useful guide toward determining which sorts of projects meet my own ethical standards — and which ones do not. As the saying goes, your mileage may vary.

But in any case, we should all utterly ignore Robert Work’s repulsive attempt to shame Googlers and Google itself — and relegate his way of thinking to the dustbin of history where it truly belongs.


How the Dominant ISPs Are Trying to Scare People Into Opposing California Net Neutrality

Are there any sordid depths to which the crooked, lying, dominant ISPs won’t go to try to terrify people into opposing Net Neutrality in California? Nope, let’s face it, these firms spout outright lies as if they were Donald Trump. Yep, seriously evil, as this robocall voicemail currently in circulation so clearly demonstrates! –


A Modest Proposal: Identifying Europeans on the Internet for Their Protection

With European politicians and regulators continuing to churn out proposed regulations to protect their citizens from the evils of the Internet, via “The Right To Be Forgotten” — and the currently under consideration Article 11 “link tax” and Article 13 content filtering censorship proposals — it is becoming more important than ever that Internet sites around the world be able to identify European users so that they may be afforded “appropriate” treatment at those sites, including blocking from all services as necessary.

Already, some Europeans are suggesting that they will attempt to evade the restrictions that have been implemented or proposed by their beneficent and magnificent leaders. The world must band together to prevent European users from pursuing such a tragic course of action.

Obviously, all VPN usage by Europeans that attempts to obscure the European geographic locations of their source IP addresses must be banned. In fact, it would be even safer for Europeans if all usage of VPNs by Europeans were prohibited by their governments, except under extraordinary circumstances requiring government licenses and monitoring for inappropriate usage.

All web browsers used by Europeans should be required to send a special “protected European resident” flag to server sites, so that those sites may determine the appropriate blocking or other disposition of those browser requests. Use of unapproved browsers or tampering with browsers to remove this protection flag would of course be a criminal act.

We must also solve the problem of Europeans traveling outside of Europe, where they might be tempted to use public Internet access systems that do not meet the high standards of protection required by European regulations.

One possible solution to this dilemma would be to require the permanent implantation of RFID identification capsules in all Europeans who travel beyond the protected confines of Europe. Don’t worry — these need not individually identify any given person, they need only identify them as European. Scanning equipment at public computers around the planet could detect these implants and automatically apply appropriate European protection rules. Europeans would be free to travel the world with no fears of accidentally using systems that did not apply their government’s protective regulations!

This modest proposal of course only scratches the surface of the sorts of solutions that will be needed to help assure that EU citizens fully and completely abide by their governments’ benevolent actions and requirements.

But the EU and its residents can feel confident that the rest of the world’s Internet will do its part to help keep Europeans safe, secure, and law-abiding at all times!


Google’s New AI Principles Are a Model for the World

In the wake of Google’s announcement that they will not be renewing their controversial “Project Maven” military AI contract when it expires next year (“Google — and the Defense Department’s Disturbing ‘Maven’ A.I. Project Presentation Document” –, Google has now published a post describing their policy positions regarding AI at Google going forward: “Artificial Intelligence at Google: Our Principles” (

Since I was on balance critical of Google’s participation in Project Maven, but am very supportive of AI overall (“How AI Could Save Us All” –, I’ve received a bunch of queries from readers asking how I feel about Google’s newly announced AI principles statement.

“Excellent” is my single word summary, especially in terms of the principles being balanced — and above all — realistic.

AI will be a critical tool going forward, both in terms of humanity and the global ecosystem itself. And like any tool — reaching all the way back to a chunk of rock on the ground in a prehistoric cave — AI can be used for good purposes, evil purposes, and in a range of “gray area” scenarios that are more difficult to cleanly categorize one way or the other.

It’s this last set of concerns, especially AI applications with multiple uses, that I’m particularly glad to see Google addressing specifically in their principles post.

For those of us who aren’t psychopaths or sociopaths, most fundamental characteristics of good and evil are usually fairly obvious. But as one grows older, it becomes apparent that the real world is not typically made up of black and white situations where one or another set of these characteristics exist in isolation — much more often we’re dealing with a complicated kaleidoscope of interrelating issues.

So — to address one point that I’ve been most asked about over the last couple of days regarding Google’s AI statement — it is entirely appropriate that Google explicitly notes that they will not be abandoning all aspects of government and military AI work, so long as that work is not likely to cause overall harm. 

In a “perfect” world we might not need the military — hell, we might not even need governments. But this is not a perfect world, and it’s one thing to use AI as a means to kill ever more people more efficiently, and something else entirely to use AI defensively to help protect against the genuine evils that still pervade this planet, as Google says it will do.

AI is still in its relative infancy, and attempts to accurately predict its development (beyond the very short term) are likely doomed to failure. AI principles such as Google’s will always by necessity be works in progress, and Google in fact explicitly acknowledges this fact.

But ethical firms and ethical governments around the world could today do themselves, their employees, and their citizens proud by accepting and living by AI principles such as those that Google has now announced.


Why We May Have to Cut Europe Off from the Internet

UPDATE (March 28, 2019): Early this week, the EU passed this horrific legislation into law. How the individual member countries of the EU will implement it is anyone’s guess — utter chaos is certain, and drastic measures by the rest of the world to protect their own Internet services and users from such EU madness will indeed likely be necessary.

UPDATE (July 5, 2018): In a rare move, the EU Parliament voted today to block this current copyright legislation, which opens it to amendments by the entire membership of Parliament, leading to a new vote this coming September. So the war against this horrific legislation is by no means over, but this is an important battle won for now.

– – –

It’s no joke. It’s not hyperbole. If the European Union continues its current course, the rest of the world may well have to consider how to effectively “cut off” Europe from the rest of the Internet — to create an “Island Europe” in an Internet communications context. 

For those of us involved with the Net since its early origins, the specter of network fragmentation has long been an outcome that we’ve sorely hoped to avoid. But continuing EU actions could create an environment where mechanisms to tightly limit Europe’s interactions with the rest of the global Internet may be necessary — not imposed with pleasure, not with vindictiveness, but for the protection of free speech around the rest of the planet.

The EU will later this month be voting on a nightmarish copyright control scheme (“Article 13”) that would impose requirements for real-time “copyright filtering” of virtually all content uploaded to major and many minor Internet sites, with no protections against trolling or errors, no effective avenues for appeal, and the certainty of inappropriately blocking vast quantities of public domain and other materials. Please see:

“On June 20, an EU committee will vote on an apocalyptically stupid, internet-destroying copyright proposal that’ll censor everything from Tinder profiles to Wikipedia” (

Even if this specific horrific proposal is voted down, it’s important to review how we came to this juncture, as the EU has increasingly accelerated its program to become the Internet’s global censorship czar, in ways that even countries like China and Russia haven’t attempted to date.

As far back as 2012 and earlier, in “The ‘Right to Be Forgotten’: A Threat We Dare Not Forget” (, I warned of the insidious nature of content censorship schemes flowing forth from Europe, and I’ve consistently warned that — like the proverbial camel’s nose under the tent — Europe would never be satisfied with any concessions offered by Internet firms. 

Time has borne out my predictions. In ensuing years, the EU has expanded its demands until now it considers itself in key respects to be the global arbiter of what should or should not be seen by Internet users around the world. 

Like civilization’s other information control tyrants, the EU has found that a taste of censorship power inevitably leads to utter censorship gluttony. The sense that “we know best what those stupid little people should be allowed to see” is as old as human history, long predating modern communications systems.

European citizens are of course free to elect whatever sorts of governments they choose. If that choice is for information control tyrants whose pleasure is to victimize their own citizens, so be it.

But if Europe continues to insist that its tyranny of censorship and information control must be honored by the rest of the world, then the rest of the world will be reluctantly forced to treat Europe as an Internet pariah, and use all possible technical means to isolate Europe in manners that best protect everyone else’s freedom of Internet speech. 


When Google Blames Users for Privacy Problems

One of my favorite user interface (UI) design adages is pretty much simplicity itself:

When you blame the users, you’ve already lost the argument.

I’m reminded of this by Google’s public reactions to a recent study revealing that almost a third of nearly 10,000 sampled Google G Suite commercial customers were unwittingly exposing sensitive corporate and/or customer data to the public Internet without access protections: “Widespread Google Groups Misconfiguration Exposes Sensitive Information” (

Without getting into the technical details here, the underlying issues relate to the multiplicity of settings that control public access to Google Groups and their associated mailing lists. While Google defaults these to their most secure settings, the sheer quantity of misconfigured, potentially information-leaking sites is empirical proof that a very significant number of G Suite users and administrators do not adequately understand these settings, with resulting privacy-negative impacts.

Google’s response — in essence — has been “RTFM” (Read The F‑‑‑‑‑‑ Manual): The settings are there, if you’re not using them correctly, that’s your problem, not ours!

And while Google has posted some additional related info (e.g. on their G Suite Updates Blog), those explanations mostly stand to emphasize the relative complexity of the interface, and no changes that I’m aware of have been made to the interface in response to these concerns.

The situation is a bit reminiscent of auto manufacturers who resisted redesigning key aspects of their vehicles, even as it became ever more obvious that significant numbers of drivers were having accidents due to existing design elements.

As far as I’m concerned, the scope of the reported G Suite privacy leakage problems indicates nothing less than a privacy design failure in this instance.

Rather than trying to make excuses for an existing user interface that is clearly failing significant numbers of customers (and with G Suite, we’re talking about paying customers!), Google needs to take an immediate and hard look at the specific design aspects that are enabling these misconfiguration-based confidential information exposures.

A practical fix might not even involve major changes to the UI, and might be adequately served by mechanisms as simple as more in-your-face “pop-up” warnings to users and administrators, appearing in conjunction with additional confirmation dialogues when associated privacy-sensitive settings are being altered.

But clearly, explanatory blog posts aren’t going to cut the mustard for these kinds of problems, and I urge Google’s world-class privacy team to effectively address this situation as soon as possible.


Hate Speech — and Google’s Public Relations “Death Wish”

I’ve been writing publicly for a long time. Sometimes it feels like my earliest articles and posts were composed in runic alphabets inscribed on stone tablets. I’ve always had a rule that I’ve tried to abide by: “Never write when you’re angry!”

Today I’ll violate that self-imposed prohibition. I’m in a vile mood, and I’m here at the keyboard anyway.

Those of you who have followed my writings (and have still somehow managed to maintain a semblance of sanity) know that I frequently deal with Google-related issues. I started doing that shortly after Google first appeared on the Net, and here we are now almost 20 years later. 

I was pretty tough on Google back then. I was unhappy with their privacy practices at the time and some other related issues, and I was not reluctant to present my feelings about such matters. Similarly, as Google evolved over the years into a world-class example of privacy and security best practices and has done so much other good work, I’ve enthusiastically pointed out the efforts of the Google teams involved. And when I feel that Google has screwed up regarding something these days, I point that out directly as well.

My policy of always trying to honestly write about issues using a “call ’em as I see ’em” philosophy has left a lot of partisans unhappy on both sides of the political spectrum, who view any variance from “the party line” on any given matter to be both dangerous and intolerable.

This has been a reality to one extent or another since the earliest ARPANET days when I first began publicly posting, but has in recent years blown up into an orders of magnitude more vicious state of affairs.

For example, late last week I spoke about Google on a national radio venue where I very frequently guest, and pushed back against the false claims of some national GOP politicians, who were again parroting the Big Lie that Google purposely suppressed and undermined conservative viewpoints (the trigger this time was a search results Knowledge Panel error due to a defaced Wikipedia source page).

I’m usually happy to do this — I get paid nuthin’ for these appearances — I value the opportunity to speak some truth before these very large audiences that all too often are trapped in propagandistic, anti-technology filter bubbles where outright lies about firms like Google are common currency.

It’s gratifying to so frequently get emails the next day that say variations of “Thanks for that — nobody ever explained it to me that way before!”

But over time, and especially since the 2016 elections, the worst aspects of our toxic political environment have been contaminating more and more of these discussions, to the extent that my on-air comments supporting Google last week — perhaps because Donald Trump, Jr. was involved — have triggered a hate speech campaign that is rather sickening to behold. 

This has happened before — and I have a pretty thick skin.

Yet this time it feels different. I find myself wondering why the blazes I keep sticking my neck out this way. This isn’t my job. I don’t get paid for anything I write or say these days — I’m long term unemployed and try to get by with whatever sporadic and limited consulting I can dredge up from time to time.

More to the point, one wonders — especially with so much at stake — why Google isn’t taking a more proactive stance to protect the company, their employees, and the global community that depends on them — from the ongoing torrent of politically-motivated lies and attacks that are clearly designed to set the stage for broad censorship and government micromanagement of data for political purposes! Why doesn’t Google have employees out there doing what I’m doing? Why does Google continue to create a vacuum through their silence, a vacuum that haters fill with outright lies that most onlookers have no simple way to differentiate from the truth?

Of course we already know part of the answer. Google is famously terrified of the so-called “Streisand Effect” — the fear that even rebutting lies will lend them credence and more attention.

20 or even arguably 10 years ago, this might on balance have been a reasonable philosophy for Google to practice as a cautious firm.

But today, I’m increasingly convinced that Google’s refusal to fight back against these lies in every possible legitimate way amounts to a kind of corporate “death wish” that is ultimately putting everything good that Google has built and stands for at terrible risk.

And if Google loses this war, we all lose.

Governments, politicians, and other entities (including not only the alt-right but also many elements of more conventional left-wing and right-wing politics) are using Google’s reticence to do battle as a green light for accelerating anti-Google efforts to push intolerant information-control agendas on national, transnational, and global scales.

If such forces succeed in decimating Google in the manners that are being postulated, the results could be catastrophic for free speech around the planet.

Knowing Googlers as I do, it seems certain that most of them see these dangers very clearly from the inside — yet the “death wish” in terms of how Google actually communicates with the outside world seems more encompassing than ever.

This makes me very sad — and as I said above very angry as well.

The deep, dank pit looms before us, and the razor sharp blade of the pendulum descends closer with every tick of the clock. Either we deal with these issues seriously and effectively now, or very soon we’ll find that our wonderful hoped-for tomorrows have turned into nothing but a putrid, rotting pile of wasted yesterdays.

And that’s the truth.


Google — and the Defense Department’s Disturbing “Maven” A.I. Project Presentation Document

UPDATE (June 1, 2018): Google has reportedly announced that it will not be renewing the military artificial intelligence contract discussed in this post after the contract expires next year, and will shortly be unveiling new ethical guidelines for the use of artificial intelligence systems. Thanks Google for doing the right thing for Google, Googlers, and the community at large.

– – –

A few months ago, in "The Ethics of Google and the Pentagon Drones," I discussed some of the complicated nuances that can come into play when firms like Google take on military contracts that are ostensibly for defensive purposes but could potentially lead to offensive uses of artificial intelligence technologies as well. This is not a simple matter. I was myself involved with Defense Department projects many years ago (including the Internet's ancestor, ARPANET, itself), as I explained in that post.

The focal point for concerns inside Google in this regard (triggering significant internal protests and some reported resignations) revolves around the U.S. Department of Defense (DoD) "Project Maven" — aimed at using A.I. technology for drone image analysis, among other possibilities.

Now a 27-page DoD presentation document regarding Maven is in circulation, and frankly it is disconcerting and disturbing to view. It is officially titled:

“Disruption in UAS: The Algorithmic Warfare Cross-Functional Team (Project Maven)”

And it sends a chill down my spine precisely because it seems to treat the topic rather matter-of-factly, almost lightheartedly.

There are photos of happy surfers. The project patch features smiling, waving cartoon robots who would fit right into an old episode of "The Jetsons" — with a Latin slogan that roughly translates to "Our job is to help." Obviously DoD learned a lesson from that old NSA mission patch depicting an enormous octopus with its tentacles draped around the Earth.

You can see the entire document here:

I stand by my analysis in my post referenced above regarding the complicated dynamics of such projects and their interplay with technology firms such as Google.

However, after viewing this entire Project Maven document, I have a gut feeling that long-term participation in this project will not turn out well for Google overall.

To be sure, there will likely be financial gains from the resources provided to DoD for this project — but at what cost in goodwill among employees inside the company, and in terms of potentially negative impacts on the firm's public image overall?

Certainly the argument could be made that it's better for a firm with an excellent ethical track record like Google to participate in such projects than to leave them solely to traditional defense contractors — some of whom have a long history of profiting from wars with little or no regard for ethical considerations.

But over the years I've seen good guys get trapped by that kind of logic, and once deeply immersed in the battlefield military-industrial complex, it can be difficult to ever extricate yourself, irrespective of good intentions.

Thankfully from my standpoint, this isn’t a decision that I have to make. But while I don’t claim to have a functional crystal ball, I’ve been around long enough that my gut impressions regarding situations like this have a pretty good track record.

I sincerely hope that Google can successfully find its way through this potential minefield. For a great company like Google with so many great employees, it would be a tragedy indeed if issues like those related to Project Maven did serious damage to Google and to relationships with Googlers going forward.