FedEx to Anyone With Less than Perfect Vision: GO TO HELL!

It appears that shipping giant FedEx has joined the “Google User Interface Club” and introduced a new package tracking user interface designed to tell anyone with less than very young, very excellent vision that they can just go take a giant leap and are not desirable as customers — either to send or receive packages via FedEx.

As you can see in the screenshot at the end of this post (if you can actually see FedEx’s incredibly low-contrast fonts, that is — trust me, they are actually there!), FedEx has transitioned from their traditional easy-to-read interface to the new “Google Standard” interface — low-contrast fonts that are literally impossible for many people to read, and extremely difficult to read for many others without Superman-grade vision.

I’ve written about these kinds of accessibility failures many times in the past — I suspect that some of them may rise to the level of violations of the Americans with Disabilities Act (ADA). They are designed to look pretty — and technically may even meet some minimum visibility standards if you have a nice, new, perfectly adjusted screen. But it doesn’t take a computer scientist to realize that their real world readability is a sick joke, a kick in the teeth to anyone with aging eyes or otherwise imperfect vision.
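As an aside, the “minimum visibility standards” in question are typically the WCAG 2.x contrast-ratio rules, which are easy to check mechanically. Here’s a minimal Python sketch of that calculation — the light gray value below is purely an illustrative low-contrast choice, not FedEx’s actual color:

```python
def _linearize(c8):
    """Linearize one sRGB channel (0-255) per the WCAG 2.x relative luminance formula."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (R, G, B) color, 0.0 (black) to 1.0 (white)."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background: the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0

# Light gray (#AAAAAA) text on white: well under the WCAG AA minimum
# of 4.5:1 for normal-size text.
print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))
```

The point being: passing (or nearly passing) a numeric threshold like 4.5:1 on a calibrated screen says little about real-world readability on aging displays, in bright rooms, or with aging eyes.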

The U.S. Postal Service recently moved their tracking interface in this same direction, and while theirs is bad, it’s not quite as much of an abomination as this new FedEx monstrosity.

Google pushed this trend along, with many of their relatively recent interfaces representing design nightmares in terms of readability and usability for users who are apparently not in Google’s “we really care about you” target demographics. Google’s recent refresh of Gmail has been a notable and welcome exception to this trend. I’m hoping that they will continue to move in a positive direction with other upcoming interfaces, though frankly I’m not holding my breath quite yet.

In the meantime, it’s FedEx who deserves an immediate kick in the teeth. Shame on you, FedEx. For shame.


How the Pentagon Is Trying to Shame Google and Googlers

I hadn’t been planning to say much more right now about Google and “Project Maven” — the Defense Department project in which Google will wisely not be renewing participation when the existing contract ends next year.

But as usual, the Pentagon just doesn’t know when to leave well enough alone, and I am very angry today to see that a Pentagon-affiliated official is attempting to “death shame” Google and its employees regarding their appropriate decision not to renew with Maven. 

This particularly upsets me because I’ve been to this rodeo before. Over the years I’ve turned down potential work — that I really could have used! — because of its direct relationship to actual battlefield operations. And in various of those cases, there were attempts made to “death shame” me as well — to tell me that if I refused to participate in those aspects of the military-industrial complex, I would be morally complicit for any potential U.S. forces deaths that might theoretically occur due to lack of my supposed expertise.

This is a technique of the military that is as old as civilization. Various technologists reaching back to the days of Mesopotamia — and likely earlier — have been asked (or required, often under threat of death) to provide their services for ongoing military operations.

What makes this so difficult is that typically it’s impossible to clearly separate defensive from offensive projects. As I’ve previously noted, all too often what appears to be defensive work morphs into attack systems, and in the hands of some leaders (especially lying, sociopathic ones) can easily end up extinguishing vast numbers of innocent lives.

This was explicitly acknowledged in the infuriating words earlier today by a former top U.S. Defense Department official — former Deputy Defense Secretary Robert O. Work, who initiated Project Maven:

“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves.”

He also suggested that Google was being hypocritical, because in his view their AI research cooperation with China would benefit China’s military.

His statements are textbook Pentagon doublespeak, and his assertions are not only fundamentally disingenuous, but are blatant attempts at false equivalences.

Particularly galling is his “might wind up with us taking a shot” reference, as if to say that offensive operations were merely a minor footnote in the battle plan. But when you’re dealing with operational battle data, there are no minor footnotes in this context — that data analysis will be used for offensive operations — you can count on it.

To be clear, the righteous defense of the USA is an admirable pursuit. But if one chooses to go all in with the military-industrial complex to that end, it’s at the very least a decision to be made with “eyes wide open” — not with false assumptions that your work will be purely defensive.

And for those of us who refuse to work on military projects that will ultimately be used offensively — keeping in mind the horrific missteps of presidents far less twisted and bizarre than the one currently in the Oval Office — there is absolutely no valid shame associated with that ethical decision.

There’s a critical distinction to be made between basic research and operational battle projects. It’s much the same distinction as my willing work on the DoD ARPANET project decades ago — that led directly to the Internet that you’re using right now — vs. a range of ongoing, specifically battle-oriented projects with which I refused to become associated.

This is also what gives the lie to Robert Work’s attempt to discredit Google’s AI work with China. Open AI research is like Open Source software itself — usable for good or evil, but open to all and light years away from projects primarily with battle intents.

Google and other firms — including their managements and employees — will of course need to find their own paths forward in terms of what sorts of work and contracts they are willing to pursue that may involve the Department of Defense or other military-associated organizations. As we’ve seen with ARPANET, some basic research work funded by the military can indeed yield immense positive benefits to the country and the world.

Personally, I find the concept of a dividing line between such basic research — as opposed to clearly battle-oriented projects — to be a useful guide toward determining which sorts of projects meet my own ethical standards — and which ones do not. As the saying goes, your mileage may vary.

But in any case, we should all utterly ignore Robert Work’s repulsive attempt to shame Googlers and Google itself — and relegate his way of thinking to the dustbin of history where it truly belongs.


How the Dominant ISPs Are Trying to Scare People Into Opposing California Net Neutrality

Are there any sordid depths to which the crooked, lying, dominant ISPs won’t go to try to terrify people into opposing Net Neutrality in California? Nope, let’s face it, these firms spout outright lies as if they were Donald Trump. Yep, seriously evil, as this robocall voicemail currently in circulation so clearly demonstrates!


A Modest Proposal: Identifying Europeans on the Internet for Their Protection

With European politicians and regulators continuing to churn out proposed regulations to protect their citizens from the evils of the Internet, via “The Right To Be Forgotten” — and the currently under consideration Article 11 “link tax” and Article 13 content filtering censorship proposals — it is becoming more important than ever that Internet sites around the world be able to identify European users so that they may be afforded “appropriate” treatment at those sites, including blocking from all services as necessary.

Already, some Europeans are suggesting that they will attempt to evade the restrictions that have been implemented or proposed by their beneficent and magnificent leaders. The world must band together to prevent European users from pursuing such a tragic course of action.

Obviously, all VPN usage by Europeans that attempts to obscure the European geographic locations of their source IP addresses must be banned. In fact, it would be even safer for Europeans if all usage of VPNs by Europeans were prohibited by their governments, except under extraordinary circumstances requiring government licenses and monitoring for inappropriate usage.

All web browsers used by Europeans should be required to send a special “protected European resident” flag to server sites, so that those sites may determine the appropriate blocking or other disposition of those browser requests. Use of unapproved browsers or tampering with browsers to remove this protection flag would of course be a criminal act.

We must also solve the problem of Europeans traveling outside of Europe, where they might be tempted to use public Internet access systems that do not meet the high standards of protection required by European regulations.

One possible solution to this dilemma would be to require the permanent implantation of RFID identification capsules in all Europeans who travel beyond the protected confines of Europe. Don’t worry — these need not individually identify any given person, they need only identify them as European. Scanning equipment at public computers around the planet could detect these implants and automatically apply appropriate European protection rules. Europeans would be free to travel the world with no fears of accidentally using systems that did not apply their government’s protective regulations!

This modest proposal of course only scratches the surface of the sorts of solutions that will be needed to help assure that EU citizens fully and completely abide by their governments’ benevolent actions and requirements.

But the EU and its residents can feel confident that the rest of the world’s Internet will do its part to help keep Europeans safe, secure, and law-abiding at all times!


Google’s New AI Principles Are a Model for the World

In the wake of Google’s announcement that they will not be renewing their controversial “Project Maven” military AI contract when it expires next year (“Google — and the Defense Department’s Disturbing ‘Maven’ A.I. Project Presentation Document”), Google has now published a post describing their policy positions regarding AI at Google going forward: “Artificial Intelligence at Google: Our Principles.”

Since I was on balance critical of Google’s participation in Project Maven, but am very supportive of AI overall (“How AI Could Save Us All”), I’ve received a bunch of queries from readers asking how I feel about Google’s newly announced AI principles statement.

“Excellent” is my single word summary, especially in terms of the principles being balanced — and above all — realistic.

AI will be a critical tool going forward, both in terms of humanity and the global ecosystem itself. And like any tool — reaching all the way back to a chunk of rock on the ground in a prehistoric cave — AI can be used for good purposes, evil purposes, and in a range of “gray area” scenarios that are more difficult to cleanly categorize one way or the other.

It’s this last set of concerns, especially AI applications with multiple uses, that I’m particularly glad to see Google addressing specifically in their principles post.

For those of us who aren’t psychopaths or sociopaths, most fundamental characteristics of good and evil are usually fairly obvious. But as one grows older, it becomes apparent that the real world is not typically made up of black and white situations where one or another set of these characteristics exist in isolation — much more often we’re dealing with a complicated kaleidoscope of interrelating issues.

So — to address one point that I’ve been most asked about over the last couple of days regarding Google’s AI statement — it is entirely appropriate that Google explicitly notes that they will not be abandoning all aspects of government and military AI work, so long as that work is not likely to cause overall harm. 

In a “perfect” world we might not need the military — hell, we might not even need governments. But this is not a perfect world, and it’s one thing to use AI as a means to kill ever more people more efficiently, and something else entirely to use AI defensively to help protect against the genuine evils that still pervade this planet, as Google says it will do.

AI is still in its relative infancy, and attempts to accurately predict its development (beyond the very short term) are likely doomed to failure. AI principles such as Google’s will always by necessity be works in progress — as Google itself explicitly acknowledges.

But ethical firms and ethical governments around the world could today do themselves, their employees, and their citizens proud by accepting and living by AI principles such as those that Google has now announced.