Google Admits It Has Chinese Censorship Search Plans – What This Means

This post is also available in Google Docs format.

After a painfully long delay, Google admitted at an internal company-wide meeting yesterday that it indeed has a project (reportedly named “Dragonfly”) for Chinese government-controlled censored search in China, but asserts that the project is nowhere near ready for deployment and remains subject to a range of possible changes before any launch (assuming, I’ll add, that it ever actually launches).

Some background:

“Google Must End Its Silence About Censored Search in China” – https://lauren.vortex.com/2018/08/09/google-must-end-its-silence-about-censored-search-in-china

“Google Haters Rejoice at Google’s Reported New Courtship of China” –  https://lauren.vortex.com/2018/08/03/google-haters-rejoice-at-googles-reported-new-courtship-of-china

“Censored Google Search for China Would Be Both Evil and Dangerous!” – https://lauren.vortex.com/2018/08/01/censored-google-search-for-china-would-be-both-evil-and-dangerous

While this was an internal meeting, it apparently leaked publicly in real time, and was reportedly cut short when it became clear that somebody watching the event was live-tweeting it to the public.

The substance of the discussion is unlikely to appease Googlers upset by these plans. For all practical purposes, management appears to be justifying the new project using much the same terms (e.g., “some Google is better than no Google”) used to try to justify Google’s ill-fated 2006 entry into censored Chinese search, which Google abandoned in 2010 after continuing escalation of demands by the Chinese government, and Chinese government hacking of Google systems.

Given the rapid recent escalation of Internet censorship and associated human rights abuses by China’s “President for Life” Xi, there’s little reason to expect the results to be any different this time around — in fact they’re likely to go bad even more quickly, making Google by definition complicit in the human rights abuses that flow from the Chinese government’s censorship regime.

The secrecy surrounding this project — few Googlers even knew of its existence until leaks began circulating publicly — was explained by Google execs as “typical” of various Google projects while in their early, very sensitive stages.

This alone suggests a serious blind spot in Google management’s analysis. Such logic might hold true for a “run-of-the-mill” new service. But keeping a project such as Chinese censored search under such wraps within the company — a project with vast ethical ramifications — is positively poisonous to internal company trust and morale when the project eventually leaks out — as we’ve seen so dramatically demonstrated in this case.

That’s why the (now public) Googler petition — reportedly signed by well over a thousand Googlers, with more signing on — is so relevant and important. It wisely calls for the establishment of formal frameworks inside Google to deal with these kinds of ethical issues, giving rank-and-file employees a “seat at the table” for such discussions.

It also notably calls for the creation of internal “ombudsperson” roles to be directly engaged in these corporate ethical considerations — something that I’ve been publicly and privately advocating to Google for at least the last 10 years.

Irrespective of whether or not Google relaunches Chinese-government controlled censored search, the kinds of efforts proposed in the Googler petition would be excellent steps toward the important goal of improving Google’s ethical framework for dealing with both controversial and more routine projects going forward.

Leaks threaten the culture of internal openness that has been an important hallmark at Google since its creation 20 years ago (with this new Chinese government-censored search project being an obvious and ironic exception to Google’s open internal culture).

This internal openness is crucial not only for Google, but also for its users and the community at large. Vibrant open discussion inside Google (which I’ve witnessed and participated in myself when I consulted for them a number of years ago) is what helps to make Google’s products and services better, and helps Google to avoid potentially serious mistakes.

But for any organization, policy-related leaks of the sort that we’ve witnessed recently regarding Google and China strongly suggest that the organization does not have well-functioning or adequate internal staff-accessible processes in place to appropriately deal with these higher-pressure matters. Again, the kinds of proposals in the Googler petition would go a long way toward alleviating this situation.

These recent developments have brought Google to a kind of crossroads, a “moment of truth” as it were. What is Google going to be in its next 20 years? What kinds of roles will ethics play in Google’s decisions going forward? These are complex questions without simple answers. Google has a lot of serious work ahead in answering them to its own and to the public’s satisfaction.

But Google is great at dealing with hard problems, and I believe that they’ll work their way to appropriate answers in these cases as well.

We shall see what transpires in the fullness of time.

–Lauren–

Beware the Fraudulent Blog Comments Scams!

A quick heads-up! While I’ve seen these from time to time for years, there has recently been a major uptick in what are apparently fraudulent comment scam attempts here on my blog. They never get published, since I must approve all comments before any appear, but their form is interesting, and there is likely at least some human element involved, since they’re able to pass the reCAPTCHA “Are you a human?” test.

Here’s how the scams operate. Blogs that support comments (whether moderated or not) typically permit the sender to include their name, email address, and a contact URL with their comment submission. My blog will display only the specified name, and of course only if I approve the comment.

But many blogs include all of that information in the posted comments, and many blogs don’t moderate comments, or only do so after the fact if there are complaints about individual published comments.

The scam comments themselves tend to fall into one of two categories. They may be utterly generic, e.g.: “Thanks for this great and useful post!”

Or they may be much more sophisticated, and actually refer in a more or less meaningful way — sometimes in surprising detail — to the actual topic of the original post.

The email addresses provided with the comments could be pretty much anything. What matters are the URLs that the comment authors provide and hope you will publish: the scammers always provide URLs pointing at various fake “technical support” sites.

These cover the gamut: Google, Yahoo!, Microsoft, Outlook — and many more.

And you never want to click on those links, which almost inevitably lead to the kind of fake technical support sites that routinely scam unsuspecting users out of vast sums around the world every day.

It’s possible that these scam comment attempts are made in bulk by humans somewhere being paid a couple of cents per effort. Or perhaps they’re partly human (to solve the reCAPTCHA), and partly machine-generated.
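However they’re generated, if you moderate comments programmatically, even a crude automated pre-screen can flag most of these before they reach human eyes. Here’s a minimal sketch in Python; the URL patterns and the generic-praise check are purely illustrative assumptions, not a definitive filter:

```python
import re
from urllib.parse import urlparse

# Illustrative patterns only -- substitute your own lists.
SUSPECT_URL_PATTERNS = [
    r"(google|yahoo|microsoft|outlook).*(support|help(desk)?|care)",
    r"(support|helpline|customer-?care).*(number|phone|toll-?free)",
]
GENERIC_PRAISE = re.compile(
    r"^(thanks|great|nice|useful|awesome)\b.{0,60}(post|article|blog)[.!]*$",
    re.IGNORECASE,
)

def looks_like_support_scam(comment_text: str, contact_url: str) -> bool:
    """Heuristic pre-screen: flag comments whose contact URL resembles a
    fake 'technical support' site, or whose body is pure generic praise."""
    host_and_path = ""
    if contact_url:
        parsed = urlparse(contact_url if "://" in contact_url
                          else "http://" + contact_url)
        host_and_path = (parsed.netloc + parsed.path).lower()
    url_suspect = any(re.search(p, host_and_path) for p in SUSPECT_URL_PATTERNS)
    generic_body = bool(GENERIC_PRAISE.match(comment_text.strip()))
    return url_suspect or generic_body

# Example: this submission would be held for extra scrutiny.
print(looks_like_support_scam("Thanks for this great and useful post!",
                              "http://example-helpdesk.biz/google-support-number"))
```

A flagged comment should still get a human look, of course; the point is simply to keep the obvious junk from cluttering your moderation queue.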

In any case, if you run a blog, or some other public-facing site where comments might be submitted, watch out for these. Don’t let them appear on your sites! Your legitimate users will thank you.

–Lauren–

Fixing Google’s Gmail Spam Problems

The anti-spam methodology used by Google’s Gmail system — and most other large email processing systems — suffers a glaring flaw that unfortunately has become all too traditionally standard in email handling.

One of the most common complaints I receive from Google users is that important email has gone “missing” in some mysterious manner.

The mystery is usually quickly solved — but a real solution is beyond my abilities to deploy widely on my own.

The problem is the ubiquitous “Spam” folder, a concept that has actually helped to massively increase the amount of spam flowing over the Internet.

Many users turn out to not even realize that they have a Spam folder. It’s there, but unnoticed by many.

But even users who know about the Spam folder tend to rarely bother checking it — many users have never looked inside, not even once. Google’s spam detection algorithm is so good that non-spam relatively rarely ends up in the Spam folder.

And therein lies the rub. Google’s algorithms are indeed good, but of course are not perfect. False positives — important email getting incorrectly relegated to the Spam folder — can be a really big deal — especially when important financial notifications are concerned, for example.

In theory, routine use of Gmail’s “filter” options could help to tame this problem and avoid some false positives being buried unseen. But the reality is that many of these important false positives are not from expected sources, and many users don’t know how to use the Gmail filter system — and in fact may be totally unaware of its existence. And frankly, the existing Gmail filtering user interface is not well suited to the large and growing numbers of filters needed to try to deal with this situation (whether from the standpoint of actual spam or of false positives) — trust me on this, I’ve tried!

So could we just train users to routinely check the Spam folder for important stuff that might have gotten in there by accident? That’s a tough one, but even then there’s another problem.

Many Gmail users receive so much spam — much of it highly repetitive — that manually plowing through the Spam folder looking for false positives is necessarily time-consuming and prone to the error of missing important items, no matter how careful you attempt to be. Ask me how I know!

This takes us to the intrinsic problem with the Spam folder concept. Gmail and most other major mail systems accept many of the spam emails from the creepy servers that vomit them across the Net by the billions. Then they’re relegated to users’ spam folders, where they help to bury the important non-spam emails that shouldn’t be in there in the first place.

Since Google accepts much of this spam, the senders are happy and keep sending spam to the same addresses, seemingly endlessly. So you keep seeing the same kinds of spam — ranging from annoying to disgusting — over and over and over again. The sender names may vary, the sending servers usually have obviously bogus identities, but (unlike some malware that Google rejects immediately) the spam keeps getting delivered anyway.

The solution is obvious, even though nontrivial to implement at Google scale. It’s a technique used by many smaller mail systems — my own mail servers have been using variations of it for decades.

Specifically, users need to be able to designate that particular types of spam will never be delivered to them at all, not even to the Spam folder. Attempts to deliver those messages should be rejected at the SMTP server level; we can discuss later which reject response codes are most appropriate in these circumstances, since there are various ways to handle this.

Specifying the kinds of spam messages to be given this “delivery death penalty” treatment is nontrivial, both from a user interface and implementation standpoint — but I suspect that Google’s AI resources could be of immense assistance in this context. Nor would I assert that a “real-time” reject mechanism like this would be without cost to Google — but it would certainly be immensely useful and user-positive.

The data from my own servers suggests that once you start rejecting spam email rather than accepting it, the overall level of spam attempts ultimately goes down rather than up. This is especially true if spam attempts are greeted with a “no such user” reject even when that user actually exists (yes, this is a controversial measure).
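To make the idea concrete, here’s a minimal sketch of what rejection at the SMTP layer can look like, using Python’s aiosmtpd package. The per-user rules and the choice of a “no such user” style 550 response are illustrative assumptions on my part; this is emphatically not a description of how Gmail works or should be implemented.

```python
# Minimal sketch: reject unwanted mail during the SMTP dialog rather than
# accepting it into a Spam folder. Requires the aiosmtpd package; the
# example rules below are purely illustrative.
import re
from aiosmtpd.controller import Controller

# Per-user "never deliver this" rules -- here, just sender-address regexes.
USER_REJECT_RULES = {
    "alice@example.net": [re.compile(r"@.*bulk-blaster\.example$", re.I)],
}

class RejectingHandler:
    async def handle_RCPT(self, server, session, envelope, address, rcpt_options):
        sender = envelope.mail_from or ""
        for rule in USER_REJECT_RULES.get(address.lower(), []):
            if rule.search(sender):
                # The message is refused before it is ever accepted, so
                # nothing lands in a Spam folder. A "no such user" reply is
                # the (controversial) option mentioned above; a plain
                # "550 5.7.1 Rejected" is the gentler alternative.
                return "550 5.1.1 No such user here"
        envelope.rcpt_tos.append(address)
        return "250 OK"

    async def handle_DATA(self, server, session, envelope):
        print(f"Accepted message from {envelope.mail_from} for {envelope.rcpt_tos}")
        return "250 Message accepted for delivery"

if __name__ == "__main__":
    controller = Controller(RejectingHandler(), hostname="127.0.0.1", port=8025)
    controller.start()  # runs the SMTP server in a background thread
    input("Toy SMTP server listening on port 8025; press Enter to stop.\n")
    controller.stop()
```

In a real deployment the rules would of course be far richer than a sender regex, but the essential point stands: the rejection happens while the sending server is still on the line.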

There are certainly a range of ways that we could approach this set of problems, but I’m convinced that the current technique of just accepting most spam and tossing it into a Spam folder is not helping to stop the scourge of spam, and in fact is making it far worse over time.

–Lauren–

Location Tracking: Google’s the One You DON’T Need to Worry About!

I must keep this post brief today, but this needs to be said. There are a bunch of stories currently floating around in the news globally, making claims like “Google tracks your location even when you tell it not to!” and other alarming related headlines.

This is all false hype-o-rama.

Google has a variety of products that can make use of location data, both desktop and mobile, and of course there are various kinds of location data in these contexts — IP address location estimates, cell phone location data, etc. So it’s logical that these need to be handled in different ways, and that users have appropriate options for dealing with each of them in different Google services. Google explains in detail how they use this data, the tight protections they have over who can access this data — and they never sell this data to anyone. 

Google pretty much bends over backwards when it comes to describing how this stuff works and the comprehensive controls that users have over data collection and deletion (see: “The Google Page That Google Haters Don’t Want You to Know About” – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about).

Can one argue that Google could make this even simpler for users to deal with? Perhaps, but how to effectively make it all even simpler than it is now in any kind of practical way is not immediately obvious.

The bottom line is that Google gives users immense control over all of this. You don’t need to worry about Google.

What you should be worrying about are the entities out there that gather your location data without your consent or control, and that typically never tell you what they’re doing with it. They hoard that data pretty much forever, and use it, sell it, and abuse it in ways that would make your head spin.

A partial list? Your cellular carrier. They know where your phone is whenever it’s on their network. They collect this data in great detail. Turning off your GPS doesn’t stop them — they use quite accurate cell tower triangulation techniques in that case. Most of these carriers (unlike Google, who has very tight controls) have traditionally provided this data to authorities with just a nod and a wink!

Or how about the license plate readers that police and other government agencies have been deploying like mad, all over the country! They know where you drive, when you travel — and they collect this data in most cases with no real controls over how it will be used, how long it will be held, and who else can get their hands on it! You want someone to be worried about, worry about them!

And the list goes on.

It’s great for headlines and clickbait to pound on Google regarding location data, but they’re on the side of the angels in this debate.

And that’s the truth.

–Lauren–

Google Must End Its Silence About Censored Search in China

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

– – –

It has now been more than a week since public reports began surfacing alleging that Google has been working on a secret project — secret even from the vast majority of Googlers — to bring Chinese government-censored Google search and news back to China. (Background info at: “Google Haters Rejoice at Google’s Reported New Courtship of China” – https://lauren.vortex.com/2018/08/03/google-haters-rejoice-at-googles-reported-new-courtship-of-china).

While ever more purported details regarding this alleged effort have been leaking to the public, Google itself has apparently responded to the massive barrage of related inquiries only with the “non-denial denial” that they will not comment on speculation regarding their future plans.

This radio silence has seemingly extended to inside Google as well, where reportedly Google executives have yet to issue a company-wide explanation to the Google workforce, which includes many Googlers who are very concerned and upset about these reports.

With the understanding that it’s midsummer with many persons on vacation, it is still of great concern that Google has gone effectively mute regarding this extremely important and controversial topic. The silence suggests internal management confusion regarding how to deal with this situation. It’s upsetting to Google’s fans, and gives comfort to Google’s enemies.

Google needs to issue a definitive public statement addressing these concerns. Regardless of whether the project actually exists as reports have described — or if those detailed public reports have somehow been false or misleading — Google needs to come clean about what’s actually going on in this context.

Google’s users, employees, and the global community at large deserve no less.

Google, please do the right thing.

–Lauren–

Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

– – –

It’s already happening. Within a day of word that Google is reportedly planning to provide Chinese government-dictated censored search results and censored news aggregation inside China, the Google Haters are already salivating at the new ammunition that this could provide Congress to pillory Google and similarly castrate them around the world — for background, please see: “Censored Google Search for China Would Be Both Evil and Dangerous!” (https://lauren.vortex.com/2018/08/01/censored-google-search-for-china-would-be-both-evil-and-dangerous).

While Google has not confirmed these reports, the mere prospect of their being correct has already brought the righteous condemnation of human rights advocates and organizations around the globe.

And already, in the discussion forums that I monitor where the Google Haters congregate, I’m seeing language like “Godsend!” – “Miracle!” — “We couldn’t have hoped for anything more!”

It’s obvious why there’s such rejoicing in those evil quarters. By willingly allying themselves with the censorship regimes of the Chinese government that are used to repress and torment the Chinese people, Google would put itself in the position of being perceived as the willing pawn of those repressive Chinese Internet policies that have been growing vastly more intense, fanatical, and encompassing over recent years, especially since the rise of “president for life” Xi Jinping.

With Google already embroiled in antitrust and content management/censorship controversies here in the U.S., in the European Union, and elsewhere, the unforced error of “getting in bed” with the totalitarian Chinese government would provide Google’s political and other enemies a whole new line of attack to question Google’s motives and ethical pronouncements. You can already visualize the Google-hating congressmen saying, “Whose side are you on, Google? Why are you helping to support a Chinese government that massively suppresses its own people and continues to commit hacking attacks against us?” We’ll be hearing the word “hypocritical” numerous times during numerous hearings, you can be sure.

We can pretty well predict Google’s responses, likely to be much the same as they made back in 2006 during their original attempt at “playing nice” with the Chinese censors, an effort Google abandoned in 2010, after escalating demands from China and escalating Chinese hacking attacks.

Google will assert that providing some services — even censored in deeply repressive ways — is better than nothing. They’ll suggest that the censored services that would be provided would help the Chinese citizenry, despite the fact that the very results being censored, while perhaps relatively small in terms of overall percentages, would likely be the very search results that the Chinese people most need to see to help protect themselves from their dictatorial leaders’ information control and massive human rights abuses. Google will note that they already censor some results in countries like France and Germany (for example, there are German laws relating to Nazi-oriented sites).

But narrow removal of search results in functional democracies is one thing. The much wider categories of censorship demanded by the Chinese government — a single-party dictatorship that operates vast secret prison and execution networks — are something else entirely. It’s like comparing a pimple with Mt. Everest.

And that’s before the Chinese start escalating their demands. More items to censor. Access to users’ identity and other private data. Localization of Google servers on Chinese soil for immediate access by authorities.

Worst of all, if Google is willing to bend over and kowtow to the Chinese dictators in these ways, every other country in the world with politicians unhappy with Google for one reason or another will use this as an example of why Google should provide similar governmental censorship services and user data access to their own regulators and politicians. After all, if you’re willing to do this for one of the world’s most oppressive regimes, why not for every country, everywhere?

As someone with enormous respect for Google and Googlers, I can’t view these reports regarding Google and China — if accurate — as anything short of disastrous. Disastrous for Google. Disastrous for their users. Disastrous for the global community of ordinary users at large, who depend on Google’s search honesty and corporate ethics as foundations of daily life.

Joining with China in providing Chinese government-censored search and news results would provide haters and other evil forces around the planet the very ammunition they’ve been waiting for toward crushing Google, toward putting Google under micromanaged government control, and toward ultimately converting Google into an oppressive government propaganda machine.

It could frankly turn out much worse for the world than if Google had never been created at all, 20 years ago.

I’m still hoping that these reports are inaccurate in key respects or in their totality. But even if they are correct, then Google still has time to choose not to go down this dark path, and I would strongly urge them not to move forward with any plans to participate in China’s repressive and dangerous totalitarian censorship regime.

–Lauren–

Prediction: Unless Security Keys Are Free, Most Users Won’t Use Them

Various major Internet firms are currently engaged in a campaign promoting the use of U2F/FIDO security keys (USB, NFC, and now even Bluetooth), encouraging their users to avoid other, much more vulnerable forms of 2sv (2-factor) login authentication, especially the most common and most illicitly exploitable form: SMS text messaging. In fact, Google has just introduced its own “Titan” security keys to further these efforts.

Without getting into technical details, let’s just say that these kinds of security keys essentially eliminate the vulnerabilities of other 2sv mechanisms, and given that most of these keys can support multiple services on a single physical key, you might assume that users would be snapping them up like candy.

You’d be wrong in that assumption.

I’ve spent years urging ordinary users (e.g., of Google services) to use 2sv of any kind. It’s a very, very tough slog, as I noted in:

Google Users Who Want to Use 2-Factor Protections — But Don’t Understand How: https://lauren.vortex.com/2017/06/10/google-users-who-want-to-use-2-factor-protections-but-dont-understand-how

But even beyond that category of users, there’s a far larger group of users who simply don’t see the point with “hassling” to use 2sv at all, resulting in what Google itself has publicly noted is a depressingly low percentage of users enabling 2sv protections.

Beyond logistical issues regarding 2sv that confuse many potential users, there’s a fundamental aspect of human nature involved.

Most users simply don’t believe that THEY are going to be hacked (at least, that’s their position until it actually happens to them and they come calling too late with desperate pleas for assistance).

Frankly, I don’t know of any “magic wand” solution for this dilemma. If you try to require 2sv, you’ll likely lose significant numbers of users who just can’t understand it or give up trying to make it work — bad for you and bad for them. They’re mostly not techies — they’re busy people who depend on your services, who simply do not see any reason why they should be jumping through what they perceive to be more unnecessary hoops — and this means that WE have not explained this all adequately and that OUR systems are not serving them well.

If you blame the users, you’ve already lost the argument.

Which brings us back to those security keys. Given how difficult it is to get most users to enable 2sv at all, how much harder will it be (even if the overall result is simpler and far more secure) to get users to go the security key route when they have to pay real money for the keys?

For many persons, the $20 or so typical for these keys is significant money indeed, especially when they don’t see the value of really having them in the first place (remember, they don’t expect to ever be hacked).

I strongly suspect that beyond “in the know” business/enterprise users, achieving major uptake of security keys among ordinary user populations will require that those keys be provided for free in some manner. Pricing them down to only a few dollars would help, but my gut feeling is that vast numbers of users wouldn’t pay for them at any price, perhaps often because they don’t want to set up payment methods in the first place.

That problem may be significantly reduced where users are already used to paying and have payment methods already in place — e.g. for the Android Play Store. 

But even there, $20 — even $10 — is likely to be a very tough sell for a piece of hardware that most users simply don’t really believe that they need. And if they feel that this purchase is being “pushed” at them as a hard sell, the likely result will be resentment and all that follows from that.

On the other hand, if security keys were free, methodologies such as:

How to “Bribe” Our Way to Better Account Security: https://lauren.vortex.com/2018/02/11/how-to-bribe-our-way-to-better-account-security

might be combined with those free keys to dramatically increase the use of high quality 2sv by all manner of users — including techies and non-techies — which of course should be our ultimate goal in these security contexts.

Who knows? It just might work!

Be seeing you.

–Lauren–

Censored Google Search for China Would Be Both Evil and Dangerous!

UPDATE (August 17, 2018): Google Admits It Has Chinese Censorship Search Plans – What This Means

UPDATE (August 9, 2018): Google Must End Its Silence About Censored Search in China

UPDATE (August 3, 2018): Google Haters Rejoice at Google’s Reported New Courtship of China

UPDATE (August 2, 2018): New reports claim that Google is also now working on a news app for China, that would similarly be designed to enable censoring by Chinese authorities. Google has reportedly replied to queries about this with the same non-denial generic statement noted below.

– – –

A report is circulating widely today — apparently based on documents leaked from Google — suggesting that Google is secretly working on a search engine interface (probably initially an Android app) for China that would — by design — be heavily censored by the totalitarian Chinese government. Want to look at a Wikipedia page? Forget it! Search for human rights? No go, and the police are already at your door to drag you off to a secret “re-education” center.

Google has so far not denied the reports, and today has discouragingly only issued generic “we don’t comment on speculation regarding future plans” statements. Ironically, this is all occurring at the same time that Google has been increasing its efforts to promote honest journalism, and to fight against fake news that can sometimes pollute search results.

There’s no way to say this gently or diplomatically: Any move by Google to provide government censored search services to China would not only be evil, but also incredibly dangerous to the entire planet.

The Chinese are wonderful people, but their government is an absolute dictatorship — now with a likely president for life — whose abuse of its own citizens and hacking attempts against the rest of the world have been increasing over recent years. Not getting better, getting far, far worse.

Information control and censorship is at the heart of China’s human rights abuses that include a vast network of secret prisons and undocumented mass executions. Say the wrong thing. Try to look at the wrong webpage. You can just vanish, never to be seen again.

The key to how the Chinese tyrants control their population is the government’s incredibly massive Internet censorship regime, which carefully tailors the information that the Chinese population can see, creating a false view of the world among its citizens — incredibly dangerous for a country that has a vast military and expansionist goals.

Anybody — any firm — that voluntarily participates in the Chinese censorship regime becomes an equal partner in the Chinese government’s evil, no matter what attempts are made to provide benign justifications or explanations.

If this all sounds a bit familiar, it’s because we’ve been over this road with Google before. Back in 2006, I happened to be giving a talk at Google’s L.A. offices the same day that Google announced its original partnership with the Chinese government to provide a censored version of Google. My relevant comments about that are here: 

https://www.youtube.com/watch?v=PGoSpmv9ZVc&feature=youtu.be&t=1448

Later related discussion that same year followed, including:

“Google, China, and Ethics” – https://lauren.vortex.com/archive/000180.html

And then in 2010 when Google wisely terminated their participation in the oppressive Chinese censorship regime:

Bulletin: Google Will No Longer Censor Chinese Search Results — May End China Operations – https://lauren.vortex.com/archive/000667.html

In the ensuing eight years, much has changed with China. They’re even more of a technological powerhouse now, and they’re even more dictatorial and censorship-driven than before. 

All the fears about censored Google search for China that we had back in 2006, including a vast slippery slope of additional dangers to innocent persons both inside and outside of China, are still in force — only now magnified by orders of magnitude.

It obviously must be painful for Google to sit by and watch their less ethical competitors cozy up to Chinese human rights abusing leaders, as those firms suckle at the teats of the Chinese government and its money. 

And in fact, Google has already made some recent inroads with China — with a few harmless apps and shared AI research — all efforts that I generally support in the name of global progress.

But search is different. Very different. Search is how we learn about how the world really works. It’s how we separate reality from lies, how we put our lives and our countries in context with the entire Earth that we all must share. The censorship of search is a true Orwellian terror, since it helps not only to hide accurate information, but by extension promotes the dissemination of false information as well.

It’s bad enough that the European Union forces Google (via the “Right To Be Forgotten”) to remove valid and accurate search results pointing to information that some Europeans find to be personally inconvenient. 

But if reports are correct that Google plans to voluntarily ally itself with Chinese dictators and their wholesale censorship of entire vast categories of crucial information — inevitably in the furtherance of those leaders’ continuing tyrannies — then Google will not only have gone directly and catastrophically against its most fundamental purposes and ideals, but will have set the stage for similar demands for vast Google-enabled mass censorship from other countries around the world.

I’m sorry, but that’s just not the Google that I know and respect.

–Lauren–

Explaining YouTube’s VERY Cool New Aspect Ratio Changes

YouTube very quietly made a very cool and rather major improvement to their desktop video players today. I noticed it immediately this morning, and now have confirmation both from testing with my own YT videos (for which I know all the native metadata) and from informal statements from Google.

YouTube is now adjusting the YT player size to match videos’ native aspect ratios. This is a big deal, and very much welcome.

Despite the fact that I’m publicly critical from time to time regarding various elements of YouTube’s content-related policies, this does not detract from the fact that I’m one of YT’s biggest fans. I spend a lot of time in YT, and I consider it to be a news, information, education, and entertainment wonder of the world. Its scale is staggeringly large — so we can’t reasonably expect perfection — and frankly I don’t even want to think about life without YT.

Excuse me while I put on my video engineering hat for a moment …

One of the more complicated facets of video — played out continuously on YouTube — is aspect ratios. A modern high definition TV (HDTV) video is normally displayed at a 16 (horizontal) by 9 (vertical) aspect ratio – significantly wider than high. The older standard definition TV ratio is 4:3 — just a bit wider than high, and visually very close to the traditional 35mm film aspect ratio of 3:2.

When displaying video, the typical techniques to display different aspect ratios on different fixed ratio display systems have been to either modify the actual contents of the visible video frames themselves, or to fit more of the original frames into the display area by reducing their overall relative sizes and providing “fillers” for any remaining areas of the display. 

The “modification of contents” technique usually has the worst results. Techniques such as “pan and scan” were traditionally used to show only portions of widescreen movie frames on standard 4:3 TVs, simply cutting off much of the action. Ugh. 

But eventually, especially as 4:3 television screens became larger in many homes, the much superior “letterboxing” technique came into play, displaying black bars at the top and bottom of the screen to permit all (or at least most) of the widescreen film frame to be displayed on a 4:3 cathode ray tube. In the early days of this process, it was common to see squiggles and such in those bars — networks and local stations were concerned that viewers would assume something was wrong with their televisions if empty black bars appeared without some sort of explanation — and sometimes broadcasters would even provide such explanations at the start of the film (sometimes they still do, even with HDTV!). Very widescreen films shown on 16:9 displays today may still use letterboxing, to show the full width of frames that would otherwise exceed the 16:9 ratio.

When 16:9 HDTV arrived, the opposite of the standard definition TV problem appeared. Now to properly display a traditional 4:3 standard TV image, you needed to put black bars on the right and left side of the screen — “pillarboxing” is the name usually given to this technique, and it’s widely used on many broadcast, satellite, streaming, and other video channels. It is in fact by far the preferred technique to display 4:3 content on a fixed aspect ratio 16:9 physical display.
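The underlying arithmetic for both letterboxing and pillarboxing is simple: scale the video to fit inside the display while preserving its aspect ratio, then fill whatever space is left with bars. A quick Python sketch (the pixel dimensions are just example numbers, not anything specific to YouTube’s player):

```python
def fit_with_bars(video_w, video_h, display_w, display_h):
    """Scale a video to fit a fixed display while preserving its aspect
    ratio, and report the resulting bar sizes (letterbox/pillarbox)."""
    scale = min(display_w / video_w, display_h / video_h)
    shown_w, shown_h = round(video_w * scale), round(video_h * scale)
    pillar = (display_w - shown_w) // 2       # bar width on each side
    letterbox = (display_h - shown_h) // 2    # bar height top and bottom
    return shown_w, shown_h, pillar, letterbox

# 4:3 video on a 16:9 display -> pillarboxing (bars left and right)
print(fit_with_bars(640, 480, 1920, 1080))    # (1440, 1080, 240, 0)

# 2.35:1 widescreen film on a 16:9 display -> letterboxing (bars top/bottom)
print(fit_with_bars(2350, 1000, 1920, 1080))  # (1920, 817, 0, 131)
```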

After YouTube switched from a 4:3 video player to their standard 16:9 player years ago, you started seeing some YT uploaders zooming in on 4:3 videos to make them “fake” 16:9 videos before uploading, to fill the 16:9 player — resulting in grainy and noisy images, with significant portions of the original video chopped off. The same thing is done by some TV broadcasters and other video channels, documentary creators, and others who have this uncontrollable urge to fill the entire screen, no matter what! These drive me nuts.

Up until today, YouTube handled the display of native 4:3 videos by using the pillarbox technique within their 16:9 player. Completely reasonable, but of necessity wasting significant areas of the screen taken up by the black pillarbox bars.

This all changed this morning. The YouTube player now adapts to the native aspect ratio of the video being played, instead of always being fixed at 16:9. This means, for example, that a native 4:3 video now displays in a 4:3 player, with no pillarboxing required — and with significant viewable screen real estate recovered to actually display video rather than pillarbox bars. In effect, these videos now display much as they did in the early days of YT, before the switch to the fixed aspect ratio 16:9 player — fully filling the video display area. Excellent!

The same goes for other aspect ratios, in particular 16:9, used by most recent videos, so 16:9 videos will continue to display in a 16:9 player.

One aspect (no pun intended) to keep in mind: the player apparently adapts to the native aspect ratio of the video as uploaded. So if a video was uploaded as 4:3, you’ll get a 4:3 player. But if (for example) a 4:3 video was already converted to 16:9 by pillarboxing before being uploaded, YouTube’s encoding pipeline is going to consider it a native 16:9 video and display it in a 16:9 player with the black pillar bars intact. Bottom line: If you have 4:3 material to upload, don’t change its aspect ratio, just upload it as native 4:3, pretty please!

Since I watch a fair bit of older videos on YouTube that tend to be in 4:3 aspect ratio, the changes YT made today are great for me. But having the YT player adjust to various native aspect ratios is going to be super for all YT users in the long run. It may take a little time for you to adapt to seeing the player size and shape vary from video to video, but you’ll get used to it. And trust me, you’ll come to love it.

Great work by YouTube. My thanks to the entire YouTube team!

–Lauren–

Uber and Lyft Must Immediately Ban “Peeping Tom” Drivers

In response to a news story revealing that an Uber driver has been (usually surreptitiously) live streaming video and most audio of his passengers without their knowledge or explicit consent — exposing them to ridicule and potentially much worse by his streaming audience — both Uber and Lyft have reportedly simply argued that the practice is legal in that particular (one-party recording permission) state.

That kind of response is of course absolutely unacceptable and beneath contempt, demonstrating the utter lack of ethics of these ride sharing firms. They argue that this doesn’t even violate any of their driver terms.

That needs to change — IMMEDIATELY!

Ride sharing firms must ban their drivers from such behavior, and violators should be immediately excised from the platform.

That a vile behavior is legal does not mean that these firms — entrusted with the lives of millions of passengers — must permit drivers to engage in such activities. In fact, these firms already lay out specific “don’t do this!” rules that can prohibit a variety of legal activities by drivers — for the protection of their riders.

If these firms do not act immediately to end such practices by their drivers, they not only risk massive loss of rider trust, but are just begging for this kind of activity to eventually result in a horrific incident involving their passengers — perhaps physical abuse, since identity information often leaks on these streams — at the hands of unscrupulous members of the live stream viewing public.

If these firms refuse to ban these practices, their rights to operate in any states where such behavior continues to occur must be withdrawn, and if necessary, legislation passed to force these firms to do the right thing and protect their riders from such abuses.

–Lauren–

EU’s Latest Massive Fine Against Google Will Hurt Europeans Most of All

Have you ever heard anyone seriously say: “Man, there just aren’t enough shopping choices on the Net!” or, “I’d really like my smartphone to be more complicated and less secure!” or … well, you get the idea — nobody actually means stuff like that.

But sadly, this means nothing to the politicians and bureaucrats of the European Union, who are constantly trying to enrich themselves with massive fines against firms like Google, while simultaneously making Internet life ever more “mommy state” government micromanaged for Europeans.

The latest giant fine (which Google quite righteously will appeal) announced by the EU extortion machine is five billion dollars, for claimed offenses by Google related to the Android operating system, all actually aspects of Android that are designed to help users and to provide a secure and thriving foundation for a wide range of applications and user choice.

In fact, in the final analysis, the changes in Android that the EU is demanding would result in much more complicated phones, less secure phones, and ultimately LESS choice for users resulting from alterations that will make life much more difficult (and expensive!) for application developers and users alike.

Why do the EU politicos keep behaving as if they want to destroy the Internet?

It’s because in significant ways that’s exactly what they have in mind. They don’t like an Internet that the government doesn’t tightly control, where they don’t dictate all aspects of how consumers interact with the Net and what those users are permitted to see and do. Even now, they’re still pushing horrific legislation to create a Chinese-style firewall to vastly limit what kind of content Europeans can upload to the Net, and to destroy businesses that depend on free inbound linking. And these hypocritical EU officials are desperately trying to prop up failing businesses whose business models are stuck in the 20th (or even 19th) century, while passing all the costs on to ordinary Europeans — who by and large seem to be quite happy with how the Internet is already working.

And of course, there’s the money. Need more money? Hell, the EU always needs more money. Gin up another set of fake violations against Google, then show up in Mountain View with sticky fingers extended for another multi-billion dollar check!

The EU has become a bigger threat to the Internet than even China or Russia, neither of which has attempted (so far) to extend its highly restrictive views of Internet freedoms globally.

And the saddest part is that these kinds of abuses by the EU are hurting EU consumers most of all. Over time, fewer and fewer Internet firms will even want to deal with this kind of EU, and Europeans will find their actual choices more and more limited and government controlled as a result.

That’s a terrible shame for Europe — and for the entire world.

–Lauren–

Network Solutions and Cloudflare Supporting Horrific Racist Site

Today a concerned reader brought to my attention a horrifically racist site — apparently operating for a decade and currently registered via Network Solutions (NSI) — with DNS and other services through Cloudflare — called “n*ggermania.com” (and “n*ggermania.net”) — I have purposely not linked to them here, and you know why I have the asterisks there.

To call the site — complete with a discussion forum — a massive pile of horrific, dangerous, racist garbage of the worst kind would be treating it far too gently.

We already know that Cloudflare reportedly has an ethical sense that makes diseased maggots and cockroaches seem warm and friendly by comparison — Cloudflare apparently touts a content acceptance policy that Dr. Josef Mengele would have likely considered too extreme in its acceptance of monstrously evil content.

But Network Solutions claims to have higher standards (though it wouldn’t take much effort to beat Cloudflare in this regard) and I’m attempting to contact NSI officials now to determine if such racist sites are within their official policy standards. 

Oh, and by the way, guess what happens whenever you call the official Network Solutions listed phone number that they designate for “reporting abuse” — you get a recording (that doesn’t take a message) that says “they’re having difficulties — try again at a later time.”

Why are we not at all surprised?

–Lauren–

Chrome Is Hiding URL Details — and It’s Confusing People Already!

UPDATE (September 17, 2018): Google Backs Off on Unwise URL Hiding Scheme, but Only Temporarily

UPDATE (September 7, 2018): Here’s How to Disable Google Chrome’s Confusing New URL Hiding Scheme

– – –

Here we go again. I’m already getting upset queries from confused Chrome users about this one. 

In Google’s continuing efforts to “dumb down” the Internet, their Chrome browser (beta version, currently) is now hiding what it considers to be “irrelevant” parts of site address URLs.

This means, for example, that if you enter “vortex.com” and get redirected to “www.vortex.com”, as is my policy (and a typical policy at a vast number of sites), Chrome will display only “vortex.com” as the current URL, confusing anyone and everyone who might need to quickly note the actual full address URL. Also removed are the http: and https: prefixes, leaving even fewer indications of when sites are secure — exactly the WRONG approach these days, when users need more help in these respects, not less!

And of course, if you’re manually writing down a URL based on the shortened version, there’s no guarantee that it will actually work if entered directly back into Chrome without passing through possible site redirect sequences.
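To see why, consider how a bare hostname typically reaches its final page only through one or more redirects. A small demonstration in Python (assuming the widely used requests package; vortex.com is just the example from above, and any given site’s redirect behavior is of course up to that site):

```python
# Follow redirects from a bare hostname and show what actually gets loaded.
import requests

resp = requests.get("http://vortex.com", timeout=10)  # redirects followed by default
for hop in resp.history:
    print(f"{hop.status_code} redirect from: {hop.url}")
print("Final URL actually loaded:", resp.url)
```

The address bar’s shortened form throws away exactly the information (scheme and full hostname) that those redirects quietly supply.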

But wait! You say you want additional confusion? By golly, you’ve got it! If you click up in the address bar and copy the Chrome-shortened URL, it will appear that you’re copying the short version, but you’re actually copying the invisible original version with the full site URL — including the complete address and the http: or https: prefix. If you double-click up there, Chrome visibly replaces its mangled version with the full version.

I can just imagine how this “feature” got pushed through at Google — “Hell, our users don’t really need to see all that URL detail stuff, so we’ll just hide it all from them! They’ll never know the difference!”

But the truth is that from the standpoint of everyday users who glance quickly at addresses and greatly benefit from multiple signals to help them establish that they’ve reached the exact and correct sites in a secure manner, the new Chrome URL mangling feature is an abomination, and I’ll bet you dollars to donuts that crooked site operators will find some ways to leverage this change for their own benefits as well.

As I said, this is currently in Chrome Beta, which means it’s likely to “graduate” to Chrome Stable — the one that most people run — sometime fairly soon.

Google is a great company, but their ability to churn out unforced errors like this — that especially disadvantage busy, non-techie users — remains a particularly bizarre aspect of their culture.

–Lauren–

Third Parties Reading Your Gmail? Yeah, If You’ve Asked Them To!

Looks like the “Wall Street Journal” — pretty reliably anti-Google most of the time — is at it again. My inbox is flooded with messages from Google users concerned about the WSJ’s new article (being widely quoted across the Net) “exposing” the fact that third parties may have access to your Gmail.

Ooooh, scary! The horror! Well, actually not!

This one’s basically a nothingburger.

The breathless reporting on this topic is the “revelation” that if you’ve signed up with third-party apps and given them permission to access your Gmail, they — well, you know — have access to your Gmail! 

C’mon boys and girls, this isn’t rocket science. If you hire a secretary to go through your mail and list the important stuff for ya’, they’re going to be reading your mail. The same goes for these third-party apps that provide various value-added Gmail services to notify you about this, that, or the other. They have to read your Gmail to do what you want them to do! If you don’t want them reading your email, don’t sign up for them and don’t give them permission to access your Google account and Gmail! 

Part of the feigned outrage in this saga is the concern that in some cases actual human beings at these third-party firms may have been reading your email rather than only machines. Well golly, if they didn’t explicitly say that humans wouldn’t read them — remember that secretary? — why would one make such an assumption?

In fact, while it’s typical for the vast majority of such third-party systems to be fully automated, it wouldn’t be considered unusual for humans to read some emails for training purposes and/or to deal with exception conditions that the algorithms couldn’t handle. 

Seriously, if you’re going to sign up for third-party services like these — even though Google does carefully vet them — you should familiarize yourself with their Terms of Service if you’re going to be concerned about these kinds of issues.

Personally, I don’t give any third parties access to my Gmail. This simplifies my Gmail life considerably. Google has excellent internal controls on user data, and I fully trust Google to handle my data with care. Q.E.D.

And by the way, if you’ve lost track of third-party systems to which you may have granted access to your Gmail or other aspects of your Google account, there’s a simple way to check (and revoke access as desired) at the Google link:

https://myaccount.google.com/permissions

But really, if you don’t want third parties reading your Gmail, just don’t sign up with those third parties in the first place!

Be seeing you.

–Lauren–

Why Google Needs a “User Advocate at Large”

For many years I’ve been promoting the concept of an “ombudsman” to help act as an interface between Google and its user community. I won’t even bother listing the multitude of my related links here, they’re easy enough to find by — yeah, that’s right — Googling for them.

The idea has been to find a way for users — Google’s customers who are increasingly dependent on the firm’s services for an array of purposes (irrespective of whether or not they are “paying” users) — to have a genuine “seat at the table” when it comes to Google-related issues that affect them.

My ombudsman concepts have consistently hit a figurative brick wall at the Googleplex. A concave outline of the top of my skull is probably nearly visible on the side of Building 43 by now.

Who speaks for Google’s ordinary users? That’s the perennial question as we approach Google’s 20th birthday, almost exactly two months from now.

Google’s communications division speaks mainly to the press. Google developer and design advocates help to make sure that relevant developer-related parties are heard by Google’s engineering teams. 

But beyond these specific scopes, there really aren’t user advocates per se at Google. In fact, a relevant Google search yields entries for Google design and developer advocates, and for user advocates at other firms. But there’s no obvious evidence of dedicated user advocate roles at Google itself.

Words matter. Precision of word choices matters. And in thinking about this recently, I’ve realized that my traditional use of the term “ombudsman” to address these concerns has been less than optimal.

Part of the reason for this is that the concept of “ombudsman” (which can be a male or female role, of course) carries with it a great deal of baggage. I realized this all along and attempted to explain that such roles were subject to definition within any given firm or other organization. 

But “ombudsman” is a rather formal term, frequently associated with a person or persons who mainly deal with escalated consumer complaints, and so it tends to carry an adversarial implication of sorts. The word really does not encompass the broader meanings of advocacy — and the other associated communications between firms and users — that I’ve been thinking about over the years, but that I’ve not been adequately describing. I plead guilty.

“User advocacy” seems like a much more accurate term to approach the concepts that I’ve been so long discussing about Google and its everyday users.

Advocacy, not contentiousness. Participation, not confrontation. 

While it would certainly be possible to have user advocates focused on specific Google products and services, the multidisciplinary nature of Google suggests that an “at large” user advocate, or a group of such advocates working to foster user communications across a wide range of Google’s teams, might be more advantageous all around.

Google and Googlers create excellent services and products. But communications with users continues to be Google’s own Achilles’ heel, with many Google-related controversies based much more on misunderstandings than on anything else.

A genuine devotion to user advocacy, fostered by Googlers dedicated to this important task, could be great for Google’s users and for Google itself.

–Lauren–

Google’s New Security Warning Is Terrifying Many Users

I’ve been getting email from people all over the world who are suddenly scared of accessing particular websites that they’ve routinely used. It was quickly obvious what is going on — the first clue was that they were all users running Chrome Beta. 

The problem: Google’s new “Not Secure” warning on sites not using https security is terrifying many people. Users are incorrectly (but understandably) interpreting “Not Secure” to mean “Dangerous and Hacked! Close this page now!”

And this is squarely Google’s fault.

Years ago, I predicted this outcome. 

Though I’ve long promoted the migration to secure Web connections via https, I’ve also repeatedly warned that there are vast numbers of widely referenced sites that provide enormous amounts of important information to users, often from archives and systems that have been running for many, many years — sometimes since before the beginnings of Google 20 years ago.

The vast majority of these sites don’t require login. They don’t request information from users. They are utterly read-only.

While non-encrypted connections to them are theoretically subject to man-in-the-middle attacks, the real world likelihood of their being subjected to such attacks is extraordinarily low.

Another common factor with many of these sites is that they are operating on a shoestring, often on donated resources, without the expertise, money, or time to convert to https. Many of these systems are running very old code, conversion of which to support https would be a major effort — even if someone were available to do the work.

Despite ongoing efforts by “Let’s Encrypt” and others to provide tools to help automate the transition to https, the reality is that it’s still usually a massive amount of work requiring serious expertise, for all but the smallest and simplest of sites — and even that’s for sites running relatively current code.
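For site operators trying to gauge where they stand, a reasonable first step is simply checking whether a host already presents a valid certificate on port 443. Here’s a minimal sketch using only the Python standard library; the hostname is just a placeholder:

```python
# Check whether a host presents a certificate that passes normal validation
# (chain and hostname checks) on port 443.
import socket
import ssl

def has_valid_https(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()  # performs chain + hostname verification
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                print(hostname, "has a certificate valid until", cert.get("notAfter"))
                return True
    except (ssl.SSLError, OSError) as exc:
        print(hostname, "failed the https check:", exc)
        return False

if __name__ == "__main__":
    has_valid_https("www.example.com")
```

Passing this check is only the first step, of course; the hard part for older sites is usually the surrounding work of updating embedded links, ancient server software, and long-standing assumptions baked into their code.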

Let’s be utterly clear about this. “Not Secure” does not mean that a site is actually hacked or dangerous in any way, nor that its data has been tampered with in transit. 

But to many users — not all of whom are well versed in the fine points of Internet security, eh? — that kind of warning, displayed in that manner, is a guarantee of more unnecessary confusion and angst, particularly among the many users who already feel disadvantaged by other aspects of the Web, such as Google’s continuing accessibility failures in terms of readability and other user interface aspects, which disproportionately affect these growing classes of users.

With Google about to promote their “Not Secure” warning from Chrome Beta to the standard Chrome Stable that most people run, these problems are about to grow by orders of magnitude.

Through their specific interface design decisions in this regard, Google is imposing an uncompensated cost on many sites with extremely limited resources, a cost that could effectively kill them.

Might doesn’t always make right, and Google needs to rethink the details of this particular approach.

–Lauren–

FedEx to Anyone With Less than Perfect Vision: GO TO HELL!

It appears that shipping giant FedEx has joined the “Google User Interface Club” and introduced a new package tracking user interface designed to tell anyone with less than very young, very excellent vision that they can just go take a giant leap and are not desirable as customers — either to send or receive packages via FedEx.

As you can see in the screenshot at the end of this post (if you can actually see FedEx’s incredibly low-contrast fonts, that is; trust me, they are actually there!), FedEx has transitioned from their traditional, easy-to-read interface to the new “Google Standard” interface: low-contrast fonts that are literally impossible for many people to read, and extremely difficult to read for many others without Superman-grade vision.

I’ve written about these kinds of accessibility failures many times in the past — I suspect that some of them may rise to the level of violations of the Americans with Disabilities Act (ADA). They are designed to look pretty — and technically may even meet some minimum visibility standards if you have a nice, new, perfectly adjusted screen. But it doesn’t take a computer scientist to realize that their real world readability is a sick joke, a kick in the teeth to anyone with aging eyes or otherwise imperfect vision.
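
For reference, the “minimum visibility standards” in question are typically the WCAG 2.x contrast-ratio thresholds, which require at least 4.5:1 for normal-size text at the AA level. Here is a rough sketch of that math in Python; the light-gray-on-white example values are hypothetical, chosen only for illustration, and are not measurements of the actual FedEx or Google pages.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 integers (WCAG 2.x definition)."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical light-gray text (#AAAAAA) on a white background:
print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 2))  # about 2.32, far below the 4.5:1 AA minimum

And that ratio says nothing about an aging, dim, or badly adjusted screen, which is exactly why “technically passes” and “actually readable in the real world” are two very different things.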

The U.S. Postal Service recently moved their tracking interface in this same direction, and while theirs is bad, it’s not quite as much of an abomination as this new FedEx monstrosity.

Google pushed this trend along, with many of their relatively recent interfaces representing design nightmares in terms of readability and usability for users who are apparently not in Google’s “we really care about you” target demographics. Google’s recent refresh of Gmail has been a notable and welcome exception to this trend. I’m hoping that they will continue to move in a positive direction with other upcoming interfaces, though frankly I’m not holding my breath quite yet.

In the meantime, it’s FedEx who deserves an immediate kick in the teeth. Shame on you, FedEx. For shame.

–Lauren–

How the Pentagon Is Trying to Shame Google and Googlers

I hadn’t been planning to say much more right now about Google and “Project Maven” — the Defense Department project in which Google will wisely not be renewing participation when the existing contract ends next year (https://lauren.vortex.com/2018/05/31/google-dod-disturbing-maven-ai-document).

But as usual, the Pentagon just doesn’t know when to leave well enough alone, and I am very angry today to see that a Pentagon-affiliated official is attempting to “death shame” Google and its employees regarding their appropriate decision not to renew with Maven. 

This particularly upsets me because I’ve been to this rodeo before. Over the years I’ve turned down potential work — that I really could have used! — because of its direct relationship to actual battlefield operations. And in various of those cases, there were attempts made to “death shame” me as well — to tell me that if I refused to participate in those aspects of the military-industrial complex, I would be morally complicit for any potential U.S. forces deaths that might theoretically occur due to lack of my supposed expertise.

This is a technique of the military that is as old as civilization. Various technologists reaching back to the days of Mesopotamia, and likely earlier, have been asked (or required, often under threat of death) to provide their services for ongoing military operations.

What makes this so difficult is that typically it’s impossible to clearly separate defensive from offensive projects. As I’ve previously noted, all too often what appears to be defensive work morphs into attack systems, and in the hands of some leaders (especially lying, sociopathic ones) can easily end up extinguishing vast numbers of innocent lives.

This was explicitly acknowledged in the infuriating words earlier today by a former top U.S. Defense Department official — former Deputy Defense Secretary Robert O. Work, who initiated Project Maven:

“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves.”

He also suggested that Google was being hypocritical, because in his view their AI research cooperation with China would benefit China’s military.

His statements are textbook Pentagon doublespeak, and his assertions are not only fundamentally disingenuous, but are blatant attempts at false equivalences.

Particularly galling is his “might wind up with us taking a shot” reference, as if to say that offensive operations were merely a minor footnote in the battle plan. But when you’re dealing with operational battle data, there are no minor footnotes in this context — that data analysis will be used for offensive operations — you can count on it.

To be clear, the righteous defense of the USA is an admirable pursuit. But if one chooses to go all in with the military-industrial complex to that end, it’s at the very least a decision to be made with “eyes wide open” — not with false assumptions that your work will be purely defensive.

And for those of us who refuse to work on military projects that will ultimately be used offensively — keeping in mind the horrific missteps of presidents far less twisted and bizarre than the one currently in the Oval Office — there is absolutely no valid shame associated with that ethical decision.

There’s a critical distinction to be made between basic research and operational battle projects. It’s much the same distinction as my willing work on the DoD ARPANET project decades ago — that led directly to the Internet that you’re using right now — vs. a range of ongoing, specifically battle-oriented projects with which I refused to become associated.

This is also what gives the lie to Robert Work’s attempt to discredit Google’s AI work with China. Open AI research is like Open Source software itself: usable for good or evil, but open to all, and light years away from projects with primarily battle-oriented intents.

Google and other firms, including their managements and employees, will of course need to find their own paths forward in terms of what sorts of work and contracts they are willing to pursue that may involve the Department of Defense or other military-associated organizations. As we’ve seen with ARPANET, some basic research work funded by the military can indeed yield immense positive benefits to the country and the world.

Personally, I find the concept of a dividing line between such basic research — as opposed to clearly battle-oriented projects — to be a useful guide toward determining which sorts of projects meet my own ethical standards — and which ones do not. As the saying goes, your mileage may vary.

But in any case, we should all utterly ignore Robert Work’s repulsive attempt to shame Googlers and Google itself — and relegate his way of thinking to the dustbin of history where it truly belongs.

–Lauren–

How the Dominant ISPs Are Trying to Scare People Into Opposing California Net Neutrality

Are there any sordid depths to which the crooked, lying, dominant ISPs won’t go to try to terrify people into opposing Net Neutrality in California? Nope, let’s face it, these firms spout outright lies as if they were Donald Trump. Yep, seriously evil, as this robocall voicemail currently in circulation so clearly demonstrates! – https://lauren.vortex.com/crooked-isps-ca-822.mp3

–Lauren–

A Modest Proposal: Identifying Europeans on the Internet for Their Protection

With European politicians and regulators continuing to churn out proposed regulations to protect their citizens from the evils of the Internet (via “The Right To Be Forgotten”, plus the Article 11 “link tax” and Article 13 content filtering censorship proposals currently under consideration), it is becoming more important than ever that Internet sites around the world be able to identify European users, so that they may be afforded “appropriate” treatment at those sites, including blocking from all services as necessary.

Already, some Europeans are suggesting that they will attempt to evade the restrictions that have been implemented or proposed by their beneficent and magnificent leaders. The world must band together to prevent European users from pursuing such a tragic course of action.

Obviously, all VPN usage by Europeans that attempts to obscure the European geographic locations of their source IP addresses must be banned. In fact, it would be even safer for Europeans if all usage of VPNs by Europeans were prohibited by their governments, except under extraordinary circumstances requiring government licenses and monitoring for inappropriate usage.

All web browsers used by Europeans should be required to send a special “protected European resident” flag to server sites, so that those sites may determine the appropriate blocking or other disposition of those browser requests. Use of unapproved browsers or tampering with browsers to remove this protection flag would of course be a criminal act.
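
To show just how trivially sites could comply, here is a purely hypothetical sketch in Python. The header name and the entire mechanism are, of course, my own invention for this modest proposal; no such flag actually exists.

PROTECTION_HEADER = "X-Protected-European-Resident"  # invented header name, purely illustrative

def disposition(request_headers: dict) -> str:
    """Decide how a dutiful server should treat a request bearing the protection flag."""
    if request_headers.get(PROTECTION_HEADER, "").lower() in ("1", "true", "yes"):
        return "block"  # afford the visitor the full protection of their regulators
    return "serve"      # everyone else gets the ordinary, unprotected Internet

print(disposition({"X-Protected-European-Resident": "1"}))  # -> block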

We must also solve the problem of Europeans traveling outside of Europe, where they might be tempted to use public Internet access systems that do not meet the high standards of protection required by European regulations.

One possible solution to this dilemma would be to require the permanent implantation of RFID identification capsules in all Europeans who travel beyond the protected confines of Europe. Don’t worry — these need not individually identify any given person, they need only identify them as European. Scanning equipment at public computers around the planet could detect these implants and automatically apply appropriate European protection rules. Europeans would be free to travel the world with no fears of accidentally using systems that did not apply their government’s protective regulations!

This modest proposal of course only scratches the surface of the sorts of solutions that will be needed to help assure that EU citizens fully and completely abide by their governments’ benevolent actions and requirements.

But the EU and its residents can feel confident that the rest of the world’s Internet will do its part to help keep Europeans safe, secure, and law-abiding at all times!

–Lauren–