How the Internet Broke the Planet

I am not an optimistic person by nature. I’ve tended — pretty much through my entire life — to always be wary of how things could go wrong. In some ways, I’ve found this to be a useful skill — when writing code it’s important to cover the range of possible outcomes and error states, and properly provide for their handling in a program or app.

Then again, I’ve never been much fun at parties. When I went to parties. Which has been very infrequently.

Mostly, I’ve spent my adult life in front of computer screens of all sorts (and before that, various forms of teletypes, other teleprinters, and even the occasional 029 keypunch machine).

I started writing publicly in the early 70s at the Internet’s ancestor ARPANET site #1 at UCLA, often on the very early mailing lists like Human-Nets, MsgGroup, or SF-Lovers (yes, and Network-Hackers, too). I even monitored the notorious Wine-Tasters list — though not being much of a drinker I uncharacteristically didn’t have much to say there.

Back then there were no domains, so originally I was LAUREN@UCLA-ATS (the first host on ARPANET) and later LAUREN@UCLA-SECURITY as well.

Much of my writing from those days is still online or has been brought back online. Looking it over now, I find that while there are minor points I might change today, overall I’m still willing to stand by everything I’ve written, even from that distant past.

My pessimism was already coming through in some of those early texts. While many in the ARPANET community were convinced that The Network would bring about the demise of nationalities and the grand rising up of a borderless global world of peace and tranquility, I worried that once governments and politicians really started paying attention to what we were doing, they’d find ways to warp it to their own personal and political advantages, perhaps using our technology for new forms of mass censorship.

And I feared that if the kind of networking tech we had created ever found its way into the broader world, evil would ultimately be more effective at leveraging its power than good would be.

Years and decades went by, as I stared at a seemingly endless array of screens and no doubt typed millions of words.

So we come to today, and I’m still sitting here in L.A. — the city where I’ve always lived — and I see how the Internet has been fundamentally broken by evil forces only some of which I foresaw years ago.

Our wonderful technology has been hijacked by liars, Nazis, pedophiles, sexually abusive politicians, and an array of other despicable persons who could only gladden the hearts of civilization’s worst tyrants.

Our work has been turned into tools for mass spying, mass censorship, political oppression, and the spreading of hateful lies and propaganda without end.

I have never claimed to be evenhanded or dispassionate when it came to my contributions to — and observations of — the Internet and its impact on the world at large.

Indeed the Net is a wonder of civilization, on par with the great inventions like the wheel, like the printing press, like penicillin. But much as nuclear fission can be used to kill cancer or decimate cities, the Internet has proven to be a quintessential tool that can be used for both good and evil, for glories of education and communications and the availability of information, but also for the depths of theft and extortion and hate.

The dark side seems to be winning out, so I won’t pull any punches here. 

I have enormous respect for Google. I have pretty much nothing but disdain for Facebook. My feelings about Twitter are somewhere in between. It’s difficult these days to feel much emotion at all about Microsoft one way or another.

None of these firms — or the other large Internet companies — are all good or all bad. But it doesn’t take rocket science (or computer science for that matter) to perceive how Google is about making honest information available, Facebook is about controlling information and exploiting users, and Twitter doesn’t seem to really care anymore one way or another, so long as they can keep their wheels turning.

This is obviously something of an oversimplification. Perhaps you disagree with me — sometimes, now, or always — and of course that’s OK too.

But I do want you to know that I’ve always strived to offer my honest views, and to never arbitrarily nor irrationally take sides on an issue. If the result has been that at one time or another pretty much everyone has disagreed with something I’ve said — so be it. I make no apologies for the opinions that I’ve expressed, and I’ve expected no apologies in return.

In the scheme of things, the Internet is still a child, with a lifetime to date even shorter than that of us frail individual human animals.

Time will reveal whether our work in this sphere is seen as a blessing or curse — or most likely as some complex brew of both — by generations yet to come. Some of you will see that future for yourselves; many of us will not.

Such is the way of the world — not only when it comes to technology, but in terms of virtually all human endeavors.

Take care, all.


Google Maps’ New Buddhist “Swastika”

I’m already getting comments — including from Buddhists — suggesting that Google Maps’ new iconography tagging Buddhist temples with the ancient symbol that is perceived by most people today as a Nazi swastika is problematic at best, and is likely to be widely misinterpreted. I agree. I’m wondering if Google consulted with the Buddhist community before making this choice. If not, now is definitely the time to do so.


UPDATE (November 16, 2017): Google tells me that they are restricting use of this symbol to areas like Japan “where it is understood” and are using a different symbol for localization in most other areas. I follow this reasoning, but it’s unclear that it avoids the problems with such a widely misunderstood symbol. For example, I’ve received concerns about this from Buddhists in Japan, who fear that the symbol will be “latched onto” by haters in other areas. And indeed, I’ve already been informed of “Nazi Japan” posts from the alt-right that cite this symbol. The underlying question is whether or not such a “hot button” symbol can really be restricted by localization into not being misunderstood in other areas and causing associated problems. That’s a call for Google to make, of course.

Google’s Extremely Shortsighted and Bizarre New Restrictions on Accessibility Services

UPDATE (November 17, 2017): Thanks Google for working with LastPass on this issue! – Google details Autofill plans in Oreo as LastPass gets reprieve from accessibility removals

 – – –

My inbox has been filling today with questions regarding Google’s new warning to Android application developers that they will no longer be able to access Android accessibility service functions in their apps, unless they can demonstrate that those functions are specifically being used to help users with “disabilities” (a term not defined by Google in the warning).

Beyond the overall vagueness when it comes to what is meant by disabilities, this entire approach by Google seems utterly wrongheaded and misguided.

My assumption is that Google wants to try to limit the use of accessibility functions on the theory that some of them might represent security risks of one sort or another in specific situations.

If that’s actually the case — and we can have that discussion separately — then of course Google should disable those functions entirely — for all apps. After all, “preferentially” exposing disabled persons to security risks doesn’t make any sense.

But more to the point, these accessibility functions are frequently employed by widely used and completely legitimate apps that use these functionalities to provide key features that are not otherwise available under various versions of Android still in widespread deployment.

Google’s approach to this situation just doesn’t make sense. 

Let’s be logical about this.

If accessibility functions are too dangerous from security or other standpoints to potentially be used in all legitimate apps — including going beyond helping disabled persons per se — then they should not be permitted in any apps.

Conversely, if accessibility functions are safe enough to use for helping disabled persons using apps, then they should be safe enough to be used in any legitimate apps for any honest purposes.

The determining factor shouldn’t be whether or not an app is using an accessibility service function within the specific definition of helping a particular class of users, but rather whether or not the app is behaving in an honest and trustworthy manner when it uses those functions.

If a well-behaved app needs to use an accessibility service to provide an important function that doesn’t directly help disabled users, so what? There’s nothing magical about the term accessibility.

Apps functioning honestly that provide useful features should be encouraged. Bad apps should be blown out of the Google Play Store. It’s that simple, and Google is unnecessarily muddying up this distinction with their new restrictions.

I encourage Google to rethink their stance on this issue.


T-Mobile’s Scammy New Online Payment System

Traditionally, one of the aspects of T-Mobile that subscribers have really liked is how quickly and easily they could pay their bills online. A few seconds was usually all that was needed, and it could always be done in a security-positive manner.

No more. T-Mobile has now taken their online payment system over to the dark side, using several well-known methods to try to trick subscribers into taking actions that they probably don’t really want to take in most instances.

First, their fancy new JavaScript payment window completely breaks the Chrome browser autofill functions for providing credit card data securely. All credit card data must now be entered manually on that T-Mobile payment page.

One assumes that T-Mobile site designers are smart enough to test such major changes against the major browsers, so perhaps they’re doing this deliberately. But why?

There are clues.

For example, they’ve pre-checked the box for “saving this payment method.” That’s always a terrible policy — many users explicitly avoid saving payment data on individual sites subject to individual security lapses, and prefer to save that data securely in their browsers to be entered onto sites via autofill.

But if a firm’s goal is to encourage people to accept a default of saving a payment method on the site, breaking autofill is one way to do it, since filling out all of the credit card data every time is indeed a hassle.

There’s more. After you make your payment, T-Mobile now pushes you very hard to make it a recurring autopay payment from that payment method. The “accept” box is big and bright. The option to decline is small and lonely. Yeah, they really want you to turn on autopay, even if it means tricking you into doing it.

Wait! There’s still more! If you don’t have autopay turned on, T-Mobile shows an alert, warning you that a line has been “suspended” from autopay and urging you to click and turn it back on. They display this warning even if you never had autopay turned on for that line in the first place.

No, T-Mobile hasn’t broken any laws with any of this. But it’s all scammy at best and really the sort of behavior we’d expect from AT&T or Verizon, not from T-Mobile.

And that’s the truth.


Facebook’s Staggeringly Stupid and Dangerous Plan to Fight Revenge Porn

I’m old enough to have seen a lot of seriously stupid ideas involving the Internet. But no matter how incredibly asinine, shortsighted, and nonsensical any given concept may be, there’s always room for somebody to come up with something new that drives the needle even further into the red zone of utterly moronic senselessness. And the happy gang over at Facebook has now pushed that poor needle so hard that it’s bent and quivering in total despair. 

Facebook’s new plan to fight the serious scourge of revenge porn is arguably the single most stupid — and dangerous — idea relating to the Internet that has ever spewed forth from a major commercial firm. 

It’s so insanely bad that at first I couldn’t believe that it was real — I assumed it was a satire or parody of some sort. Unfortunately, it’s all too real, and the sort of stuff that triggers an urge to bash your head into the wall in utter disbelief.

The major Internet firms typically now have mechanisms in place for individuals to report revenge porn photos for takedown from postings and search results. Google for example has a carefully thought out and completely appropriate procedure that targeted parties can follow in this regard to get such photos removed from search results. 

So what’s Facebook’s new plan? They want you to send Facebook your own naked photos even before they’ve been abused by anyone — even though they might never be abused by anyone!

No, I’m not kidding. Facebook’s twisted idea is to collect your personal naked and otherwise compromising sexually-related photos ahead of time, so just in case they’re used for revenge porn later, they can be prevented from showing up on Facebook. Whether or not it’s a great idea to have photos like that around in the first place is a different topic, but note that by definition we’re talking about photos already in your possession, not secret photos surreptitiously shot by your ex — which are much more likely to be the fodder for revenge porn.

Now, you don’t need to be a security or privacy expert, or a computer scientist, to see the gaping flaws in this creepy concept. 

No matter what the purported “promises” of privacy and security for the transmission of these photos and how they’d be handled at Facebook, they would create an enormous risk to the persons sending them if anything happened to go wrong. I won’t even list the voluminous possibilities for disaster in Facebook’s approach — most of them should be painfully obvious to pretty much everyone.

Facebook appears to be trying to expand into this realm from a methodology already used against child abuse photos, where such abuse photos already in circulation are “hashed” into digital “signatures” that can be matched if new attempts are made to post them. The major search and social media firms already use this mechanism quite successfully. 
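The matching mechanism described above can be sketched in a few lines. This is a minimal illustration only: the function and database names are hypothetical, and production systems such as Microsoft’s PhotoDNA use *perceptual* hashes that survive resizing and re-encoding, not the plain cryptographic hash stood in here to show the flow.

```python
import hashlib

# Hypothetical store of signatures from previously reported images.
# Note that only signatures are kept; the images need not be retained.
known_signatures: set[str] = set()

def signature(image_bytes: bytes) -> str:
    """Compute a digital signature for an image's raw bytes.

    SHA-256 is used here purely for illustration; real matching systems
    use perceptual hashing so that minor edits still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def register_reported_image(image_bytes: bytes) -> None:
    # Called when an abusive image is reported and verified.
    known_signatures.add(signature(image_bytes))

def matches_known_image(image_bytes: bytes) -> bool:
    # Called at upload time to block re-posting of a reported image.
    return signature(image_bytes) in known_signatures
```

The point of the design is that once a signature exists, future upload attempts can be blocked without anyone re-examining, or even storing, the offending image itself.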

But again, that involves child images that are typically already in public circulation and have already done significant damage.

In contrast, Facebook’s new plan involves soliciting nude photos that typically have never been in public circulation at all — well, at least before being sent in to Facebook for this plan, that is. 

Yes, Facebook will put photos at risk of abuse that otherwise likely would never have been abused!

Facebook wants your naked photos on the theory that holy smokes, maybe someday those photos might be abused and isn’t it grand that Facebook will take care of them for us in advance!

Is anybody with half a brain buying their spiel so far? 

Would there be technically practical ways to send photo-related data to Facebook that would avoid the obvious pitfalls of their plan? Yep, but Facebook has already shot them down.

For example, users could hash the photos using software on their own computers, then submit only those hashes to Facebook for potential signature matching — Facebook would never have the actual photos.
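A local-hashing flow of that sort is straightforward to sketch. Again, this is an illustration under stated assumptions, not Facebook’s actual method: SHA-256 stands in for whatever matching algorithm a real service would use, and the function name is hypothetical.

```python
import hashlib

def local_hash(photo_path: str) -> str:
    """Hash a photo on the user's own machine.

    Only this hex digest would ever leave the device; the photo
    itself is never transmitted anywhere. The file is read in chunks
    so even large images don't need to fit in memory at once.
    """
    digest = hashlib.sha256()
    with open(photo_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Under this scheme the service would receive only opaque digests to match against, never the original images.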

Or, users could submit “censored” versions of those photos to Facebook. In fact, when individuals request that Google remove revenge porn photos, Google explicitly urges them to use photo editing tools to black out the sensitive areas of the photos, before sending them to Google as part of the removal request — an utterly rational approach.

Facebook will have none of this. Facebook says that you must send them the uncensored photos with all the goodies intact. They claim that local hashing won’t work, because they need to have humans verify the original uncensored photos before they’re “blurred” for long-term storage. And they fear that allowing individuals to hash photos locally would subject the hashing algorithms to reverse engineering and exploitation.

Yeah, Facebook has an explanation for everything, but taken as a whole it makes no difference — the entire plan is garbage from the word go.

I don’t care how trusted and angelic the human reviewers of those uncensored submitted nude photos are supposed to be or what “protections” Facebook claims would be in place for those photos. Tiny cameras capable of copying photos from internal Facebook display screens could be anywhere. If human beings at Facebook ever have access to those original photos, you can bet your life that some of those photos are eventually going to leak from Facebook one way or another. You’ll always lose your money betting against human nature in this regard.

Facebook should immediately deep-six, bury, terminate, and otherwise cancel this ridiculous plan before someone gets hurt. And next time, Facebook bros, how about doing some serious thinking about the collateral risks of your grand schemes before announcing them and ending up looking like such out-of-touch fools?