T-Mobile’s Scammy New Online Payment System


Traditionally, one of the things subscribers have really liked about T-Mobile is how quickly and easily they could pay their bills online. A few seconds was usually all it took, and it could always be done securely.

No more. T-Mobile has now taken their online payment system over to the dark side, using several well-known methods to try to trick subscribers into taking actions that they probably don’t really want to take.

First, their fancy new JavaScript payment window completely breaks the Chrome browser autofill functions for providing credit card data securely. All credit card data must now be entered manually on that T-Mobile payment page.

One assumes that T-Mobile site designers are smart enough to test such major changes against the major browsers, so perhaps they’re doing this deliberately. But why?

There are clues.

For example, they’ve pre-checked the box for “saving this payment method.” That’s always a terrible policy — many users explicitly avoid saving payment data on individual sites, each of which is subject to its own security lapses, and prefer to save that data securely in their browsers to be entered via autofill.

But if a firm’s goal is to encourage people to accept a default of saving a payment method on the site, breaking autofill is one way to do it, since filling out all of the credit card data every time is indeed a hassle.

There’s more. After you make your payment, T-Mobile now pushes you very hard to make it a recurring autopay payment from that payment method. The “accept” box is big and bright. The option to decline is small and lonely. Yeah, they really want you to turn on autopay, even if it means tricking you into doing it.

Wait! There’s still more! If you don’t have autopay turned on, T-Mobile shows an alert warning you that a line has been “suspended” from autopay and urging you to click and turn it back on. They display this warning even though you never had autopay turned on for that line in the first place.

No, T-Mobile hasn’t broken any laws with any of this. But it’s all scammy at best and really the sort of behavior we’d expect from AT&T or Verizon, not from T-Mobile.

And that’s the truth.

–Lauren–

Facebook’s Staggeringly Stupid and Dangerous Plan to Fight Revenge Porn


I’m old enough to have seen a lot of seriously stupid ideas involving the Internet. But no matter how incredibly asinine, shortsighted, and nonsensical any given concept may be, there’s always room for somebody to come up with something new that drives the needle even further into the red zone of utterly moronic senselessness. And the happy gang over at Facebook has now pushed that poor needle so hard that it’s bent and quivering in total despair. 

Facebook’s new plan to fight the serious scourge of revenge porn is arguably the single most stupid — and dangerous — idea relating to the Internet that has ever spewed forth from a major commercial firm. 

It’s so insanely bad that at first I couldn’t believe that it was real — I assumed it was a satire or parody of some sort. Unfortunately, it’s all too real, and the sort of stuff that triggers an urge to bash your head into the wall in utter disbelief.

The major Internet firms typically now have mechanisms in place for individuals to report revenge porn photos for takedown from postings and search results. Google, for example, has a carefully thought-out and completely appropriate procedure that targeted parties can follow to get such photos removed from search results.

So what’s Facebook’s new plan? They want you to send Facebook your own naked photos even before they’ve been abused by anyone — even though they might never be abused by anyone!

No, I’m not kidding. Facebook’s twisted idea is to collect your personal naked and otherwise compromising sexually related photos ahead of time, so that if they’re ever used for revenge porn later, they can be blocked from showing up on Facebook. Whether or not it’s a great idea to have photos like that around in the first place is a different topic, but note that by definition we’re talking about photos already in your possession, not secret photos surreptitiously shot by your ex — which are much more likely to be the fodder for revenge porn.

Now, you don’t need to be a security or privacy expert, or a computer scientist, to see the gaping flaws in this creepy concept. 

No matter what the purported “promises” of privacy and security for transmitting these photos and handling them at Facebook, the submissions would create an enormous risk to the persons sending them if anything happened to go wrong. I won’t even list the voluminous possibilities for disaster in Facebook’s approach — most of them should be painfully obvious to pretty much everyone.

Facebook appears to be trying to expand into this realm from a methodology already used against child abuse photos, where such abuse photos already in circulation are “hashed” into digital “signatures” that can be matched if new attempts are made to post them. The major search and social media firms already use this mechanism quite successfully. 

But again, that involves child images that are typically already in public circulation and have already done significant damage.

In contrast, Facebook’s new plan involves soliciting nude photos that typically have never been in public circulation at all — well, at least before being sent in to Facebook for this plan, that is. 

Yes, Facebook will put photos at risk of abuse that otherwise likely would never have been abused!

Facebook wants your naked photos on the theory that holy smokes, maybe someday those photos might be abused and isn’t it grand that Facebook will take care of them for us in advance!

Is anybody with half a brain buying their spiel so far? 

Would there be technically practical ways to send photo-related data to Facebook that would avoid the obvious pitfalls of their plan? Yep, but Facebook has already shot them down.

For example, users could hash the photos using software on their own computers, then submit only those hashes to Facebook for potential signature matching — Facebook would never have the actual photos.
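To make the idea concrete, here is a minimal sketch of the local-hashing approach in Python. It uses an ordinary cryptographic digest purely for illustration (a real matching system would need a perceptual hash along the lines of PhotoDNA, so that resized or re-encoded copies still match), but it shows that only a short digest, never the photo itself, would have to leave the user’s machine:

```python
import hashlib
import sys

def photo_digest(path: str) -> str:
    """Compute a digest of an image file entirely on the user's own computer.

    Illustration only: SHA-256 matches byte-identical files, while a production
    system would use a perceptual hash so that altered copies still match.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Only this short hex string would ever be submitted for matching;
    # the photo itself stays on the local machine.
    print(photo_digest(sys.argv[1]))
```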

Or, users could submit “censored” versions of those photos to Facebook. In fact, when individuals request that Google remove revenge porn photos, Google explicitly urges them to use photo editing tools to black out the sensitive areas of the photos, before sending them to Google as part of the removal request — an utterly rational approach.
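Blacking out the sensitive regions locally is trivial to do before anything is submitted. As a rough sketch (the filenames and coordinates here are hypothetical), a few lines of Python with the Pillow library are enough:

```python
from PIL import Image, ImageDraw

def redact_region(src_path: str, dst_path: str, box: tuple) -> None:
    """Black out a rectangular region of an image before it goes anywhere.

    `box` is (left, top, right, bottom) in pixels; the user chooses which
    area to censor. The original file is left untouched.
    """
    img = Image.open(src_path)
    ImageDraw.Draw(img).rectangle(box, fill="black")
    img.save(dst_path)

# Hypothetical usage: censor the region from (100, 200) to (500, 600).
redact_region("original.jpg", "redacted.jpg", (100, 200, 500, 600))
```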

Facebook will have none of this. Facebook says that you must send them the uncensored photos with all the goodies intact. They claim that local hashing won’t work, because they need to have humans verify the original uncensored photos before they’re “blurred” for long-term storage. And they fear that allowing individuals to hash photos locally would subject the hashing algorithms to reverse engineering and exploitation.

Yeah, Facebook has an explanation for everything, but taken as a whole it makes no difference — the entire plan is garbage from the word go.

I don’t care how trusted and angelic the human reviewers of those uncensored submitted nude photos are supposed to be or what “protections” Facebook claims would be in place for those photos. Tiny cameras capable of copying photos from internal Facebook display screens could be anywhere. If human beings at Facebook ever have access to those original photos, you can bet your life that some of those photos are eventually going to leak from Facebook one way or another. You’ll always lose your money betting against human nature in this regard.

Facebook should immediately deep-six, bury, terminate, and otherwise cancel this ridiculous plan before someone gets hurt. And next time, Facebook bros, how about doing some serious thinking about the collateral risks of your grand schemes before announcing them and ending up looking like such out-of-touch fools?

–Lauren–

3D Printed Wall Mount for the Full-Sized Google Home


Since the 3D printed wall mount for my Google Home Mini worked out quite nicely (details here), I went ahead yesterday and printed a different type of wall mount for my original Google Home (which is better suited for music listening given its larger and more elaborate speaker system — it even has decent bass response).

Performance of the Google Home when mounted on the wall seems exemplary, both in terms of audio reproduction and the performance of its integral microphones. 

The surface of the mount meshes with the contours on the bottom of the Google Home unit, providing additional stability.

At the end of this post, I’ve included photos of the printed mount itself, the mount on the wall with Google Home installed, and a very brief video excerpt of the printing process. 

The model for this mount is from “westlow” at: https://www.thingiverse.com/thing:2426589 (I used the “V2” version).

As always, if you have any questions, please let me know. 

Be seeing you.

–Lauren–

(Please click images to enlarge.)

Some Background on 3D Printing Gadgets for the Google Home Mini

UPDATE (October 30, 2017): 3D Printed Wall Mount for the Full-Sized Google Home

– – –

Over on Google+ I recently posted several short items regarding a tiny plastic mount that I 3D printed a couple of days ago to hang my new Google Home Mini on my wall (see the 2nd and 3rd photos below; for the actual model file, please see: https://www.thingiverse.com/thing:2576121 by “Jakewk13”).

This virtually invisible wall mount is perfectly designed for the Mini and couldn’t be simpler. Technically, the Mini is upside down when you use this mount, but of course it works just fine. Thanks Google for sending me a Mini for my ongoing experiments!

I’ve since received quite a few queries about my printing facilities, such as they are.

So the 1st photo below shows my 3D printer setup. Yes, it looks like industrial gear from one of the “SAW” torture movies, but I like it that way. This is an extremely inexpensive arrangement, where I make up for the lack of expensive features with a fair degree of careful ongoing calibration and operational skill, but it serves me pretty well. I can’t emphasize enough how critical accurate calibration is with 3D printing, and there’s a significant learning curve involved.

The basic unit started as a very cheap Chinese clone printer kit that I built and mounted on that heavy board for stability. Then, hardware guy that I’ve always been, I started modifying. As is traditional, many of the additions and modifications were themselves printed on that printer. This includes the filament reel support brackets, calibration rods, filament guide, inductive sensor mount, and more. I installed an industrial inductive sensor at the forward left of the black extruder unit, to provide more precise Z-axis homing and to enable automatically adjusted print extrusion leveling.

I replaced the original cruddy firmware with a relatively recent Repetier dev build, which also enabled the various inductive sensor functions. I had to compile out the SD card support to make room for this build in my printer controller — but I never used the SD card on the printer (intended for standalone printing) anyway.

On the build platform, I use ordinary masking tape, which gets a thin coat of glue stick immediately after I put the tape down. The tape and glue can last for quite a few prints before needing replacement.

I mainly print PLA filament. I never touch ABS — it warps, and its fumes smell awful and are highly toxic.

I almost always print at an extruder temperature of 205C and a bed temperature of 55C.

The printer is driven by Repetier Server, which runs on Ubuntu 14.04 via Crouton on an older Chrome OS Chromebook. I typically use the Linux version of Cura for model slicing.

I know, it’s all laughably inexpensive and not at all fancy by most people’s standards, but it does the job for me when I want to hang a Google gadget on the wall or need the odd matter-antimatter injector guide servo nozzle in a hurry.

Yep, it really is the 21st century.

–Lauren–

(Please click images to enlarge.)

Understanding Google’s New Advanced Protection Program for Google Accounts


I’ve written many times about the importance of enabling 2-factor authentication on your Google accounts (and other accounts, where available) as a basic security measure, e.g. in “Do I really need to bother with Google’s 2-Step Verification system? I don’t need more hassle and my passwords are pretty good” — https://plus.google.com/+LaurenWeinstein/posts/avKcX7QmASi — and in other posts too numerous to list here.  

Given this history, I’ve now begun getting queries from readers regarding Google’s newly announced and very important “Advanced Protection Program” (APP) for Google accounts — most queries being variations on “Should I sign up for it?”

The APP description and “getting started” page is at:

https://landing.google.com/advancedprotection/

It’s a well-designed page (except for the now-usual atrocious low-contrast Google text font) with lots of good information about this program. It really is a significant increase in security that ordinary users can choose to activate, and yes, it’s free (except for the cost of purchasing the required physical security keys, which are available from a variety of vendors).

But back to that question. Should you actually sign up for APP?

That depends.

For the vast majority of Google users, the answer is likely no: you probably don’t actually need it, given the additional operational restrictions that it imposes.

However, especially for high-profile users who are most likely to be subjected to specifically targeted account attacks, APP is pretty much exactly what you need, and it will provide you with a level of account security typically unavailable to users at other commercial sites, if offered anywhere at all.

Essentially, APP takes Google’s existing 2-factor paradigm and restricts it to only its highest-security components. Under the standard 2-factor system, USB/Bluetooth security keys are the most secure option, but other options like SMS text messages (to name just one) remain available as well. This provides maximum flexibility for most users and minimizes the chances of their accidentally locking themselves out of their Google accounts.

APP, by contrast, requires the use of these security keys — the other options are no longer available. If you lose the keys, or can’t use them for some reason, you’ll need to go through a special Google account recovery procedure that could take up to several days to complete — a rigorous process to ensure that it’s really you trying to regain access to the account.

There are other security-conscious restrictions to your account as well if you enable APP. For example, third-party apps’ access to your account will be significantly restricted, preventing a range of situations where users might otherwise accidentally grant overly broad permissions from outside apps to Google accounts.

It’s important to remember that there are situations where you may not be able to use security keys at all. Public computers (and ironically, computers in high-security environments) often have unusable USB ports and Bluetooth locked in a disabled mode. These can be important considerations for some users.

Cutting to the chase, Google’s standard 2-factor systems are usually going to be quite good enough for most users and offer maximum flexibility — of course only if you enable them — which, yeah, you really should have done by now!

But in special cases for particularly high-profile or otherwise vulnerable Google users, the Advanced Protection Program could be the proverbial godsend that’s exactly what you’ve been hoping for.

As always, feel free to contact me if you have any additional questions about this.

Be seeing you.

–Lauren–