The Right’s (and Left’s) Insane Internet Content Power Grab

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze those making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Another Breach: What Capital One Could Have Learned from Google’s “BeyondCorp”

Another day, another massive data breach. This time some 100 million people in the U.S. are affected, along with millions more in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS, but with a misconfigured firewall associated with Capital One, the bank whose credit card customers and card applicants were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better approaches to network security already exist.
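To make “misconfigured” concrete, here’s a minimal sketch (in Python with boto3) of the kind of overly broad ingress rule that gets security teams into trouble, contrasted with a narrower one. This is purely illustrative and is not Capital One’s actual configuration; the security group ID and the admin network range are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
GROUP_ID = "sg-0123456789abcdef0"  # hypothetical security group ID

# The classic misconfiguration: a management port left open to the entire Internet.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0",  # anyone, anywhere, can try to connect
                      "Description": "ssh (far too broad)"}],
    }],
)

# A narrower rule: same port, but reachable only from a specific management network.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",  # hypothetical admin subnet
                      "Description": "ssh from management network only"}],
    }],
)
```

Multiply rules like these across hundreds of security groups, proxies, and WAF layers, and it’s easy to see how one overlooked entry can open a path to the data sitting behind it.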

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates traditional firewalls (and in nearly all instances, VPNs) because there is no longer any need for such devices or systems that, once breached, give an attacker access to internal goodies.
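To give a feel for the model, here’s a minimal sketch of the zero-trust idea in Python. This is not Google’s implementation, and every name in it is a hypothetical placeholder; the point is simply that each request is judged on the user’s identity and the device’s state, never on which network the request happens to arrive from.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str                  # authenticated identity (e.g., via SSO)
    device_id: str             # device presenting a valid client certificate
    device_patched: bool       # device inventory says OS/browser are current
    resource_sensitivity: int  # 0 = low, 2 = high

# Hypothetical device inventory and group membership stores.
MANAGED_DEVICES = {"laptop-123"}
ENGINEERING_USERS = {"alice", "bob"}

def authorize(req: AccessRequest) -> bool:
    """Decide per request; the network the request came from is never consulted."""
    if req.device_id not in MANAGED_DEVICES:
        return False  # unknown device: deny, even if it's "inside" the corporate LAN
    if not req.device_patched:
        return False  # out-of-date device: deny until it's updated
    if req.resource_sensitivity >= 2:
        return req.user in ENGINEERING_USERS  # sensitive resources need group membership
    return True

print(authorize(AccessRequest("alice", "laptop-123", True, 2)))        # True
print(authorize(AccessRequest("mallory", "unknown-device", True, 0)))  # False
```

Once access decisions look like this, a breached network perimeter no longer hands an attacker the keys to everything behind it, which is precisely the failure mode that firewall-centric designs invite.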

If Capital One had been following BeyondCorp principles, there’d likely be 100+ million fewer potentially panicky people today.

–Lauren–

Earthquakes vs. Darth Vader

When the Ridgecrest earthquake reached L.A. yesterday evening (no damage this far from the epicenter from that quake or the one the previous day) I was “in” a moving elevator under attack in the “Vader Immortal” Oculus Quest VR simulation. I didn’t realize that there was a quake at all; everything seemed part of the VR experience (haptic feedback in the hand controllers was already buzzing my arms at the time).

The only oddity was that I heard a strange clinking sound that had no obvious source at the time, but that I figured was somehow part of the simulation. Actually, it was probably the sound of ceiling fan knob chains above me hitting the glass light bulb fixtures as the fan presumably swayed a bit.

Quakes of this sort are actually very easy to miss if you’re not sitting or standing quietly (I barely felt the one the previous day and wasn’t immediately sure that it was a quake), but I did find my experience last night to be rather amusing in retrospect.

By the way, “Vader Immortal” — and the Quest itself — are very, very cool, very much 21st century “sci-fi” tech finally realized. My thanks to Oculus for sending me a Quest for my experiments.

–Lauren–

YouTube’s Public Videos Dilemma

So there’s yet another controversy surrounding YouTube and videos that include young children — this time concerns about YouTube suggesting such videos to “presumed” pedophiles.

We can argue about what YouTube should or should not be recommending to any given user. There are some calls for YT to not recommend such videos when it detects them (an imperfect process) — though I’m not convinced that this would really make much difference so long as the videos themselves are public.

But here’s a more fundamental question:

Why the hell are parents uploading videos of young children publicly to YouTube in the first place?

This is of course a subset of a more general issue — parents who apparently can’t resist posting all manner of photos and other personal information about their children in public online forums, much of which is going to be at the very least intensely embarrassing to those children when they’re older. And the Internet rarely ever forgets anything that was ever public (the protestations of EU politicians and regulators notwithstanding).

There are really only two major possibilities concerning such video uploads. Either the parents don’t care about these issues, or they don’t understand them. Or perhaps both.

Various apps and web pages exist that will automatically display YT videos from around the world that have few or no current views. There’s an endless stream of these. Thousands. Millions? Typically these seem as if they have been automatically uploaded by various camera and video apps, possibly without the uploader specifically intending for the upload to occur at all. Many of these involve schools and children.

So a possible answer to my question above may be that many YT users — including parents of young children — are either not fully aware of what they are uploading, or do not realize that the uploads are public and are subject to being suggested to strangers or found by searching. 

This leads us to another question. YT channel owners already have the ability to set their channel default privacy settings and the privacy settings for each individual video. 

Currently those YT defaults are initially set to public.
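As an aside, channel owners who want to lock down existing uploads don’t have to wait for any change in defaults. The YouTube Data API already exposes a per-video privacy setting; the rough Python sketch below flips one video to private. The video ID and OAuth client secrets file are placeholders, and error handling is omitted.

```python
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

VIDEO_ID = "VIDEO_ID_HERE"  # placeholder video ID

# Obtain OAuth credentials for the channel owner; the client secrets file
# is a placeholder you would download from your own Google API Console project.
flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/youtube"],
)
creds = flow.run_local_server(port=0)

youtube = build("youtube", "v3", credentials=creds)

# Retrieve the current status first, since an update replaces the whole
# "status" part, then change only the privacy setting.
video = youtube.videos().list(part="status", id=VIDEO_ID).execute()["items"][0]
video["status"]["privacyStatus"] = "private"  # or "unlisted"

response = youtube.videos().update(
    part="status",
    body={"id": VIDEO_ID, "status": video["status"]},
).execute()

print(response["status"]["privacyStatus"])
```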

Should YT’s defaults be private rather than public?

Looking at it from a user trust and safety standpoint, we may be approaching such a necessity, especially given the pressure for increased regulatory oversight from politicians and governments, which in my opinion is best avoided if at all possible.

These questions and their ramifications are complex to say the least.

Clearly, default channel and video privacy would be the safest approach, ensuring that videos would typically only be shared with specific other users deemed suitable by the channel owner.

All of the public sharing capabilities of YT would still be present, but would require the owner to make specific decisions about the channel default and/or individual video settings. If a channel owner wanted to make some or all of their videos public, whether existing videos or those uploaded going forward, that would be their choice. Full channel and individual video privacy would only be the original default, purely as a safety measure.

Finer-grained settings might also be possible, including not only existing options like “unlisted” videos, but also specific options to control the visibility of videos and channels in search and suggestions.

Some of the complexities of such an approach are obvious. More controls mean more potential for user confusion. Fewer videos in search and suggestions would limit visibility, and could affect YT revenue streams for both Google and channel owners in complex ways that may be difficult to predict with any accuracy.

But in the end, the last question here seems to be a relatively simple one. Should any YouTube uploaders ever have their videos publicly available for viewing, search, or suggestions if that was not actually their specific and informed intent?

I believe that the answer to that question is no.

Be seeing you.

–Lauren–

A Major New Privacy-Positive Move by Google

Almost exactly two years ago, I noted here the comprehensive features that Google provides for users to access their Google-related activity data, and to control and/or delete it in a variety of ways. Please see:

The Google Page That Google Haters Don’t Want You to Know About – https://lauren.vortex.com/2017/04/20/the-google-page-that-google-haters-dont-want-you-to-know-about

and:

Quick Tutorial: Deleting Your Data Using Google’s “My Activity” – https://lauren.vortex.com/2017/04/24/quick-tutorial-deleting-your-data-using-googles-my-activity

Today Google announced a new feature that I’ve long been hoping for — the option to automatically delete these kinds of data after specific periods of time have elapsed (3-month and 18-month options). And of course, you still have the ability to use the longstanding manual features for control and deletion of such data whenever you desire, as described at the links mentioned above.

The new auto-delete feature will be deployed over coming weeks first to Location History and to Web & App Activity.

This is really quite excellent. It means that you can take advantage of the customization and other capabilities made possible by leaving data collection enabled, and if you’re concerned about longer-term storage of that data, you can activate auto-delete to get the best of both worlds without needing to manually delete data yourself at intervals.

Auto-delete is a major privacy-positive milestone for Google, and is a model that other firms should follow. 

My kudos to the Google teams involved!

–Lauren–