Google Security’s User Confusion Continues

As I’ve noted many times, Google has world-class security and privacy teams. Great people.

But at least judging from the Google-related queries I get in my inbox every day, Google’s expanding efforts to warn users about perceived security issues are sowing increasing confusion and in some cases serious concerns, especially among nontechnical users who depend upon Google’s products and services in their daily lives.

A new example popped up today that I’ll get to in a moment, but I’ve been discussing these issues for quite a while, e.g.:

“When Google’s Chrome Security Warnings Can Do More Harm Than Good” –
https://lauren.vortex.com/archive/001157.html

and:

“Here’s Where Google Hid the SSL Certificate Information That You May Need” –
https://lauren.vortex.com/2017/01/28/heres-where-google-hid-the-ssl-certificate-information-you-may-need

In a nutshell, Google’s continuing efforts at increasing user security — while utterly justifiable at the technical level — continue to marginalize many users who don’t really understand what Google is doing, are confused by Google’s security and other warnings, can’t effectively influence websites with “poor” security to make security improvements, and in any case have no alternative to accessing those sites.

These are real people — I believe many millions of them — and I do not believe that Google really understands how important they are and how Google is leaving them behind.

Today brought yet another illustrative example that yes, even confused me for a time.

It involves cat food.

A friend forwarded me an email from PetSmart that included a link for an individualized 30% off coupon that they intended to use to buy cat food. That’s a damned good coupon, especially for those of us who aren’t rolling in dough. I wish I had a coupon like that today for Leela the Siamese Snowshoe.

The concern with this email was that every time the user clicked on the link in Gmail to access the site where the coupon could be printed, Gmail popped a modal security warning:

“Suspicious link – This link leads to an untrusted site. Are you sure you want to proceed to click email-petsmart.com?”

You can see a screenshot at the bottom of this post.

The obvious questions: What the hell does “suspicious link” mean in this context? And what does Google mean by “untrusted site” here?

There are no links to explanations, and if you Google around you can find lots of people asking similar questions about this class of Gmail warning, but no definitive answers, just lots of (mostly uninformed) speculation.

So I spent about 15 minutes digging this one down. Is email-petsmart.com a phishing domain targeting PetSmart users? Apparently not. It’s registered to ExactTarget, Inc. and has been registered since 2012. So while there’s no obvious authoritative mention of PetSmart there, my experience leads me to believe that they’re most likely a legit marketing partner of PetSmart, providing those emails and coupon services.
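For the technically inclined, the parsing side of this kind of quick vetting can be sketched in a few lines of Python. The WHOIS sample below is invented for illustration (apart from the 2012 registration year noted above — the registrar name and field layout are hypothetical, and real WHOIS servers vary); the idea is simply that an old registration date is weak but useful evidence against a throwaway phishing domain:

```python
import re
from datetime import datetime, timezone

def summarize_whois(raw: str) -> dict:
    """Extract the fields most useful for a quick "is this a throwaway
    phishing domain?" sanity check: the registrar and creation date."""
    fields = {}
    for key, label in (("registrar", "Registrar"),
                       ("created", "Creation Date")):
        m = re.search(rf"^\s*{label}:\s*(.+)$", raw,
                      re.IGNORECASE | re.MULTILINE)
        if m:
            fields[key] = m.group(1).strip()
    if "created" in fields:
        created = datetime.fromisoformat(
            fields["created"].replace("Z", "+00:00"))
        fields["age_years"] = (datetime.now(timezone.utc) - created).days // 365
    return fields

# Invented WHOIS-style output (real responses contain many more fields,
# and layouts differ between WHOIS servers):
sample = """\
Domain Name: EMAIL-PETSMART.COM
Registrar: EXAMPLE REGISTRAR, LLC
Creation Date: 2012-06-01T00:00:00Z
"""

info = summarize_whois(sample)
print(f"Registrar: {info['registrar']}; created {info['created']}"
      f" (~{info['age_years']} years ago)")
```

Of course, this only gathers evidence — it can’t answer the underlying question of why Gmail flagged the link, which is exactly the problem.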

Of course, I still have no information about why Google is tagging them as suspicious. Is it the lack of https: security on the URL? Is it some aspect of their email-petsmart naming schema?

Damned if I know. Google isn’t telling me. And how would the average non-techie be expected to unravel any of this?

I told the user to go ahead and click the link. They got their coupon. Their kitties should be happy.

I’m not happy.

In the real world, most users don’t understand this stuff at the level they need to make truly informed decisions. So they’re forced — simply to get on with their lives every day — to click through such warnings blindly, to get to where they need to go.

And make no mistake about it, these kinds of scenarios are teaching these users absolutely abysmal security habits.

Google is terrific at tech. But Google is still struggling when it comes to understanding the broad range of their users and those users’ needs — particularly the non-techies — and especially how to communicate with those users effectively.

Google can do much better.

–Lauren–

Fighting Government Crippled Encryption by Turning It Off Entirely!

Within hours of the terrible terrorist attack in Manchester earlier this week, UK politicians were already using the tragedy as a springboard to push their demands that Internet firms cripple their encryption systems and deploy a range of other Orwellian measures that would vastly weaken the privacy and security of honest citizens — while handing terrorists and other criminals the keys to our private lives, including our financial and other personal information.

This same thuggish mindset is taking root in other parts of the world, often combined with hypocritical “data localization” requirements designed to make individual nations’ citizens as vulnerable as possible to domestic surveillance operations.

There are basically four ways in which firms can react to these draconian government demands.

They could simply comply, putting their users at enormous and escalating risk, not only from government abuse but also from criminals who would exploit the resulting weak encryption environments (while using “unapproved” strong encryption to protect their own criminal activities). We could expect some firms to go this route in an effort to protect their financial bottom lines, but from an ethical and user trust standpoint this choice is devastating.

Firms could refuse to comply. Litigation might delay the required implementation of crippled encryption, or change some of its parameters. But in the final analysis, these firms must obey national laws where they operate, or else face dramatic fines and other serious sanctions. Not all firms would have the financial ability to engage in this kind of battle — especially given the very long odds of success.

Of course, firms could indeed choose to withdraw from such markets, perhaps in conjunction with geoblocking of domestic users in those countries to meet government prohibitions against strong encryption. Pretty awful prospects.

There is another possibility though — that I’ll admit up front would be highly controversial. Rather than crippling those designated encryption systems in those countries under government orders, firms could choose to disable those encryption systems entirely!

I know that this sounds counterintuitive, but please hang with me for a few minutes!

In this context we’re talking mainly about social media systems where (at least currently) there are no government requirements that messages and postings be encrypted at all. For example, we’re not speaking here of financial or medical sites that might routinely have their own encryption requirements mandated by law (and frankly, where governments usually already have ways of getting at that data).

What governments want now is the ability to spy on our personal Internet communications, in much the same manner as they’ve long spied on traditional telephone voice communications.

An axiom of encryption is that in most situations, weak encryption can be much worse for users than no encryption at all! This may seem paradoxical, but think about it. If you know that you don’t have any encryption at all, you’re far more likely to take care in what you’re transmitting through those channels, since you know that they’re vulnerable to spying. If you believe that you’re protected by encryption, you’re more likely to speak freely.

But the worst case is if you believe that you’re protected by encryption but you really aren’t, because the encryption system is purposely weak and crippled. Users in this situation tend to keep communicating as if they were well protected, when in reality they are highly vulnerable.

Perhaps worse, this state of affairs permits governments to give lip service to the claim that they favor encryption — when in reality the crippled encryption that they permit is a horrific security and privacy farce.

So here’s the concept. If governments demand weak encryption, associated legal battles have ended, and firms still want to serve users in the affected countries, then those firms should consider entirely disabling message/posting encryption on those social media platforms in the context of those countries — and do so as visibly and loudly as possible.

This could get complicated quickly when considering messages/posts that involve multiple countries with and without encryption restrictions, but basically whenever user activities would involve nations with those restrictions, there should be warnings, banners, perhaps even some obnoxious modal pop-ups — to warn everyone involved that these communications are not encrypted — and to very clearly explain that this is the result of government actions against their own citizens. 

Don’t let governments play fast and loose with this. Make sure that users in those countries — and users in other countries that communicate with them — are constantly reminded of what these governments have done to their own citizens.

Also, strong third-party encryption systems not under government controls would continue to be available, and efforts to make these integrate more easily with the large social media firms’ platforms should accelerate.

This is all nontrivial to accomplish and there are a variety of variations on the basic concept. But the goal should be to make it as difficult as possible for governments to mandate crippled encryption and then hypocritically encourage their citizens to keep communicating on these systems as if nothing whatever had changed.

We all want to fight terrorism. But government mandates weakening encryption are fundamentally flawed, in that over time they will not be effective at preventing evildoers from using strong encryption, but do serve to put all law-abiding citizens at enormous risk.

We must resist government efforts to paint crippled encryption targets on the backs of our loved ones, our broader societies, and ourselves.

–Lauren–


Is Google’s New “Store Sales Measurement” System a Privacy Risk?

Within hours of Google announcing their new “Store Sales Measurement” system, my inbox began filling with concerned queries. I held off responding on this until I could get additional information directly from Google. With that now in hand I feel comfortable in addressing this issue.

Executive Summary: I don’t see any realistic privacy problems with this new Google system.

In a nutshell, this program — similar in some respects to a program that Facebook has been operating for some time — provides data to advertisers that helps them determine the efficacy of their ads displayed via Google when purchases are not made online.

The crux of the problem is that an advertiser can usually determine when there are clicks on ads that ultimately convert to online purchases via those ads. But if ads are clicked and then purchases are made in stores, that information is routinely lost.

Our perception of advertising has always been complex — to call it love/hate would be a gross understatement. But the reality is that all of this stuff we use online has to be paid for somehow, even though we’ve come to expect most of it to be free of direct charges.

And with the rise of ad blockers, advertisers are more concerned than ever that their ads are relevant and effective (and all else being equal, studies show that most of us prefer relevant ads to random ones).

Making this even more complicated is that the whole area of ad personalization is rife with misconceptions.

For example, the utterly false belief that Google sells the personal information of their users to advertisers continues to be widespread. But in fact, Google ad personalization takes place without providing any personal data to advertisers at all, and Google gives users essentially complete control over ad personalization (including the ability to disable it completely), via their comprehensive settings at:

https://www.google.com/settings/ads

Google’s new Store Sales Measurement system operates without Google obtaining individual users’ personal purchasing data. The system is double-blind and deals only with aggregated information about the value of total purchases. Google doesn’t learn who made a purchase, what was purchased, or the individual purchase prices. 

Even though this system doesn’t involve sharing of individual users’ personal data, an obvious question I’ve been asked many times over the last couple of days is: “Where did I give permission for my purchase data to be involved in a program like this at all, even if it’s only in aggregated and unidentified forms?”

Frankly, that’s a question for the bank or other financial institution that issues your credit or debit card — they’re the ones that have written their own foundational privacy policies. 

But my sense is that Google has bent over backwards to deploy their new system with additional layers of user privacy protections that go far beyond the typical privacy policies of those institutions themselves.

My bottom line on all this is that, yeah, I understand why many persons are feeling a bit nervous about this kind of system. But in the real world, we still need advertising to keep the Web going, and when a firm has jumped through the hoops as Google has done to increase the value of their advertising without negatively impacting user privacy in the process, I really don’t have any privacy or other associated concerns.

I only wish that all firms showed this degree of diligence.

Don’t hold your breath waiting for that.

–Lauren–

The Coming Fascist Internet

Originally posted November 13, 2011

Around four decades ago or so, at the U.S. Defense Department-funded ARPANET’s first site at UCLA — what would of course become the genesis of the global Internet — I spent a lot of time alone in the ARPANET computer room. I’d work frequently at terminals sandwiched between two large, noisy minicomputers, a few feet from the first ARPANET router — Interface Message Processor (IMP) #1, which empowered the “blindingly fast” 56 Kb/s ARPANET backbone. Somewhere I have a photo of the famous “Robby the Robot” standing next to that nearly refrigerator-sized cabinet and its similarly-sized modem box.

I had a cubicle I shared elsewhere in the building where I also worked, but I kept serious hacker’s hours back then, preferring to work late into the night, and the isolation of the computer room was somehow enticing.

Even the muted roar of the equipment fans had its own allure, further cutting off the outside world (though likely not particularly good for one’s hearing in the long run).

Occasionally in the wee hours, I’d shut off the room’s harsh fluorescent lights for a minute or two, and watch the many blinking lights play across the equipment racks, often in synchronization with the pulsing and clicking sounds of the huge disk drives.

There was a sort of hypnotic magic in that encompassing, flickering darkness. One could sense the technological power, the future coiled up like a tight spring ready to unwind and energize many thousands of tomorrows.

But to be honest, there was little then to suggest that this stark room — in conjunction with similar rooms scattered across the country at that time — would trigger a revolution so vast and far-reaching that governments around the world, decades later, would cower in desperate efforts to leash it, to cage its power, to somehow turn back the clock to a time when communications were more firmly under the thumbs of the powers-that-be.

There were some clues. While it was intended that the ARPANET’s resource sharing capabilities would be the foundation of what we now call the “cloud,” the ARPANET was (somewhat to the consternation of various Defense Department overseers) very much a social space from the beginning.

Starting very early on, ARPANET communications began including all manner of personal discussions and interests, far beyond the narrow confines of “relevant” technical topics. A “wine tasting enthusiasts” mailing list triggered reprimands from DoD when it became publicly known thanks to a magazine article, and I won’t even delve here into the varied wonders of the “network hackers” and “mary hartman” mailing lists.

In fact, the now ubiquitous mailing list “digest” format was originally invented as a “temporary” expedient when “high volumes” of traffic (by standards of the time) threatened the orderly distribution of the science-fiction and fantasy oriented “sf-lovers” mailing list. Many other features that we take for granted today in email systems were created or enhanced largely in reaction to these sorts of early “social” communications on the very young Net.

The early ARPANET was mostly restricted to the U.S., but as international points began to come online the wonders expanded. I still remember the day I found myself in a “talk” (chat) link with a party at a military base in Norway — my first international live contact on the Net that I knew of. I remember thinking then that someday, AT&T was going to start getting concerned about all this.

The power of relatively unfiltered news was also becoming apparent back then. One of my projects involved processing newswire data (provided to me over the ARPANET on a friendly but “unofficial” basis from another site) and building applications to search that content and alert users (both textually and via a synthesized voice phone-calling system — one of my other pet projects) about items of interest.

For much of the Net’s existence, both phone companies and governments largely ignored (or at least downplayed) the ARPANET, even as it evolved toward the Internet of today.

AT&T and the other telcos had explicitly expressed disinterest early on, and even getting them to provide the necessary circuits had at times been a struggle. Governments didn’t really seem to be worried about an Internet “subculture” that was limited mostly to the military, academia, and a variety of “egghead” programmers variously in military uniforms and bell-bottoms, whether sporting crew cuts, scruffy longhairs, or somewhere in-between.

But with the fullness of time, the phone companies, cable companies, governments, and politicians galore came to most intensely pay attention to the Internet, as did the entertainment industry behemoths and a broad range of other “intellectual property” interests.

Their individual concerns actually vary widely at the detailed level, but in a broader context their goals are very much singular in focus.

They want to control the Internet. They want to control it utterly, completely, in every technologically possible detail (and it seems in various technically impossible ways as well).

The freedom of communications with which the Internet has empowered ordinary people — especially one-to-many communications that historically have been limited to governments and media empires themselves — is viewed as an existential threat to order, control, and profits — that is, to historical centers of power.

Outside of the “traditional” aspects of government control over their citizenries, another key element of the new attempts to control the Net is the desperate longing of some parties to turn back the technological clock to a time when music, movies, and other works could not so easily be duplicated and disseminated in both “authorized” and “unauthorized” fashions.

The effective fall of copyright in this context was preordained by human nature (we are physical animals, and the concept of non-physical “property” plays against our natures) and there’s been a relentless “march of bits” — with text, music, and movies entering the fray in turn as ever more data could be economically stored and transferred.

In their efforts to control people and protect profits, governments and associated industries (often in league with powerful Internet Service Providers — ISPs — who in some respects are admittedly caught in the middle), seem willing to impose draconian, ultimately fascist censorship, identification, and other controls on the Internet and its users, even extending into the basic hardware in our homes and offices.

I’ve invoked fascism in this analysis, and I do not do so lightly.

The attacks on fundamental freedoms to communicate that are represented by various government repression of the Internet around the world, and in the U.S. by hypocritical legislation like PROTECT IP and SOPA (E-PARASITE), are fundamentally fascist in nature, despite being wrapped in their various flags of national security, anti-piracy profit protection, motherhood, and apple pie.

Anyone or anything that is an enabler of communications not willingly conforming to this model is subject to attack by authorities at a variety of levels — with the targets ranging from individuals like you and me, to unbiased enablers of organic knowledge availability like Google.

For all the patriotic frosting, the attacks on the Internet are really attacks on what has become popularly known as the 99%, deployed by the 1% powers who are used to having their own way and claiming the largest chunks of the pie, regardless of how many ants (that’s us!) are stomped in the process.

This is not a matter of traditional political parties and alliances. In the U.S., Democrats and Republican legislators are equally culpable in these regards.

This is a matter of raw power that transcends other ideologies, of the desire of those in control to shackle the Internet to serve their bidding, while relegating free communications for everyone else to the dustbin of history.

It is very much our leaders telling us to sit down, shut up, and use the Internet only in the furtherance of their objectives — or else.

To me, these are the fundamental characteristics of a fascist world view, perhaps not in the traditional sense but clearly in the ultimate likely impacts.

The Internet is one of the most important tools ever created by mankind. It certainly ranks with the printing press, and arguably in terms of our common futures on this tiny planet perhaps even with fire.

The question is, are we ready and willing to fight for the Net as it should be in the name of civil rights and open communications? Or will we sit back compliantly, happily gobble down the occasional treats tossed in our direction, and watch as the Internet is perverted into a monstrous distortion to control speech and people alike, rather than enabling the spread of freedom?

Back in that noisy computer room so many years ago, I couldn’t imagine that I was surrounded by machines and systems that would one day lead to such a question, and to concerns of such import.

The blossoming we’ve seen of the Internet was not necessarily easy to predict back then. But the Internet’s fascist future is much more clear, unless we fight now — right now — to turn back the gathering evil.

–Lauren–

Netflix Blocking, Google, Android, and Donald Trump

Netflix has now confirmed that they have begun blocking Android phones that have been rooted and/or even have unlocked bootloaders from downloading the Netflix app from the Google Play Store. While the app can still be sideloaded and still runs, we can reasonably assume that this is a temporary reprieve in those respects.

Let’s be crystal clear about what’s happening here. Google is moving their Android security framework in directions that will encourage popular app creators to broadly refuse installation on rooted/bootloader-unlocked phones.

This will inevitably put all users at greater risk by making it impossible in a practical sense for most concerned users to modify their phones for protection against malware, spyware, and government intrusions.

Despite the valiant efforts of Google toward making the Android environment a safe one, we are living in a time where a sociopathic fascist controls the federal government. We cannot tolerate total control of our phones being in the hands of any individual firms, even benign ones like Google.

I’ll have more to say about this. Much more.

–Lauren–

WARNING: Antivirus sites may be helping to SPREAD the current global malware ransomware (WannaCry) attack!

It has been reported that a researcher discovered that spread of the current worldwide ransomware attack can be halted after he registered the domain:

iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com

and built a sinkhole website that the malware could check. Reportedly, the malware does not continue spreading if it can reach this site.

HOWEVER, various antivirus websites/services are now reportedly adding that domain to their “bad domain” lists! If sites infected with this malware are unable to reach that domain due to their firewalls incorporating rules from antivirus sites that include a block for that domain, the malware will likely continue spreading across their vulnerable computers (which must still be patched in any case to avoid infection by similar exploits).

According to the current reports that I’m receiving, your systems MUST be able to access the domain above if this malware-blocking trigger is to be effective!
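Based on the published accounts, the kill-switch logic can be sketched roughly as follows. This is an illustrative reconstruction in Python, not the malware’s actual code, and the reachability test is simplified to a plain TCP connect; the takeaway is that blocking the domain makes the check fail, which is precisely the condition under which the worm reportedly keeps propagating:

```python
import socket

# The sinkhole domain registered by the researcher (from the reports above).
KILL_SWITCH = "iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com"

def kill_switch_active(domain: str = KILL_SWITCH,
                       connect=socket.create_connection) -> bool:
    """Return True if the kill-switch domain is reachable on port 80.
    The `connect` parameter exists so the check can be exercised
    without real network access."""
    try:
        with connect((domain, 80), timeout=5):
            return True
    except OSError:
        return False

def worm_keeps_spreading(reachable: bool) -> bool:
    # Reported behavior: the malware halts if it can reach the domain,
    # and continues spreading if it cannot -- which is why blocklist
    # rules that cut off the domain are counterproductive here.
    return not reachable
```

In other words, a firewall that imports a blocklist containing that domain makes the reachability check fail for every infected machine behind it, and by the reported logic the worm then keeps right on going.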

–Lauren–

Announcing the “Google Issues” Mailing List

UPDATE (12 May 2017): Readers have been asking me about this new list’s scope. To be clear, it is not an “announcement-only” list. Reader participation is very much encouraged, including Google-related questions. Thanks again!

– – –

Nobody can accuse me of starting too many Internet mailing lists. My existing lists (PRIVACY Forum, PFIR, and NNSquad) have been running continuously on the order of 26, 19, and 11 years respectively. Remarkably, I routinely get notes from subscribers who have been on these lists since their creation and claim to have been reading all of my associated messages — apparently without suffering any obvious brain damage to date.

Even relatively new readers will know by now that postings relating to Google have long been a very frequent component of these lists, and of my blog (which itself is around 14 years old).

The volume of Google-related postings seems likely to only be increasing. So with hopefully only relatively minor risk to the spacetime continuum, I have created a new mailing list to deal exclusively with all manner of Google-centric issues (and associated Alphabet, Inc. topics as well).

The subscription page (and archive information) for this new moderated mailing list is at:

https://vortex.com/google-issues

While a variety of postings specific to Google will continue to appear in my other mailing lists as well, this new list is my intended venue for additional wide-ranging discussions and other postings related to Google and Alphabet, that I believe will be of ongoing interest — much of which will not appear in my other lists.

Google of course has no role in the operation of my lists or blog, and while I have consulted to them in the past I am not currently doing so — all of my opinions expressed in my lists and other venues are mine alone.

I’m looking forward to seeing you over on the Google Issues mailing list!

Thanks very much.

–Lauren–

Google’s Achilles’ Heel

A day rarely passes when somebody doesn’t send me a note asking about some Google-related issue. These are usually very specific cases — people requesting help for some particular Google product or often about account-related issues. Sometimes I can offer advice or other assistance, sometimes I can’t. Occasionally in the process I get pulled into deeper philosophical discussions regarding Google.

That’s what happened a few days ago when I was asked the straightforward question: “What is Google’s biggest problem?”

My correspondent apparently was expecting me to reply with a comment about some class of technical issues, or perhaps something about a security or privacy matter. So he was quite surprised when I immediately suggested that Google’s biggest problem has nothing per se to do with any of those areas at all.

Google’s technology is superb. Their privacy and security regimes are first-rate and world class. The teams that keep all those systems going are excellent, and I’ve never met a Googler that I didn’t like (well … hardly ever). It’s widely known that I take issue with various aspects of Google’s user support structure and user interface designs, but these are subject to improvement in relatively straightforward ways.

No, Google’s biggest problem isn’t in any of these areas.

Ironically, while Google has grown and improved in so many ways since its founding some 18 years ago, the big problem today remains essentially the same as it was at the beginning.

To use the vernacular, Google’s public relations — their external communications — can seriously suck.

That is not to suggest that the individuals working Google PR aren’t great people. The problem with Google PR is — in my opinion — a structural, cultural dilemma, of the sort that can be extremely difficult for any firm to significantly alter.

This is a dangerous state of affairs, both for Google and its users. Effective external communications ultimately impact virtually every aspect of how individuals, politicians, and governments view Google services and Google itself more broadly. In an increasingly toxic political environment around the world, Google’s institutional tendency — toward minimal communications in so many contexts — creates an ideal growth medium for Google adversaries and haters to fill the perceived information vacuum with conspiracy theories and false propaganda.

For example, I recently posted Quick Tutorial: Deleting Your Data Using Google’s “My Activity” — which ended up appearing in a variety of high readership venues. Immediately I started seeing comments and receiving emails questioning how I could possibly know that Google was telling the truth about data actually being deleted, in many cases accompanied by a long tirade of imagined grievances against Google. “How can you trust Google?” they ask.

As it happens I do trust Google, and thanks to my period of consulting to them several years ago, I know how these procedures actually operate and I know that Google is being accurate and truthful. But beyond that general statement all I can say is “Trust me on this!”

And therein lies the heart of the dilemma. Only Google can speak for Google, and Google’s public preference for generalities and vagueness on many policy and technical matters is all too often much deeper than necessary prudence and concerns about “Streisand Effect” blowbacks would reasonably dictate.

Google’s external communications problem is indeed their “Achilles’ Heel” — a crucial quandary that if left unchanged will increasingly create the opportunity for damage to Google and its users, particularly at this time when misinformation, government censorship, and other political firestorms are burning widening paths around the globe.

Institutionally entrenched communications patterns cannot reasonably be changed overnight, and a great deal of business information is both fully appropriate and necessary to keep confidential.

But in the case of Google, even a bit more transparency in external communications could do wonders, by permitting the outside world to better understand and appreciate the hard work and diligence that makes Google so worthy of trust — and by leaving the Google haters and their lying propaganda in the dust.

–Lauren–

YouTube’s Dangerous and Sickening Cesspool of “Prank” and “Dare” Videos

UPDATE (December 17, 2017): A YouTube Prank and Dare Category That’s Vast, Disgusting, and Potentially Deadly

– – –

Before we delve into a particularly sordid layer of YouTube and its implications to individuals, society at large, and Google itself, I’ll make my standard confession. Overall, I’m an enormous fan of YouTube. I consider it to be one of the wonders of the 21st century, a seemingly limitless wellspring of entertainment, education, nostalgia, and all manner of other positive traits that I would massively miss if YouTube were to vanish from the face of the Earth. I know quite a few of the folks who keep YouTube running at Google, and they’re all great people.

That said, we’re increasingly finding ourselves faced with the uncomfortable reality that Google has seemingly dragged its collective feet when it comes to making sure that its own YouTube Terms of Service are equitably and appropriately enforced.

I’ve talked about an array of aspects relating to this problem over the years — including Content ID and copyright issues; YouTube channel suspensions, closures, and appeal procedures; and a long additional list that I won’t get into here again right now, other than to note that at Google/YouTube scale, none of this stuff is trivial to deal with properly, to say the least.

Recently the spotlight has been on YouTube’s hate speech problems, which I’ve discussed in What Google Needs to Do About YouTube Hate Speech and in a variety of other posts. This issue in particular has been in the news relating to the 2016 election, and due to a boycott of YouTube by advertisers concerned about their ads appearing alongside vile hate speech videos that (by any reasonable interpretation of the YouTube Terms of Service) shouldn’t be permitted on the platform in the first place.

But now I’m going to lift up another damp rock at YouTube and shine some light underneath — and it’s not pretty under there, either.

The issue in focus today is YouTube’s vast cornucopia of so-called “prank” – “dare” – “challenge” (PDC) videos, which range from completely innocuous and in good fun, to an enormous array of videos portraying vile, dangerous, harmful, and often illegal activities.

You may never have experienced this particular YouTube subculture. YouTube’s generally excellent recommendation engine tends to display new videos that are similar to the videos that you’ve already viewed, so unless you’ve looked for them, you could be completely forgiven for not even realizing that the entire PDC YouTube world even existed. But once you find them, YouTube will make sure that you’re offered a bountiful supply of new ones on a continuing basis.

This category of YouTube videos was flung into the mainstream news over the last few days, with a pair of egregious (but by no means isolated) examples.

In one case, a couple lost custody of young children due to an extensive series of horrific, abusive “prank” videos targeting those children, which the couple had been publishing on YouTube over a long period. They’re now arguing that the abuse was “faked” — that the children agreed to do the videos, and so on.

But those claims don’t change the outcome of the equation — not in the least. First, young children can’t give meaningful, independent consent in such situations.

And here’s a key point that applies across the entire continuum of these YouTube videos — it usually doesn’t matter whether an abusive prank is faked or not. The negative impact on viewers is the same either way. Even if there is a claim that a vile “prank” was faked, how are viewers to independently judge the veracity of such a statement in many cases?

An obvious example category includes the YouTube “shock collar” prank/challenge videos. What, you didn’t know about those? Just do a YouTube search for:

shock collar

and be amazed. These are at the relative low end of the spectrum — you’re not terribly likely to be seriously injured by a shock collar, but there are indeed some nightmarish exceptions to that generalization.

So in this specific category you’ll find every imaginable combination of people “pranking” each other, challenging each other, and otherwise behaving like stupid morons with electricity in contact with their bodies.

Are all of these videos legit? Who the hell knows? I’d wager that some are faked but that most are real — but again, as I noted above, whether or not such videos are faked isn’t the real issue. Potential copycats trying to outdo them won’t know or care.

Even if we consider the shock collar videos to be on the lower end of the relative scale under discussion, it quickly becomes obvious why such videos escalate into truly horrendous activities. Many of these YouTube channel operators openly compete with each other (or at least, claim to be competing — they could be splitting their combined monetization revenue between themselves for all we can tell from the outside) in an ever accelerating race to the bottom, with ever more vile and dangerous stunts.

While one can argue that we’re often just looking at stupid people voluntarily doing stupid things to each other, many of these videos still clearly violate Google’s Terms of Service, and it appears, anecdotally at least, that the larger your subscriber count, the less likely it is that your videos will be subjected to a rigorous interpretation of those terms.

And then we have another example that’s currently in the news — the YouTube channel operator who thought it would be a funny “prank” to remove stop signs from intersections, and then record the cars speeding through. Not much more needs to be said about this, other than the fact that he was ultimately arrested and charged with a felony. Now he’s using his YouTube channel to try to drum up funds for his lawyers.

One might consider the possibility that since he was arrested, that video might serve as an example of what others shouldn’t do. But a survey of “arrested at the end of doing something illegal” videos and their aftermaths suggests that the opposite result usually occurs — other YouTube channel operators are instead inspired to try to replicate (or better yet from their standpoints, exceed) those illegal acts — without getting caught (“Ha ha! You got arrested, but we didn’t!”).

As in the case of YouTube hate speech, the key here is for Google to seriously and equitably apply their own Terms of Service, admittedly a tough (but doable!) job at the massive scale that Google and YouTube operate.

The consequences of failing to act proactively and effectively in this area are too terrible to risk. Non-USA governments are already moving to impose potentially draconian restrictions and penalties relating to YouTube videos. Even inside the USA, government crackdowns are possible, since First Amendment protections are not absolute — especially if the existing Terms of Service are seen to be largely paper tigers.

These problems are by no means unique to YouTube/Google. But they’ve been festering below the surface at YouTube for years, and the public attention that they’re now receiving means that the status quo is no longer tenable.

Especially for the sake of the YouTube that I really do love so much, I fervently hope that Google starts addressing these matters with more urgency and effectiveness, rather than waiting for governments to begin disastrously dictating the rules.

–Lauren–