Google’s Horrible Plan to Flood Your Gmail with Political Garbage

UPDATE (25 January 2023): Google has announced that it will terminate this program at the end of this month (31 January 2023).

UPDATE (11 August 2022): The Federal Election Commission has now officially approved this Google plan.

UPDATE (3 August 2022): How to Fix Google’s Gmail Political Spam Bypass Plan

UPDATE (3 August 2022): A Federal Election Commission Draft APPROVES this plan. See: https://www.fec.gov/files/legal/aos/2022-14/202214.pdf

UPDATE (19 July 2022): Public comments on this proposal can now be viewed here on the Federal Election Commission site.

UPDATE (14 July 2022): The Federal Election Commission today extended the public comment period for this issue from a deadline of July 16 to a new ending date of August 5th. I have updated this post accordingly.

– – – – – –

Google is backed into a corner, and Google’s attempt to get out of this corner could be very bad for Gmail users. You have just a few weeks remaining to make your opinion known about this. Please read on.

While Google studiously avoids political bias, the GOP has been bitching for ages, making the ludicrous claim that Google purposely directs GOP political emails into Gmail users’ spam folders. The GOP asserts that Google sends more political emails from Republicans than from Democrats to the spam jail, and that this is because (the GOP claims) Google hates Republicans.

Not true. The reason more GOP political emails end up in spam is that spam is exactly where most Gmail users want those emails to be.

While both Democrats and Republicans are guilty of sending unwanted, unsolicited political emails, the fact is that Republicans send more of them, and their emails tend to be more insidious, including traps like automatic recurring payments after supposedly one-time donations, and claims (like repeating Trump’s Big Lie about the 2020 election) that are misleading at best and often ludicrous and dangerous. This crap deserves to be in spam.

In an attempt to get out from under what are mostly GOP complaints, Google has asked the Federal Election Commission to approve a plan that would exempt from spam detection emails from authorized candidate committees, political party committees, and leadership political action committees registered with the FEC, as long as they abide by Gmail’s rules on phishing, malware, and illegal content.

There’s stuff in there about notifying users the first time they get one of these emails from a campaign so that they can (supposedly) opt out, along with other details. It doesn’t matter. This plan will bury many Gmail users under a mountain of stinking swill.

Google’s plan will never work, for a couple of reasons.

One is that campaign and other political mailings multiply and spread like a hideous plague. I’ve had the unpleasant experience of helping a Gmail user clean up the mess created when they subscribed to a single political website, in this case, yes, a Trump site that later was found to be soliciting funds for one purpose but actually using them for something else entirely. Big surprise, huh? 

In almost no time at all, this had metastasized into political mailings from affiliated groups spouting lies and begging for money, mixed in with all manner of political-appearing phishing attempts and other scams. These were showing up in his Gmail literally every few minutes. An utter nightmare. This doesn’t happen only with the GOP — though they’re the larger culprit in this saga.

The second reason that the Google plan will fail is that it will never satisfy the GOP. They’ve already proposed legislation that would make it illegal to send political email into spam. They want you to see all of it, every single word, whether you want to see it or not, whether you ever asked to see it or not.

The bottom line is that the Google plan will result in your Gmail inbox being flooded with unsolicited political garbage that you’ll need to sort through and try (good luck!) to unsubscribe from. Whether you’re a Democrat, a Republican, an Independent, or something else entirely, this probably isn’t how you really want to be spending your days.

Again, I realize that Google has been unfairly forced into this position, but that can’t and doesn’t give this plan a pass.

The Federal Election Commission is now allowing for public comments until August 5th regarding this terrible idea. You can email your comments to:

ao@fec.gov

Please note that such emails may become part of the publicly inspectable record related to this issue.

It’s been many years since I’ve seen a worse proposal related to email spam, and it’s very unfortunate that Google has been forced into this situation. But that’s where we are, so speak now or forever hold your peace.

–Lauren–

My Thoughts About Google’s New Blog Post Regarding Health-Related Data Privacy

In my very recent post:

“Internet Users’ Safety in a Post-Roe World”

I expressed concerns regarding how Internet and telecommunications firms would protect women’s and others’ data in a post-Roe v. Wade world, as anti-abortion states begin demanding health-related data.

Google has now briefly blogged about this, at:

“Protecting people’s privacy on health topics”

The most notable part of the Google post is the announcement of this important change:

“Location History is a Google account setting that is off by default, and for those that turn it on, we provide simple controls like auto-delete so users can easily delete parts, or all, of their data at any time. Some of the places people visit — including medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others — can be particularly personal. Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.”

I definitely endorse this change, which aligns with the suggestions regarding the handling of sensitive location data in my blog post referenced above. Thank you, Google, for taking this crucial action. This is an excellent start.

However, not yet publicly addressed by Google are the issues I noted regarding how these sensitive topics in search histories (both as stored by Google itself and/or on browsers) could also be abused by anti-abortion states hell-bent on pursuing women and others as part of those states’ extremist agendas, including in many instances abortion bans without exceptions for rape and incest.

Again, I praise Google for their initial step regarding location data, but there’s much more work still to do!

–Lauren–

Social Media Sites Should Be Required to ID Many Users

Greetings. I write the following with no joy whatsoever.

I have reluctantly come to the conclusion that it may be necessary to legislate that any social media user who wishes to have their posts seen by more than a small handful of users must first be authenticated, using government IDs, by any significantly sized site.

This identification information would be retained by the firms so long as the users are active and for some specified period afterwards. Users would *not* be required to use their real names for posts, but the linkages to their actual IDs would be available to authorities in cases of abuse under appropriate, precisely defined circumstances, subject to court oversight. 

This would include situations where a post may be forwarded to larger audiences by others, which will be a technical challenge to implement.
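For illustration only, here is a minimal Python sketch of the escrow model I’m describing. Every name and structure in it is hypothetical, and any real system would require far stronger protections around the stored linkage:

    # Hypothetical sketch of pseudonymous posting with escrowed identity.
    # This reflects no real platform's implementation.
    from dataclasses import dataclass

    @dataclass
    class EscrowRecord:
        pseudonym: str           # the only name ever shown publicly
        verified_identity: str   # established via a government ID check

    ESCROW = {}  # pseudonym -> EscrowRecord; held by the platform, never public

    def register(pseudonym, verified_identity):
        ESCROW[pseudonym] = EscrowRecord(pseudonym, verified_identity)

    def unmask(pseudonym, court_order_present):
        """Release the linkage only under court oversight, as proposed above."""
        if not court_order_present:
            return None
        record = ESCROW.get(pseudonym)
        return record.verified_identity if record else None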

The ability to reach large audiences on today’s Internet should be a privilege, no longer a right.

It is very sad that it has come to this.

–Lauren–

Internet Users’ Safety in a Post-Roe World

UPDATE (1 July 2022): My Thoughts About Google’s New Blog Post Regarding Health-Related Data Privacy

UPDATE (24 June 2022): As expected, the U.S. Supreme Court today overturned Roe v. Wade, bringing the issues discussed below into immediate focus.

TL;DR: By no later than early this July, it is highly probable that a nearly half-century nationwide precedent providing women with abortion-related protections will be partly or completely reversed by the current U.S. Supreme Court (SCOTUS). This sea change, especially impacting women’s rights but with even broader implications now and into the future, would immediately and dramatically affect many policy and operational aspects of numerous important Internet firms. Unless effective planning for this situation takes place imminently, the safety of women, the well-being of Internet users more generally, and crucial services of these firms themselves will in all likelihood be at risk in critical respects.

– – – – – –

Since the recent leak of a SCOTUS draft decision that would effectively eliminate the national protections of Roe v. Wade, and subsequent remarks by some of the associated justices, it is now widely assumed that within a matter of days or weeks a partial or total reversal of Roe will revert the vast majority of abortion-related matters back to the individual states. 

Many politicians and states have already indicated their plans to immediately ban most or even all abortions, in some cases including those related to rape and incest, and even those needed to preserve the health of the woman, with only narrow exceptions to save mothers’ lives. Some of these laws may effectively criminalize miscarriages. Some may introduce both civil and criminal penalties related to abortion, possibly bringing homicide or murder charges against involved parties, potentially including the pregnant women.

Various states plan to try to extend their bans and civil/criminal penalties to anyone who “participates” in making abortions possible, even if they are in other states, as when a woman travels to a different state for an abortion (the legality of one state attempting to affect actions in another state in this manner is unclear, but with today’s SCOTUS no possibilities can be safely ignored). Actions by some states to ban obtaining, ordering, or providing various abortion drugs are also already being enacted. Note that SCOTUS has to date permitted the Texas mechanism for suing abortion providers to continue, which has largely blocked abortions in that state.

“Trigger laws” already in place in some states along with the statements of state legislators indicate that near total or total abortion bans will immediately become law in various states if the anticipated SCOTUS decision is announced. 

Anti-abortion and affiliated factions are already planning — using the reasoning of the expected SCOTUS decision as a foundation — for follow-up actions pushing for national abortion bans, limits on contraception, banning gay marriage, rolling back LGBTQ+ rights, and related activities. U.S. Senate Republican Leader Mitch McConnell has recently proclaimed that a nationwide abortion ban is possible if the GOP retakes the House, Senate, and presidency. 

These events are creating what could become an existential threat to many Internet users and to key aspects of many Internet firms’ policy and operational models.

Given the sweeping and unprecedented scope of the oppressive laws that would be unleashed on pregnant women and anyone else who becomes involved with their healthcare, especially given the civil and even criminal penalties being written into these laws, it seems inevitable that demands for access to data in the possession of many Internet and telecommunications firms relating to user activities will drastically increase.

Search histories (both server and browser) and potentially even stored email data could be sought looking for queries about abortion services, abortion drugs, and numerous other related topics. Location data (both targeting specific users, and data from broader geofence warrants associated with, for example, abortion providers) could be demanded. A range of other resulting data demands are also highly probable. It is also expected that there would be even more calls for government-mandated backdoors into end-to-end encrypted messaging systems.

Women may put their health and lives at risk by not seeking necessary health services, for fear of these abortion laws. Women’s partners, other family members, friends, associates, and healthcare providers may reasonably believe that their livelihoods or freedom may be compromised if they are found to be providing or aiding in any manner related to abortion services.

Many users may cease using Internet and various telecommunications services in the manners that they previously would have, out of concerns that their related activities and other data could ultimately fall into the hands of state or other officials, and then be used to track and potentially prosecute them under these abortion-related laws.

This situation is a Trust & Safety emergency of the first order for all of these firms.

While some firms already provide users a range of search/location history control tools, I would assert that most users do not understand them and are frequently unaware of how they are actually configured.

I believe that the best mechanism at this time to help protect women and affiliated others who would be victimized by these state actions is to not save the associated data in the first place, unless a user decides that they desire to have that data saved.

One possibility would be for these firms to proactively offer users the option to not save (or alternatively, very quickly expunge) their search, location, and other user activity data associated with abortion and important related issues — both on company servers, and within browser histories if practicable. Users who wished to have any of these categories of data activity saved as before could choose not to exercise this option.

Unfortunately, a database of users who opt out of having this data saved may itself become an attractive target for data demands by parties who may assume that it mainly represents individuals attempting to hide activities related to abortions. This possibility may argue for making the preferred default behavior not to save this data at all, while offering users the option of saving it if they so choose.
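As a minimal sketch of what that default-off behavior might look like in code (the category names and storage hook here are invented purely for illustration):

    # Hypothetical sketch: sensitive activity is dropped before storage
    # unless the user has explicitly opted in to saving it.
    SENSITIVE_CATEGORIES = {"reproductive-health", "addiction-treatment",
                            "domestic-violence-support"}

    def maybe_store(user_prefs, category, record, store):
        if category in SENSITIVE_CATEGORIES and not user_prefs.get(
                "save_sensitive_activity", False):  # default: do NOT save
            return          # the record is never written anywhere
        store(record)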

While these changes could be part of a desirable broader effort to give users more control over which specific aspects of their “personally sensitive” activity data are saved, this would of course be a significantly larger project, and time is of the essence given the imminent SCOTUS ruling. 

Obviously I am not here addressing the detailed legal considerations or potential technical implementation challenges of the proposals above, and there may exist other ways to quickly ameliorate the risks that I’ve described, though practical alternatives are not obvious to me at present.

However, I do feel strongly that the status quo regarding user activity data in a post-Roe environment could create a nightmarish situation for many women and other Internet users, and be extraordinarily challenging for firms from Trust & Safety, policy, and operational standpoints.

I strongly recommend that actions be taken immediately to protect Internet users from the storm that will likely arrive very shortly indeed.

–Lauren–

Big Tech and the Internet Are Not Our Enemies

It seems like only a few years ago, the entire world was enamored of Big Tech and the Internet — and pretty much everyone was trying to emulate their most successful players. But now, to watch the news reports and listen to the politicians, the Internet and Big Tech are Our Enemies, responsible for everything from mass shootings to drug addiction, from depression to child abuse, and seemingly most other ills that any particular onlooker finds of concern in our modern world.

The truth is much more complex, and much more difficult to comfortably accept. For the fundamental problems we now face are not the fault of technology in any form, they are fully the responsibility of human beings. That is, as Pogo famously said, “We have met the enemy, and he is us.”

What’s more, most users of social media and other Internet services don’t realize how much they have to lose as a result of the often politically motivated faux “solutions” being proposed (and in some cases already passed into law) that could literally cripple many of the sites that billions of us have come to depend upon in our daily lives.

Hate speech, for example, was not invented by the Internet. While it can certainly be argued that social media increased its distribution, the intractable nature of the problem is clearly demonstrated by calls from the Right to leave most hate speech available as legal speech (at least in the U.S. — other countries have different legal standards regarding speech), while the Left (and many other countries) want hate speech removed even more rapidly. Both sides propose draconian penalties for failures to comply with their completely opposite demands.

In the U.S., some states have already passed laws explicitly prohibiting Big Tech from removing wide ranges of speech, much of which would be considered hateful and/or outright disinformation. These laws are currently unenforced due to court actions, though not yet on a permanent basis.

The utter chaos that would be triggered by enforcement of such laws and associated attempts to undermine crucial Communications Decency Act Section 230 protections is obvious. If firms are required by law not to remove speech that they consider to be dangerous misinformation or hate speech, they will almost certainly find themselves cut off from key service providers that they need to stay in operation, who won’t want to keep doing business with them. Perhaps laws would then be passed to try to require that those providers not cut off social media firms in such cases. But what of advertisers who do not wish to be associated with vile content? Laws to force them to continue advertising on particular sites are unlikely in the extreme.

Similar dilemmas apply to most other areas of Big Tech and the Internet that are now the subject of seemingly endless condemnation. There are calls for end-to-end encryption of chat systems and other direct messaging to protect private conversations from outside surveillance and tampering — but there are simultaneous demands that governments be able to see into these conversations to try to detect child abuse or possible mass shooter events before they occur. Another enormous category of conflicting demands will arise as the U.S. Supreme Court drastically scales back fundamental protections for women.

Even if encryption were banned (a ban that we know would never be anywhere near 100% effective), the sheer scale of the Internet in general, and of social media in particular, are such that no currently imaginable combination of human beings and artificial intelligence could usefully scan and differentiate false positives from genuine threats among the nearly inconceivably enormous volumes of data involved. False positives have real costs — they divert scarce resources from genuine threats where those resources are desperately needed.

Big Tech now finds itself firmly between the proverbial rock and the hard place. Governments, politicians, and others are demanding changes that in many cases aren’t only in 180 degree opposition (“Take down violating posts faster! No, leave them up — taking them down is censorship!”), but are also calling for technologically impractical approaches to monitoring social media (both public postings and private messages/chats) at scale. Many of these demands would lead inevitably to requiring virtually all social media posts to be pre-moderated and pre-approved before being permitted to be seen publicly. Every public post. Every private chat. Every live stream throughout the totality of its existence.

Only in such or similar ways could social media firms meet the demands being heaped upon them, even if the inherent conflicts in demands from different groups and political factions could somehow be harmonized, and even leaving aside associated privacy concerns.

But this is actually entirely academic at the kinds of scales at which users currently post to social media. Such pre-moderation is not possible in any kind of effective way without drastically reducing the total volume of user content that is made available.

This would leave Big Tech with only one likely practical path forward. Firms would need to drastically and dramatically reduce the amount of UGC (User Generated Content) that is submitted and publicly posted. All manner of postings — written, video, audio, prerecorded content, and live streams, virtually everything that any user might want other users to see — would need to be curtailed. A tiny percentage compared with what is seen today might continue to be publicly surfaced after the required pre-moderation, but this would be a desert ghost town compared to today’s social media landscape.

There are some observers who upon reading this might think to themselves, “So what? To hell with social media! The Internet and the world will be better without it.” But this is fundamentally wrong. The ability of ordinary people to communicate with many others — without having to channel through traditional mass media gatekeepers — has been one of the most essential liberating aspects of the Internet. The appropriate responses to the abusive ways that some persons have chosen to use these capabilities do not include permitting governments to decimate a crucial aspect of the Internet’s empowerment of individuals.

Might governments ultimately expand their monitoring edicts to include email? Will attempts to ban VPNs become mainstream around the planet? There’s no reason to assume that governments demanding mass data surveillance would ultimately hesitate in any of these respects.

Of course, if this is what voters really want, it’s what their politicians will likely provide them. Possible alternatives that might help to limit some abuses — one suggestion at least worth discussing is requiring social media firms to confirm the identities of users posting to large groups before such postings are visible — may not be seriously considered. We shall see.

Unfortunately, most users of the Internet and social media are ill-informed about the realities of these situations. Most of what they are seeing on these topics is political rhetoric devoid of crucial technological contexts. They are purposely kept uninformed regarding the ramifications of the false “remedies” that some politicians and haters of Big Tech are spewing forth daily.

We are on the cusp of having major parts of our daily lives seriously disrupted by political demands that would wither away many of the services on the very sites that are so important to us all.

–Lauren–

How to Better Solve YouTube’s “Dislike Count” Problem

The controversy over the recently announced decision by YouTube to remove publicly viewable “Dislike” counts from all videos is continuing to grow. Many YT creators feel that the loss of a publicly viewable Like/Dislike ratio will be a serious detriment. I know that I consider that ratio useful.

There are some good arguments by Google/YouTube for this action, particularly relating to harassment campaigns targeting the Dislikes on specific videos. However, I believe that YouTube has gone too far in this instance, when a more nuanced approach would be preferable.

In particular, my view is that it is reasonable to remove the publicly viewable Dislike counts from videos by default, but that creators should be provided with an option to re-enable those counts on their specific videos (or on all of their videos) if they wish to do so.

With YouTube removing the counts by default, YouTube creators who are not aware of these issues will be automatically protected. But creators who feel that showing Dislike counts is good for them could opt to display them. Win-win!

–Lauren–

Apple Backdoors Itself

UPDATE (September 3, 2021): Apple has now announced that “based on feedback” they are delaying the launch of this project to “collect input and make improvements” before release.

– – –

Apple’s newly revealed plan to scan users’ Apple devices for photos and messages related to child abuse is actually fairly easy to explain from a high-level technical standpoint.

Apple has abandoned their “end-to-end” encrypted messaging promises. They’re gone. Poof! Flushed down the john. Because a communication system that supposedly is end-to-end encrypted — but has a backdoor built into user devices — is like being sold a beautiful car and discovering after the fact that it doesn’t have any engine. It’s fraudulent.

The depth of Apple’s betrayal of its users is not specifically in the context of dealing with child abuse — which we all agree is a very important issue indeed — but that by building any kind of backdoor mechanism into their devices they’ve opened the legal door to courts and other government entities around the world to make ever broader demands for secret, remote access to the data on your Apple phones and other devices. And even if you trust your government today with such power — imagine what a future government in whom you have less faith may do.

In essence, Apple has given away the game. It’s as if you went into a hospital to have your appendix removed, and when you awoke you learned that they also removed one of your kidneys and an eye. Surprise!

There is no general requirement that Apple (or other firms) provide end-to-end crypto in their products. But Apple has routinely proclaimed itself to be a bastion of users’ privacy, while simultaneously being highly critical of various other major firms’ privacy practices. 

That’s all just history now, a popped balloon. Apple hasn’t only jumped the shark, they’ve fallen into the water and are sinking like a stone to the bottom.

–Lauren–

Keep Governments Away from Social Media “Misinformation Control”

As the COVID “Delta” variant continues its spread around the globe, the Biden administration has deployed something of a basketball-style full-court press against misinformation on social media sites. That its intentions are laudable is evident and not at issue. Misinformation on social media and in other venues (such as various cable “news” channels) definitely plays a major role in vaccine hesitancy — though it appears that political and peer allegiances play a significant role as well, even for persons who have accurate information about the available vaccines.

Yet good intentions by the administration do not necessarily always translate into optimum statements and actions, especially in an ecosystem as large and complex as social media. When President Biden recently asserted that Facebook is “killing people” (a statement that he later walked back) it raised many eyebrows both in the U.S. and internationally.

I implied above that the extent to which vaccine misinformation (as opposed to or in combination with other factors) is directly related to COVID infections and/or deaths is not a straightforward metric. But we can still certainly assert that Facebook has traditionally been an enormous — likely the largest — source of misinformation on social media. And it is also true, as Facebook strongly retorted in the wake of Biden’s original remark, that Facebook has been working to reduce COVID misinformation and increase the viewing of accurate disease and vaccine information on their platform. Other firms such as Twitter and Google have also been putting enormous resources toward misinformation control (and its subset of “disinformation” — which is misinformation being purposely disseminated with the knowledge that it is false).

But for those both inside and outside government who assert that these firms “aren’t doing enough” to control misinformation, there are technical realities that need to be fully understood. And key among these is this: There is no practical way to eliminate all misinformation from these platforms. It is fundamentally impossible without preventing ordinary users from posting content at all — at which point these platforms wouldn’t be social media any longer.

Even if it were possible for a human moderator (or humans in concert with automated scanning) to pre-moderate every single user posting before permitting it to be seen and/or shared publicly, differences in interpretation (“Is this statement in this post really misinformation?”), errors, and other factors would mean that some misinformation is bound to spread — and that can happen very quickly and in ways that would not necessarily be easily detected either by human moderators or by automated content scanning systems. But this is academic. Without drastically curtailing the amount of User Generated Content (UGC) being submitted to these platforms, such pre-moderation models are impractical.

Some other statements from the administration also triggered concerns. The administration appeared to suggest that the same misinformation standards should be applied by all social media firms — a concept that would obviously eliminate the ability of the Trust & Safety teams at these firms to make independent decisions on these matters. And while the administration denied that it was dictating to firms what content should be removed as misinformation, they did say that they were in frequent contact with firms about perceived misinformation. Exactly what that means is uncertain. The administration also said that a short list of “influencers” were responsible for most misinformation on social media — though it wasn’t really apparent what the administration would want firms to do with that list. Disable all associated accounts? Watch those accounts more closely for disinformation? I certainly don’t know what was meant.

But the fundamental nature of the dilemma is even more basic. For governments to become involved at all in social media firms’ decisions about misinformation is a classic slippery slope, for multiple reasons.

Even if government entities are only providing social media firms with “suggestions” or “pointers” to what they believe to be misinformation, the oversized influence that these could have on firms’ decisions cannot be overestimated, especially when some of these same governments have been threatening these same firms with antitrust and other actions.

Perhaps of even more concern, government involvement in misinformation content decisions could potentially undermine the currently very strong argument that these firms are not subject to First Amendment considerations, and so are able to make their own decisions about what content they will permit on their platforms. Loss of this crucial protection would be a big win for those politicians and groups who wish to prevent social media firms from removing hate speech and misinformation from their platforms. So ironically, government involvement in suggesting that particular content is misinformation could end up making it even more difficult for these firms to remove misinformation at all!

Even if you feel that the COVID crisis is reason enough to endorse government involvement in social media content takedowns, please consider for a moment the next steps. Today we’re talking about COVID misinformation. What sort of misinformation — there’s a lot out there! — will we be talking about tomorrow? Do we want the government urging content removal about various other kinds of misinformation? How do we even define misinformation in widely different subject areas?

And even if you agree with the current administration’s views on misinformation, how do you know that you will agree with the next administration’s views on these topics? If you want the current administration to have these powers, will you be agreeable to potentially a very different kind of administration having such powers in the future? The previous administration and the current one have vastly diverging views on a multitude of issues. We have every reason to expect at least some future administrations to follow this pattern.

The bottom line is clear. Even with the best of motives, governments should not be involved in content decisions involving misinformation on social media. Period.

–Lauren–

We Have Met the Ransomware Enemy, and It Is (Partly) Us!

Ransomware is currently a huge topic in the news. A crucial gasoline pipeline shuts down. A major meat processor is sidelined. It almost feels as if new ransomware attacks are announced every few days, and there are certainly many such attacks that are never made public.

We see commentators claiming that ransomware attacks are the software equivalent of 9/11, and that perpetrators should be treated as terrorists. Over on one popular right-wing news channel, a commentator gave a literal “thumbs up” to the idea that ransomware perpetrators might be assassinated.

The Biden administration and others are suggesting that if Russia’s Putin isn’t responsible for these attacks, he at least must be giving his tacit approval to the ones apparently originating there. For his part, Putin is laughing off such ideas.

There clearly is political hay to be made from linking ransomware attacks to state actors, but it is certainly true that ransomware attacks can potentially have much the same devastating impacts on crucial infrastructure and operations as more “traditional” cyberattacks.

And while it is definitely possible for a destruction-oriented cyberattack to masquerade as a ransomware attack, it is also true that the vast majority of ransomware attacks appear to be aimed not at actually causing damage, but at the rather more prosaic purpose of extorting money from the targeted firms.

All this having been said, there is actually a much more alarming bottom line. The vast majority of these ransomware attacks are not terribly sophisticated in execution. They don’t need to depend on armies of top-tier black-hat hackers. They usually leverage well-known authentication weaknesses, such as corporate networks accessible without robust 2-factor authentication techniques, and/or firms’ reliance on outmoded firewall/VPN security models.

Too often, we see that a single compromised password gives attackers essentially unlimited access behind corporate firewalls, with predictably dire results.

The irony is that the means to avoid these kinds of attacks are already available — but too many firms just don’t want to make the effort to deploy them. In effect, their systems are left largely exposed — and then there’s professed surprise when the crooks simply saunter in! There are hobbyist forums on the Net that, having already implemented these security improvements, are now actually better protected than many major corporations!

I’ve discussed the specifics many times in the past. The use of 2-factor (aka 2-step) authentication can make compromised username/password combinations far less useful to attackers. When FIDO/U2F security keys are properly deployed to provide this authentication, successful fraudulent logins tend rapidly toward nil.
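To give a sense of how little machinery a basic second factor involves, here is a minimal Python sketch of RFC 6238 TOTP code verification, shown only because it fits in a few lines; properly deployed FIDO/U2F security keys, as noted above, are the stronger, phishing-resistant option:

    # Minimal RFC 6238 TOTP sketch using only the standard library.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, for_time=None, digits=6, step=30):
        """Compute the TOTP code for a base32-encoded shared secret."""
        counter = int((for_time if for_time is not None else time.time()) // step)
        key = base64.b32decode(secret_b32, casefold=True)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    def verify(secret_b32, submitted, step=30):
        """Accept codes from adjacent time steps to tolerate clock drift."""
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
                   for i in (-1, 0, 1))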

Combine these security key models with “zero trust” authentication, such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), and security is even further enhanced, since an attacker who simply penetrates a firewall or compromises a VPN no longer finds themselves with largely unfettered access to targeted internal corporate resources.

These kinds of security tools are available immediately. There is no need to wait for government actions or admissions from Putin! And sooner rather than later, firms and institutions that continue to stall on deploying these kinds of security methodologies will likely find themselves answering ever more pointed questions from their stockholders or other stakeholders, demanding to know why these security improvements weren’t already made *before* these organizations were targeted by new highly publicized ransomware attacks!

–Lauren–

DeJoy Is Hell-Bent on Wrecking the Postal Service — and Maybe Your Life

While we’re all still reeling from the recent horrific, tragic, and utterly preventable incidents of mass shooting murders, inside the D.C. beltway today events are taking place that could put innumerable medically challenged Americans at deep risk — and the culprit is Louis DeJoy, the Postal Service (USPS) Postmaster General and Trump megadonor.

His 10-year plan for destroying the USPS, by treating it like his former for-profit shipping logistics business rather than the SERVICE it was intended to be, was released today, along with a flurry of self-congratulatory official USPS tweets that immediately attracted massive negative replies, most of them demanding that DeJoy be removed from his position. Now. Right now!

I strongly concur with this sentiment.

Even as first class and other mail delays have already been terrifying postal customers dependent on the USPS for critical prescription medications and other crucial products, DeJoy’s plan envisions even longer mail delays — including additional days of delay for delivery of local first class mail, banning first class mail from air shipping, raising rates, cutting back on post office hours, and — well, you get the idea.

Fundamentally the plan is simple. Destroy the USPS via the “death by a thousand cuts” — leaving to slowly twist in the wind those businesses and individuals without the wherewithal to rely on much more expensive commercial carriers.

While President Biden has taken some initial steps regarding the USPS by naming several new appointees to the USPS board of governors (who need to be confirmed by the Senate), and while this could ultimately enable the ousting of DeJoy (since only the board can fire him directly), we do not have the time for this process to play out.

Biden has apparently been reluctant to take the “nuclear option” of firing DeJoy’s supporters on the board — they can be fired “for cause” — but many observers assert that their complicity in this DeJoy plan to wreck USPS services would be cause enough.

One thing is for sure. The kinds of changes that DeJoy is pushing through would be expensive and time consuming to unwind later on. And in the meantime, everybody — businesses and ordinary people alike — will suffer greatly at DeJoy’s hands. 

President Biden should act immediately to take any and all legal steps to get DeJoy out of the USPS before DeJoy can do even more damage to us all.

–Lauren–

How the “News Link Wars” Could Wreck the Web

As it stands right now, major news organizations — in league with compliant politicians around the world — seem poised to use the power of their national governments to take actions that could absolutely destroy the essentially open Web, as we’ve known it since Sir Tim Berners-Lee created the first operational web server and client browser at CERN in 1990.

Australia — home of the right-wing Rupert Murdoch empire — is in the lead of pushing this nightmarish travesty, but other countries around the world are lining up to join in swinging wrecking balls at Web users worldwide. 

Large Internet firms like Facebook and Google, feeling pressure to protect their income streams more than to protect their users, are taking varying approaches toward this situation, but the end result will likely be the same in any case — users get the shaft.

The underlying problem is that news organizations are now demanding to be paid by firms like Google and Facebook merely for being linked from them. The implications of this should be obvious — it creates the slippery slope where more and more sites of all sorts around the world would demand to be paid for links, with the result that the largest, richest Internet firms would likely be the last ones standing, and competition (along with choices available to users) would wither away. 

The current situation is still in considerable flux — seemingly changing almost hour by hour — but the trend lines are clear. Google had originally taken a strong stance against this model, rightly pointing out how it could wreck the entire concept of open linking across the Web, the Web’s very foundation! But at the last minute, it seems that Google lost its backbone, and has been announcing payoff deals to Murdoch and others, which of course will just encourage more such demands. At the moment Facebook has taken the opposite approach, and has literally cut off news from their Australian users. The negative collateral effects that this move has created make it unlikely that this can be a long-term action.

But what we’re really seeing from Facebook and Google (and other large Internet firms who are likely to be joining their ranks in this respect) — despite their differing approaches at the moment — is essentially their floundering around in a kind of desperation. They don’t really want (and/or don’t know how) to address the vast damage that will be done to the overall Web by their actions, beyond their own individual ecosystems. From a profit center standpoint this arguably makes sense, but from the standpoint of ordinary users worldwide it does not.

To use the vernacular, users are being royally screwed, and that screwing has only just begun.

Some observers of how the news organizations and their government sycophants are pushing their demands have called these actions blackmail. There is one universal rule when dealing with blackmailers — no matter how much you pay them, they’ll always come back demanding more. In the case of the news link wars, the end result if the current path is continued, will be their demands for the entire Web — users be damned.

–Lauren–

The Big Lie About “Cancel Culture” and Demands to Change Section 230

Claims of “cancel culture” seem to be everywhere these days. Almost every day, we seem to hear somebody complaining that they have been “canceled” from social media, and pretty much inevitably there is an accompanying claim of politically biased motives for the action.

The term “cancel culture” itself appears to have been pretty much unknown until several years ago, and seems to have morphed from the term “call-out culture” — which ironically is generally concerned with someone getting more publicity than they desire, rather than less.

Be that as it may, cancel culture complaints — the lion’s share of which emanate from the political right wing — are now routinely used to lambaste social media and other Internet firms, asserting that their actions are based on political statements with which the firms do not agree and (according to these accusations) seek to suppress.

However, even a casual inspection of these claims suggests that the actual issues in play are hate speech, violent speech, and dangerous misinformation and disinformation — not political viewpoints — and formal studies reinforce this observation, e.g., “False Accusation: The Unfounded Claim that Social Media Companies Censor Conservatives.”

Putting aside for now the fact that the First Amendment applies only to government actions against speech, even a cursory examination of the data reveals — confirmed by more rigorous analysis — not only that right-wing entities are overwhelmingly the source of most associated dangerous speech (though they are by no means the only source; there are sources on the left as well), but also that conservatives overall still retain prominent visibility on social media platforms, dramatically calling into question the claims of “free speech” violations.

Inextricably intertwined with this are various loud, misguided, and dangerous demands for changes to (and in some cases total repeal of) Communications Decency Act Section 230, the key legislation that makes all forms of Internet UGC — User Generated Content — practical in the first place.

And here we see pretty much equally unsound proposals (largely completely conflicting with each other) from both sides of the political spectrum, often apparently based on political motives and/or a dramatic ignorance of the negative collateral damage that would be done to ordinary users if such proposals were enacted.

The draconian penalties associated with various of these proposals — aimed at Internet firms — would almost inevitably lead not to the actually desired goals of the right or left, but rather to the crushing of ordinary Internet users, by vastly reducing (or even eliminating entirely) the amount of their content on these platforms — that is, the videos they create, comments, discussion forums, and everything else users want to share with others.

The practical effect of these proposals would be not to create more free speech or simply reduce hate and violent speech, misinformation and disinformation, but to make it impractical for Internet platforms to support user content — which is vast in scale beyond the imagination of most persons — in anything like the ways it is supported today. The risks would just be too enormous, and methodologies to meet the new demanded standards — even if we assume the future deployment of advanced AI systems and vast new armies of proactive moderators — do not exist and likely could never exist in a practical and affordable manner.

This is truly one of those “be careful what you wish for” moments, like asking the newly-released genie to “fix social media” and with a wave of his hand he eliminates the ability of anyone in the public — prominent or not, on the right or the left — to share their views or other content.

So as we see, complaints about social media are being driven largely by highly political arguments, but in reality involve enormously complex technical challenges at gigantic scales — many of which we don’t even fundamentally understand, given the toxic political culture of today.

As much as nobody would likely argue that Section 230 is perfect, I have yet to see any realistic proposals to change it that would not make matters far worse — especially for ordinary users who largely don’t understand how much they have to lose in these battles. 

Like democracy itself, which has been referred to as “the worst possible system of governance, except for all the others” — buying into the big lie of cancel culture and demands to alter Section 230 is wrong for the Internet and would be terrible for its users.

–Lauren–

The Challenges of Moderating User Content on the Internet (and a Bit of History)

I increasingly suspect that the days of large-scale public distribution of unmoderated UGC (User Generated Content) on the Internet may shortly begin drawing to a close in significant ways. The most likely path leading to this over time will be a combination of steps taken independently by social media firms and future legislative mandates.

Such moderation at scale may follow the model of AI-based first-level filtering, followed by layers of human moderators. It seems unlikely that today’s scale of postings could continue under such a moderation model, but future technological developments may well turn out to be highly capable in this realm.
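A schematic sketch of that layered model, with an invented classifier and invented thresholds, might route only the uncertain middle band of content to human reviewers:

    # Toy illustration of AI-first, human-second moderation routing.
    # The classifier and thresholds are placeholders, not a real system.
    def moderate(post, classify):
        score = classify(post)  # 0.0 = clearly fine, 1.0 = clearly violating
        if score >= 0.95:
            return "removed automatically"
        if score <= 0.05:
            return "published"
        return "queued for human review"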

Back in 1985 when I launched my “Stargate” experiment to broadcast Usenet Netnews over the broadcast television vertical blanking interval of national “Superstation WTBS,” I decided that the project would only carry moderated Usenet newsgroups. Even more than 35 years ago, I was concerned about some of the behavior and content already beginning to become common on Usenet. My main related concerns back then did not involve hate speech or violent speech — which were not significant problems on the Net at that point — but human nature being what it is I felt that the situation was likely to get much worse rather than better.

What I had largely forgotten in the decades since then, though, until I did a Google search on the topic today, is the level of animosity that decision earned me at the time. (A great deal of original and later information on Stargate is still online, including various of my relevant messages in very early mailing list archives that will likely long outlive me.) My determination for Stargate to only carry moderated groups triggered cries of “censorship,” but I did not feel that responsible moderation equated with censorship — and that is still my view today.

And now, all these many years later, it’s clear that we’ve made no real progress in these regards. In fact, the associated issues of abuse of unmoderated content in hateful and dangerous ways makes the content problems that I was mostly concerned about back then seem like a soap bubble popping, compared with a nuclear bomb detonating now.

We must solve this. We must begin serious and coordinated work in this vein immediately. And my extremely strong preference is that we deal with these issues together as firms, organizations, customers, and users — rather than depend on government actions that, if history is any guide, will likely do enormous negative collateral damage.

Time is of the essence.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab (repost with new introduction)

The post below was originally published on 10 August 2019. In light of recent events, particularly the storming of the United States Capitol by a violent mob — resulting in five deaths — and subsequent actions by major social media firms relating to the outgoing President Donald Trump (terms of service enforcement actions by these firms that I do endorse under these extraordinary circumstances), I feel that the original post is again especially relevant. While the threats of moves by the Trump administration against CDA Section 230 are now moot, it is clear that 230 will be a central focus of Congress going forward, and it’s crucial that we all understand the risks of tampering with this key legislation that is foundational to the availability of responsible speech and content on the Internet. –Lauren–

– – – – – – – – – –

The Right’s (and Left’s) Insane Internet Content Power Grab
(10 August 2019)

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze the politicians and others making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of politicians’ political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best of AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Recommendation: Do Not Install or Use Centralized Server Coronavirus (COVID-19) Contact Tracing Apps

Everyone, I hope you and yours are safe and well during this unprecedented pandemic.

As I write this, various governments are rushing to implement — or have already implemented — a wide range of different smartphone apps purporting to be for public health COVID-19 “contact tracing” purposes. 

The landscape of these is changing literally hour by hour, but I want to emphasize MOST STRONGLY that not all of these apps are created equal, and I urge you not to install various of these unless you are required to by law — which can indeed be the case in countries such as China and Poland, just to name two examples.

Without getting into deep technical details here, there are basically two kinds of these contact tracing apps. The first is apps that send your location or other contact-related data to centralized servers (whether the data being sent is claimed to be “anonymous” or not). Regardless of promised data security and professed limitations on government access to and use of such data, I do not recommend voluntarily choosing to install and/or use these apps under any circumstances.

The other category of contact tracing apps uses local phone storage and never sends your data to centralized servers. This is by far the safer category, and it includes the recently announced Apple-Google Bluetooth contact tracing API, now being adopted in some countries (including Germany, which just announced that due to privacy concerns it has changed course from its original plan of using centralized servers). In general, installing and using these local storage contact tracing apps is vastly less problematic and far safer than using centralized server contact tracing apps.
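To illustrate the decentralized idea, here is a heavily simplified Python sketch in the spirit of the Apple-Google design; it is not their actual cryptography or API, and the key sizes and timing here are placeholders:

    # Simplified illustration of locally generated, rotating identifiers.
    import hashlib, os, time

    day_key = os.urandom(16)  # generated fresh on the device each day, kept local

    def rolling_id(key, interval):
        """Derive a short-lived, unlinkable broadcast identifier."""
        return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

    # A new identifier is broadcast over Bluetooth roughly every 10 minutes;
    # phones also record, locally, the identifiers they hear from others.
    beacon = rolling_id(day_key, int(time.time() // 600))

    # Only if users test positive and consent are their recent day keys
    # published, letting other phones check for matches locally.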

Even if you personally have 100% faith that your own government will “do no wrong” with centralized server contact tracing apps — either now or in the future under different leadership — keep in mind that many other persons in your country may not be as naive as you are, and will likely refuse to install and/or use centralized server contact tracing apps unless forced to do so by authorities.

Very large-scale acceptance and use of any contact tracing apps are necessary for them to be effective for genuine pandemic-related public health purposes. If enough people won’t use them, they are essentially worthless for their purported purposes.

As I have previously noted, various governments around the world are salivating at the prospect of making mass surveillance via smartphones part of the so-called “new normal” — with genuine public health considerations as secondary goals at best.

We must all work together to bring the COVID-19 disaster to an end. But we must not permit this tragic situation to hand carte blanche permissions to governments to create and sustain ongoing privacy nightmares in the process. 

Stay well, all.

–Lauren–

Coronavirus Reactions Creating Major Internet Security Risks

As vast numbers of people suddenly work from home in reaction to the coronavirus pandemic, doctors switch to heavy use of video office visits, and more critical information than ever is thrust onto the Internet, the risks of major security and privacy disasters that will long outlast the pandemic are rising rapidly.

For example, the U.S. federal government is suspending key aspects of medical privacy laws to permit the use of “telemedicine” via commercial services that have never been certified to be in compliance with the strict security and privacy rules associated with HIPAA (the Health Insurance Portability and Accountability Act). The rush to provide more remote access to medical professionals is understandable, but we must also understand the risks of data breaches that, once they have occurred, can never be reversed.

Sloppy computer security practices that have long been warned against are now coming home to roost, and the crooks as usual are way ahead of the game.  

The range of attack vectors is both broad and deep. Many firms have never prepared for large-scale work at home situations, and employees using their own PCs, laptops, phones, or other devices to access corporate networks can represent a major risk to company and customer data. 

Fake websites purporting to provide coronavirus information and/or related products are popping up in large numbers around the Net, all with nefarious intent: to spread malware, steal your accounts, or rob you in other ways.

Even when VPNs (Virtual Private Networks) are in use, malware on employee personal computers may happily transit VPNs into corporate networks. Commercial VPN services introduce their own risk factors, both from potential flaws in their implementations and from the basic technical limitations inherent in routing such traffic through a third party. Whenever possible, corporate users should avoid third-party VPN services, and firms and other organizations that rely on VPNs should deploy “in-house” VPN systems, provided they truly have the technical expertise to do so safely.

But far better than VPNs are “zero trust” security models such as Google’s “BeyondCorp” (https://cloud.google.com/beyondcorp), which can provide drastically stronger security without the disadvantages and risks of VPNs.

There are even more basic issues in focus. Most users still refuse to enable 2-factor (aka “2-step”) verification systems (https://www.google.com/landing/2step/) on services that support them, putting themselves at continuous risk of successful phishing attacks that can result in account hijacking and worse.
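For those curious about what is under the hood, the common “authenticator app” flavor of 2-factor verification is just a time-based one-time password (TOTP, RFC 6238). Here is a minimal sketch in Python using only the standard library, with the usual defaults (30-second steps, 6 digits, HMAC-SHA1); the secret shown is a made-up example:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # Time-based one-time password per RFC 6238 / RFC 4226.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval           # current 30-second time step
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # "dynamic truncation"
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # "JBSWY3DPEHPK3PXP" is a demonstration secret, not a real account's key.
    print(totp("JBSWY3DPEHPK3PXP"))

The point is that both sides can compute the code from a shared secret and the clock, so a stolen password alone is no longer enough to hijack the account.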

I’ve been writing about all of this for many years here in this blog and in other venues. I’m not going to make a list here of my many relevant posts over time — they’re easy enough to find. 

The bottom line is that the kind of complacency that has been the hallmark of most firms and most users when it comes to computer security is even less acceptable now than ever before. It’s time to grow up, bite the bullet, and expend the effort — which in some cases isn’t a great deal of work at all! — to secure your systems, your data, and yes, your life and the lives of those that you care about.

Stay well.

–Lauren–

Iowa Screams: Don’t Trust High-Tech Elections!

For years — actually for decades — those of us in the Computer Science community who study election systems have warned, with almost total unanimity, against the rise of electronic voting, Internet voting, and more recently smartphone/app-based voting systems. My colleagues and I have written and spoken on this topic many times. Has anyone really been listening? Apparently very few!

We have pointed out repeatedly the fundamental problems that render high-tech election systems untrustworthy — much as “backdoors” to strong encryption systems are flawed at foundational levels.

Without a rigorous “paper trail” to back up electronic votes, knowing for sure when an election has been hacked is technically impossible. Even with a paper trail, getting authorities to actually use it can be enormously challenging. Hacking contests against proposed e-voting systems are generally of little value, since the most dangerous attackers won’t participate in those — they’ll wait for the real elections to do their undetectable damage!
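To see why the paper matters, consider the simplest kind of post-election check: compare a random sample of physical ballots against the corresponding electronic records. This toy Python illustration (my own, not a production risk-limiting audit) makes the logic plain:

    import random

    def audit(paper_ballots, electronic_records, sample_size):
        # Compare a random sample of paper ballots against the machine's
        # records; any mismatch is evidence the electronic tally is suspect.
        indices = random.sample(range(len(paper_ballots)), sample_size)
        return [i for i in indices
                if paper_ballots[i] != electronic_records[i]]

    paper = ["A"] * 900 + ["B"] * 100
    electronic = list(paper)
    electronic[5] = "B"              # one quietly altered electronic record

    # May or may not catch the single alteration at this sample size, which
    # is why real risk-limiting audits choose sample sizes statistically.
    print(audit(paper, electronic, sample_size=200))

With no paper trail there is simply no independent record to check against: a hacked electronic tally is mathematically indistinguishable from an honest one.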

Of course it doesn’t help when the underlying voting models are just this side of insane. Iowa’s caucuses have become a confused mess on every level. Caucuses throughout the U.S. should have been abandoned years ago. They disenfranchise large segments of the voting population who cannot spend the hours that caucusing demands, rather than the few minutes that ordinary voting requires. Not only should the Democratic party have eliminated caucuses, it should no longer permit tiny states whose demographics are wholly unrepresentative of the party — and of the country as a whole — to be so early in the primary process.

In the case of Iowa (and it would have been Nevada too, but they’ve reportedly abandoned plans to use the same flawed app), individual voters weren’t using their smartphones to vote, but caucus locations — almost 1700 of them in Iowa — were supposed to use the app (which melted down) to report their results. And of course the voice phone call system designated as the reporting backup — the way these reports had traditionally been made — collapsed under the strain when the app-based system failed.

Some areas in the U.S. are already experimenting with letting larger and larger numbers of individual voters use their smartphones and apps to vote. It seems so obvious. So simple. They just can’t resist. And they’re driving their elections at 100 miles an hour right toward a massive brick wall.

Imagine — just imagine! — what the reactions would be during a national election if problems like Iowa’s occurred then on a much larger scale, especially given today’s toxic conspiracy-theory environment.

It would be a nuclear dumpster fire of unimaginable proportions. The election results would be tied up in courts for days, weeks, months — who knows?

We can’t take that kind of risk. Or if we do, we’re idiots and deserve the disaster that is likely to result.

Make your choice.

–Lauren–

How Some Software Designers Don’t Seem to Care About the Elderly

One of the most poignant ironies of the Internet is that at the very time it has become increasingly difficult for anyone to conduct their day-to-day lives without using the Net, some categories of people are increasingly being treated badly by many software designers. The victims of these attitudes include various special needs groups — the visually and/or motor impaired are just two examples — but the elderly are a particular target.

Working routinely with extremely elderly persons who are very active Internet users (including some in their upper 90s!), I’m particularly sensitive to the difficulties that they face keeping their Net lifelines going.

Often they’re working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official “end of life” for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult, and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other everyday purposes for which all of us now depend on these technologies.

This is a difficult state of affairs, to say the least.

There’s an aspect of this that is even worse. It’s attitudes! It’s the attitudes of many software designers, which suggest that they apparently don’t really care much about this class of users — or at all.

They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function. 

We can certainly stipulate that using old browsers and old operating systems is dangerous. In a perfect world, resources would be available to get everyone out of this situation.

However, we don’t exist in a perfect world, and these users, who are already often so disadvantaged in so many other ways, need support from software designers, not disdain or benign neglect.

A current example of these users being left behind is the otherwise excellent, open source “Discourse” forum software. I use this software myself, and it’s a wonderful project.

Recently they announced that they would be pulling all support for Internet Explorer (except for limited read-only access) from the Discourse software. Certainly they are not the only site or project dropping support for old browsers, but this fact does not eliminate the dilemma.

I despise Internet Explorer. And yes, old computers running old OS versions and old browsers represent security risks to their users. Definitely. No question about it. Yet what of the users who don’t understand how to upgrade? Who don’t have anyone to help them upgrade? Are we to tell them that they matter not at all? Is the plan to try to ignore them as much as possible until they’re all dead and gone? Newsflash: This category of users will always exist!

This issue rose to the top of my morning queue today when I saw a tweet from Jeff Atwood (@codinghorror). Jeff is the force behind the creation and evolution of Discourse, and was a co-founder of Stack Exchange. He does seriously good work.

Yet this morning we engaged in the following tweet thread:

Jeff: At this point I am literally counting the days until we can fully remove IE11 support in @discourse (June 1st 2020)

Lauren: I remain concerned about the impact this will have on already marginalized users on old systems without the skills or help to switch to other browsers. They have enough problems already!

Jeff: Their systems are so old they become extremely vulnerable to hackers and exploits, which is bad for their health and the public health of everyone else near them. It becomes an anti-vaccination argument, in which nobody wins.

Lauren: Do you regularly work with extremely elderly people whose only lifelines are their old computers? Serious question.

Somewhere around this point, he closed down the dialogue by blocking me on Twitter.

This was indeed his choice to make, but it seems a bit sad, given that I had previously had more fruitful discussions of this matter on the main Discourse discussion forum itself.

Of course his anti-vaxx comparison is inherently flawed. There are a variety of programs to help people who can’t otherwise afford important vaccinations to receive them. By comparison, vast numbers of elderly persons (often living in isolation) are on their own when dealing with their computers.

The world will keep spinning after Discourse drops IE support.

Far more important than this particular case, though, is the attitude being expressed by so many in the software community: an attitude suggesting that many highly capable software engineers don’t really appreciate these users, or the kinds of real-world problems that can prevent them from making even relatively simple changes or upgrades to systems that they need to keep using as much as anyone else does.

And that’s an unnecessary tragedy.

–Lauren–

The Right’s (and Left’s) Insane Internet Content Power Grab

Rumors are circulating widely — and some news sources claim to have seen actual drafts — of a possible Trump administration executive order aimed at giving the government control over content at large social media and other major Internet platforms. 

This effort is based on one of the biggest lies of our age — the continuing claims mostly from the conservative right (but also from some elements of the liberal left) that these firms are using politically biased decisions to determine which content is inappropriate for their platforms. That lie is largely based on the false premise that it’s impossible for employees of these firms to separate their personal political beliefs from content management decisions.

In fact, there is no evidence of political bias in these decisions at these firms. It is completely appropriate for these firms to remove hate speech and related attacks from their platforms — most of which does come from the right (though not exclusively so). Nazis, KKK, and a whole array of racist, antisemitic, anti-Muslim, misogynistic, and other violent hate groups are disproportionately creatures of the political right wing. 

So it is understandable that hate speech and related content takedowns would largely affect the right — because they’re the primary source of these postings and associated materials. 

At the scales that these firms operate, no decision-making ecosystem can be 100% accurate, and so errors will occur. But that does not change the underlying reality that the “political bias” arguments are false. 
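The arithmetic of scale is worth spelling out. Using purely hypothetical but plausible round numbers for a large platform:

    posts_per_day = 500_000_000   # hypothetical volume for a major platform
    accuracy = 0.999              # an optimistic per-decision accuracy rate
    errors = posts_per_day * (1 - accuracy)
    print(f"{errors:,.0f} wrong moderation calls per day")  # 500,000

Even a 99.9% accurate process produces half a million mistakes every single day, each one a potential anecdote for someone claiming bias.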

The rumored draft Trump executive order would apparently give the FCC and FTC powers to determine if these firms were engaging in “inappropriate censorship” — the primary implied threat appears to be future changes to Section 230 of the Communications Decency Act, which broadly protects these (and other) firms and individuals from liability for materials that other parties post to their sites. In fact, 230 is effectively what makes social media possible in the first place, since without it the liability risks of allowing users to post anything publicly would almost certainly be overwhelming. 

But wait, it gets worse!

At the same time that these political forces are making the false claims that content is taken down inappropriately from these sites for political purposes, governments and politicians are also demanding — especially in the wake of recent mass shootings — that these firms immediately take down an array of violent postings and similar content. The reality that (for example) such materials may be posted only minutes before shootings occur, and may be widely re-uploaded by other users in an array of formats after the fact, doesn’t faze those making these demands, who apparently either don’t understand the enormous scale on which these firms operate, or simply don’t care about such truths when they get in the way of political pandering.

The upshot of all this is an insane situation — demands that offending material be taken down almost instantly, but also demands that no material be taken down inappropriately. Even with the best AI algorithms and a vast human monitoring workforce, these dual demands are in fundamental conflict. Individually, neither is practical. Taken together, they are utterly impossible.

Of course, we know what’s actually going on. Many politicians on both the right and left are desperate to micromanage the Net, to control it for their own political and personal purposes. For them, it’s not actually about protecting users, it’s mostly about protecting themselves. 

Here in the U.S., the First Amendment guarantees that any efforts like Trump’s will trigger an orgy of court battles. For Trump himself, this probably doesn’t matter too much — he likely doesn’t really care how these battles turn out, so long as he’s managed to score points with his base along the way. 

But the broader risks of such strategies attacking the Internet are enormously dangerous, and Republicans who might smile today about such efforts would do well to imagine similar powers in the hands of a future Democratic administration. 

Such governmental powers over Internet content are far too dangerous to be permitted to the administrations of any party. They are anathema to the very principles that make the Internet great. They must not be permitted to take root under any circumstances.

–Lauren–

Another Breach: What Capital One Could Have Learned from Google’s “BeyondCorp”

Another day, another massive data breach. This time some 100 million people in the U.S. were affected, along with millions more in Canada. Reportedly the criminal hacker gained access to data stored on Amazon’s AWS systems. The fault was apparently not with AWS, but with a misconfigured firewall associated with Capital One, the bank whose credit card customers and card applicants were the victims of this attack.

Firewalls can be notoriously and fiendishly difficult to configure correctly, and often present a target-rich environment for successful attacks. The thing is, firewall vulnerabilities are not headline news — they’re an old story, and better solutions to providing network security already exist.

In particular, Google’s “BeyondCorp” approach (https://cloud.google.com/beyondcorp) is something that every enterprise involved in computing should make itself familiar with. Right now!

BeyondCorp techniques are how Google protects its own internal networks and systems from attack, with enormous success. In a nutshell, BeyondCorp is a set of practices that effectively puts “zero trust” in the networks themselves, moving access control and other authentication elements to individual devices and users. This eliminates traditional firewalls (and in nearly all instances, VPNs) because there is no longer any need for such devices or systems that, once breached, give an attacker access to internal goodies.
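As a rough illustration of the difference (my own simplified sketch in Python, not Google’s implementation; the inventories and names are hypothetical): a perimeter model asks whether a request came from inside the network, while a zero trust model authenticates the user and the device, and authorizes the specific resource, on every single request:

    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        device_id: str
        resource: str
        source_ip: str

    # Hypothetical inventories; real deployments use managed device
    # certificates and an identity provider, not in-memory sets.
    TRUSTED_DEVICES = {"laptop-042"}         # managed, patched, attested devices
    AUTHORIZED = {("alice", "payroll-db")}   # per-user, per-resource grants

    def perimeter_allows(req: Request) -> bool:
        # Traditional firewall thinking: network location equals trust.
        # One misconfiguration, or one compromised internal host, defeats it.
        return req.source_ip.startswith("10.")

    def zero_trust_allows(req: Request) -> bool:
        # Every request is authenticated and authorized individually;
        # the network it arrived from earns no trust at all.
        return (req.device_id in TRUSTED_DEVICES
                and (req.user, req.resource) in AUTHORIZED)

    req = Request("alice", "laptop-042", "payroll-db", "203.0.113.7")
    print(perimeter_allows(req))   # False: outside the old perimeter
    print(zero_trust_allows(req))  # True: user and device both check out

Under this model there is no single firewall rule whose misconfiguration hands an attacker the keys to everything, which is precisely the failure mode at issue here.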

If Capital One had been following BeyondCorp principles, there’d likely be 100+ million fewer potentially panicky people today.

–Lauren–