My Mock-Up for Labeling Fake News on Google Search

Here is my mock-up of one way to label fake news on Google Search Results Pages, in the style of the Google malware site warnings. The warning label link would go to a help page explaining the methodology of the labeling.


I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Biting the Bullet: It’s Time to Require 2-Factor Verified Logins

For years now, security and privacy professionals — myself included — have been urging the use of 2-factor authentication (aka 2SV, 2-step verification, 2FA, multi-factor, etc.) systems for logging into Web and other computer-based portals. Regardless of the name, these authentication systems all leverage the same basic principle — to gain access requires “something you know” and “something you have” — broadly defined. (And by the way, the inane and insecure concept of “security questions” doesn’t satisfy the latter category!)

The fundamental point is that these systems require additional information beyond the traditional username and password pair, which has long demonstrated its frailty as used by most people.

Even if you don’t engage in notably bad password practices like sharing passwords among sites or making laughably weak password choices, usernames and passwords alone are incredibly vulnerable to basic phishing attacks that attempt to convince you to enter these credentials into (often very convincing) faked login pages.

The lack of widespread adoption of 2-factor systems has been the gift that keeps on giving to crooks, scam artists, Russian dictators, and a long list of other lowlife scum. The result has been what seems like almost daily reports of system penetrations and data thefts.

Are 2-factor systems foolproof? No. There are a wide range of technologies and methodologies that can be used to implement these systems, and they vary significantly in theoretical and practical security effectiveness. But despite some critics, they all share one thing in common — they’re all much better than just a bare username and password alone!

Choices for 2-factor systems include text messages, automated voice calls, standalone authentication apps and devices, USB/NFC (e.g. FIDO U2F) crypto keys, and even printable key codes. And more.
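For the curious, the standalone authenticator apps in that list typically implement the open TOTP standard (RFC 6238): a shared secret plus the current time yields a short-lived numeric code, supplying the “something you have” factor. A minimal sketch in Python using only the standard library (the secret shown is the RFC’s published test value, not a real credential):

```python
import base64, hmac, struct, time

def totp(secret_b32, for_time=None, interval=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1, default parameters)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = int(time.time() if for_time is None else for_time)
    counter = t // interval                       # RFC 4226 "moving factor"
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation step
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59 -> "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))
```

Both the site and the app compute this same code independently; no code ever travels over the network ahead of time, which is what makes the scheme so much stronger than a password alone.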

With all of these choices, why is there comparatively so little uptake of 2-factor systems in the consumer sphere? (In the corporate sphere there has been more adoption, but not nearly enough there either.)

Why don’t most users take advantage of 2-factor systems? There are two primary, interrelated reasons.

First is the psychology of the problem. Most people just don’t believe in their gut that a breach is going to happen to them — they feel it’s always going to be someone else. They just don’t want to “hassle” with anything additional to protect themselves, no matter how frequently we urge the use of 2-factor.

It’s much the same kind of “it won’t be me” reasoning that leads most people to not appropriately back up the data on their home (or often their office) systems.

Of course, once their account is breached or their disk crashes, they suddenly care very deeply about these issues, and people like me get those 3 AM calls where we have to bite our tongues to avoid saying “Well, I told you so.”

However, it would be unfair to blame the users entirely in this context, because — truth be told — many 2-factor implementations suck (that’s a computer science technical term, by the way) and are indeed a genuine hassle to use.

Some require the use of text messages (not everyone has a text-message-capable phone, as the Social Security Administration learned in its incompetent recent aborted attempt to require 2-factor authentication). Some require that you receive a new authentication token every time you log in (overkill for most ordinary consumers) — rather than remembering that a given device has already been authenticated for a span of time. Some are slow. Some are buggy. Some screw up and lock users out of their accounts.

The bottom line is that a lousy 2-factor system is going to drive users batty.

But that’s not an excuse, because it is possible to do 2-factor in a correct and user-friendly manner, with appropriate choices for consumer and business/organization requirements.
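One concrete example of doing this in a user-friendly way is the “remember this device” pattern mentioned above: after a successful second-factor check, the site issues a long-lived signed token to that browser or device, and skips the extra prompt while the token remains valid. Here is a minimal sketch; the token format, function names, and the 30-day window are my own illustrative assumptions, not any particular site’s implementation:

```python
import hashlib, hmac, os, time

SERVER_KEY = os.urandom(32)           # per-deployment signing secret (illustrative)
TRUST_WINDOW = 30 * 24 * 3600         # trust a device for 30 days (assumption)

def issue_device_token(user_id, now=None):
    """After a successful 2-factor login, mint a signed 'trusted device' token."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SERVER_KEY, ("%s:%s" % (user_id, ts)).encode(),
                   hashlib.sha256).hexdigest()
    return "%s:%s:%s" % (user_id, ts, sig)

def second_factor_needed(token, user_id, now=None):
    """True if the user must complete the 2-factor step again."""
    if not token:
        return True
    try:
        uid, ts, sig = token.split(":")
    except ValueError:
        return True                   # malformed token: re-prompt
    expected = hmac.new(SERVER_KEY, ("%s:%s" % (uid, ts)).encode(),
                        hashlib.sha256).hexdigest()
    if uid != user_id or not hmac.compare_digest(sig, expected):
        return True                   # wrong user or forged signature
    current = now if now is not None else time.time()
    return current - int(ts) > TRUST_WINDOW
```

The point is that one well-placed 2-factor prompt per device per month is a very different user experience from one per login — and it preserves nearly all of the security benefit.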

By far the best 2-factor implementation I know of is Google’s. Their world class privacy/security teams have for years now been deploying 2-factor with the full range of choices and options I noted above. This is the way it should be done.

Yet even Google has to deal with the “it won’t happen to me” mindset syndrome on the part of users.

This is why I am now convinced that at least the major Web firms must begin moving gradually toward the mandatory use of 2-factor methods for users accessing these sites.

Just as responsible websites won’t permit a user to create an account without a password, and many attempt to prevent users from selecting incredibly weak passwords, we must start the process of requiring 2-factor use on a routine basis, both for the protection of users and of the companies that are serving them — and for the protection of society in a broader sense as well. We can no longer permit this to be simply an optional offering that vast numbers of users ignore.

This will indeed be a painful bullet to bite in some important respects. Doing 2-factor properly isn’t cheap, but it isn’t rocket science either. High quality commercial, proprietary, and open source solutions all exist. User education will be critical. There will be some user backlash to be sure. Poor quality 2-factor systems will need to be upgraded on a priority basis before the process of requiring 2-factor use can even begin.

It’s significant work, but if we care about our users (and stockholders!) we can no longer keep kicking this can down the road. 

The sorry state of most user authentication systems that don’t employ 2-factor has been a bonanza for all manner of crooks and hackers, both for the ones “only” seeking financial gain and for the ones seeking to undermine democratic processes. 

The deployment and required use of quality 2-factor systems won’t completely seal the door against these evil forces, but will definitely make their tasks significantly more difficult. 

We can no longer accept anything less.

–Lauren–

Fake News and Google: What Does a Top Google Search Result Really Mean?

Controversy continues to rage over how Holocaust denial sites and related YouTube videos have achieved multiple top and highly-ranked search positions on Google for various forms and permutations of the question “Did the Holocaust really happen?” — and what, if anything, Google intends to ultimately do about these outright racist lies achieving such search results prominence.

If you’re like most Internet users, you’ve been searching on Google and viewing the resulting pages of blue links for many years now.

But here’s something to ponder that you may not have ever really stopped to think about in depth: What does a top or otherwise high search result on Google really mean?

This turns out to be a remarkably complex issue.

The ranking of search results is arguably the most crucial aspect of core search functionalities. I don’t know the details of how Google’s algorithms make those determinations, and if I did know I couldn’t tell you — this is getting into “crown jewel” territory. This is one of Google’s most important and best kept secrets.

It’s not just important from business and competitive aspects, but also in terms of serving users well.

Google is continually bombarded by folks using all manner of “dirty tricks” to try to boost their search ranks and visibility — in the parlance of the trade, Black Hat SEO (Search Engine Optimization). Not all SEO per se is evil — simply having a well organized site using modern coding practices is essentially a perfectly acceptable and recommended kind of “White Hat” SEO.

But if details of Google’s ranking algorithms were known, they could theoretically help underhanded players use various technical tricks to try to “game” the system to achieve fraudulently high search ranks.

It’s crucial not to confuse search links that are the results of these Google algorithms — technically termed “organic” or “natural” search results — with paid ad links that may appear above those organic results. Google always clearly marks the latter as “Ad” or “Sponsored” and these must always be considered in the context of being paid insertions that are dependent on the advertisers’ continuing ability to pay for them.

Until a relatively few years ago, Google’s organic search results always represented “simply” what Google felt were the “best” or “most relevant” link results for a given user’s query.

But the whole situation became enormously more complex when Google began offering what it deemed to be actual answers to questions posed in some queries, rather than only the familiar set of links.

In simple terms, such answers are typically displayed above (and/or to the right of) the usual search result links. These can come from a wide variety of sources, often related to the top organic search result, with one prominent source being Wikipedia.

Google’s philosophy about this — repeatedly stated publicly — is that if a user is asking a straightforward question and Google knows the straightforward answer, it can make sense to provide that answer directly rather than only the pages of blue links.

This makes an enormous amount of good sense.

Yet it also introduced a massive complication which is at the foundation of the Holocaust denial and other fake news, fake information controversies.

Google Search has earned enormous trust around the world. Users assume that when Google ranks organic results to a query, it does so based on a sound, scientific analysis.

And here’s the absolutely crucial point: It is my belief, based on continuing interactions with Google users and other data I’ve been collecting over an extended period, that most Google users do not commonly differentiate between what Google considers to be “answers” and what Google considers “merely” to be ordinary search result links.

That is, users overall have come to trust Google to such an extent that they assume Google would not respond to a specific question with highly ranked links that are outright lies and falsifications.

Again, Google doesn’t consider all of those to be “specific answers” — Google rather considers the vast majority to be simply the “best” or “most relevant” links based on the internal churning of their algorithm.

Most Google users don’t make this distinction. To them, the highest ranking organic links that appear in response to questions are assumed to likely be the “correct” answers, since they can’t imagine Google knowingly highly ranking fake news or false information in response to such queries.

As Strother Martin’s character “Captain” famously proclaimed in the 1967 film “Cool Hand Luke” – “What we’ve got here is failure to communicate.”

Part of the problem is that Google’s algorithms appear outwardly to be tuned toward topics where specific answers are not controversial. It’s one thing to see a range of user-perceived answers to a question like “What is the best flavor of ice cream?” But when it comes to the truth of the Holocaust for example, there is no room for maneuvering, any more than there is when answering other fact-based questions, such as “Is the moon made of green cheese?”

Many observers are calling for Google to manually eliminate or manually downrank outright lies like the Holocaust denials.

I am unenthusiastic about such approaches. I would much prefer that scalable, automated methods be employed in these contexts whenever possible. Some governments are already proposing false “solutions” that amount to horrific new censorship regimes (that could easily make the existing and terrible EU “Right To Be Forgotten” look like a veritable picnic by comparison).

I would much prefer to see this set of issues resolved via various forms of labeling to indicate highly ranked items that are definitively false (please see: Action Items: What Google, Facebook, and Others Should Be Doing RIGHT NOW About Fake News).

Also important could be explicit notices from Google indicating that it does not endorse such links in any way and does not represent them as “correct answers” to the associated queries. A general educational outreach by Google to help users better understand its view of what highly ranked search results actually represent could also be very useful.

As emotionally upsetting as the fake news and fake information situation has become, especially given the prominent rise of violent, racist, often politically motivated lies in this context, there are definitely ways forward out of this current set of dilemmas. But that requires both us and the firms involved to acknowledge that serious actions are needed, and that the status quo is definitely no longer acceptable.

–Lauren–

Administrivia: Observing Google: “Tough Love”

Lately I’ve been receiving a significant spike in email from readers asking various forms of the question:

What is your true stance regarding Google?

In particular, they seem unable to grasp how I can send out one blog post or other item that is significantly critical of some aspect of Google, then another post that is highly complimentary of a different aspect.

I view the question as frankly rather shallow and illogical. One might as well ask “What is your true opinion of life?”

Google is a great firm — a very large company of enormous complexity, operating at the leading edge of technology’s intersection with privacy, security, and one way or another, most other aspects of society.

It would be foolhardy in the extreme to evaluate Google as if it were some sort of monolithic whole (though the true “Google Haters” seem to do exactly that most of the time).

As for myself, when I believe that Google is making a mistake that is causing them to fall short of the high standards of which I feel they’re capable, I explicitly tell them so and I pull no punches in that analysis. When my view is that they’re doing great work (which is overwhelmingly more often the case) it’s my pleasure to say so clearly and explicitly.

If you wish to call this something akin to “tough love” regarding Google on my part, I won’t argue.

Be seeing you.

–Lauren–

Action Items: What Google, Facebook, and Others Should Be Doing RIGHT NOW About Fake News

Today is action items day, and there isn’t a moment to lose before someone gets killed as a result of the fake news scourge. It nearly happened a couple of days ago, when some wacko invaded a pizza restaurant and shot it up looking for the youthful “sex slaves” that the fake “Pizzagate” story claims exist — a total fabrication created out of whole cloth, and part of the complex of fake anti-Hillary sex stories being promoted even by highly-placed wackos in Trump’s White House circle. In fact, there are already new fake stories circulating regarding the shooting itself.

There are some ongoing efforts to begin dealing with fake and false news at the big firms. Facebook appears to be running an experiment asking some users to rate how “misleading” some link titles might be. This will no doubt collect some interesting data and may be a small portion of solutions, but of course cannot alone solve the underlying problems.

Having spent enough time inside Google to have some sense of how the world looks at Google Scale (i.e. “Big” with a Capital “B”), I am convinced that efforts to deal with the Fake/False News problem must primarily be based on algorithmic, automated systems. Humans will also still have important roles to play in this process in terms of tagging, flagging, and verification at least — especially for items that are suspected or verified fakes but are still trending upward very rapidly.

So, Action Item #1: We should be looking at automated systems for doing the bulk of the first level work to detect fakes, or else we’ll be swamped from the word go.

And I believe that the foundational resources to get this done do exist. Google and Facebook (just to name two obvious examples) have powerful AI architectures that could be leveraged toward such tasks, given the will to do so.
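To make the idea concrete, here is a deliberately crude sketch of what a first-level automated triage filter might look like. This is purely my own illustration (not any firm’s actual system, and all names and thresholds are invented): it combines a few simple signals, such as source reputation, sharing velocity, and user flags, into a single “needs human review” score.

```python
# Hypothetical first-pass triage for suspected fake news items.
# Domains, weights, and thresholds below are all illustrative assumptions.

KNOWN_FAKE_DOMAINS = {"example-fake-news.test"}   # seeded from prior human verification
LOW_REPUTATION = {"example-clickbait.test"}

def triage_score(domain, shares_per_hour, flagged_by_users):
    """Combine crude signals into a 0..1 'needs human review' score."""
    score = 0.0
    if domain in KNOWN_FAKE_DOMAINS:
        score += 0.6                               # previously verified fake source
    elif domain in LOW_REPUTATION:
        score += 0.3
    score += min(shares_per_hour / 10000, 0.3)     # rapidly trending items get priority
    score += min(flagged_by_users / 100, 0.1)      # user flags are a weak, capped signal
    return min(score, 1.0)

def needs_review(domain, shares_per_hour, flagged_by_users, threshold=0.5):
    """Route high-scoring items to the human tagging/verification queue."""
    return triage_score(domain, shares_per_hour, flagged_by_users) >= threshold
```

A real system would of course use learned models over far richer signals; the point of the sketch is only the architecture — automated scoring does the bulk triage, and humans handle the items the filter surfaces.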

Action Item #2: We must understand the true dynamics of how fake and false news are shared — how they rapidly reach large numbers of users and push high into search results. It’s popular to simply assert that everyone believing/sharing these fake stories is just evil or stupid (or both).

That’s way too simplistic an assertion. Even over the very short time that my factsquad.com fake news data collection effort has been active, obvious patterns in the data are already emerging.

One pattern that hits you in the face immediately is that the vast majority of users who share fake news are not stupid and not evil, but they are very much confused by the misinformation surrounding them. There’s a sense that “Well, if it looks professional, or if this ranks highly in search, or if Facebook showed it to me, or my friends shared it with me, it at least might be true, there might be something to it somehow, so I’ll share it too!”

This appears to be a far, far larger group of users than the one actually generating and voluntarily wallowing in this trash. In fact, the latter group is voluntarily in its own “echo chambers” — and as with most any group of dedicated haters, Internet-based efforts to change their minds will likely be wasted.

But for a much larger segment of users who are misinformed, confused, and don’t even realize that they have become involuntarily trapped in echo chambers by fake and false news, there is definitely still hope.

This emphasizes a key point that various observers including myself have previously noted. Older users and other users with less Internet experience tend to believe items that look professional, that appear to be from sources that are visually attractive and seemingly structured in a more “news traditional” manner. On the other hand, younger users or other users with more Internet experience tend to care much less — or not at all — about the “professionalism” of the source and give much more credence to items that rank highly in search, are surfaced by services like Facebook, or are widely shared by their friends.

And this gets us to the crux of the matter. By and large, the Internet economy has evolved into a click-based popularity contest. Both in terms of search and social media, it is basically designed to surface content based on how many people appear to have interest in that content. That’s something of a simplification of course, but it’s fairly close to the mark. And let’s face it: given two stories presented as accurate — one that discusses how people eat pizza, the other an actually fake story describing a nonexistent child sex ring — which is likely to get the most clicks, and so the most revenue?

While a variety of the big fake news sites are related to persons with political motives, a large number are operated by individuals who have no political motives at all — they are “merely” creating false stories that they believe will get the most shares and “engagement” clicks, for their own monetary enrichment.

On the other hand, I’ll tell you as one of the individuals involved in Internet development for decades that we did not build and grow the Net to be a tool for paying people to post fake news, nor to use such false content to help elect a lying sociopath as President of the United States.

Yet the click-based Internet economy is what it is, and alternative models such as subscriptions have seen only limited success. Other concepts such as micropayments even less so.

So what are we to do? This brings us to …

Action Item #3: I continue to strongly feel that censorship is not the best answer to this set of problems, and that more information — not less — is the path toward solutions. Downranking — where fake stories would still exist but no longer be so prominently featured in search results or system shares — can be a viable approach if handled with caution. In particular, only the most serious and dangerous fake content would typically be considered for manual downranking. For most fake news situations, organic (natural) downranking is a much more desirable procedure.

And that’s where labeling comes in. If fake news that has managed to reach high search results and massive sharing were labeled as fake or in some other relevant distinctive manner, I believe that this would give some pause to that large group of confused users, result in less sharing of fakes, and ultimately in the organic downranking of many such stories.

What’s more, in comments I’ve received it’s clear that many users are desperate for help in evaluating the truth of the content that comes pouring in at them now. How can we really blame them for accepting false stories as real when we don’t even make the effort to point out and label the fakes that we definitely know about?

Obviously, detecting, evaluating, and labeling content at Internet scale — even if we restrict our efforts to highly trending and highly ranked items — is a very significant undertaking, even with the best of AI resources doing the bulk of the work. Such issues as the exact wording of labels can also be complex. Do we actually want to label a known false story as “false” per se? Snopes does this successfully at its relatively limited scale, though it doesn’t have particularly deep pockets (ironically but predictably, all manner of fake news stories are written and widely promulgated against Snopes). Another approach, as an alternative to a specific “false” label, would be assigning a kind of “confidence rank” to such stories — with the known fakes perhaps getting a rank of zero.
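To illustrate the “confidence rank” alternative in concrete terms, here is a toy sketch; the statuses, numeric values, and label wording are entirely my own assumptions, offered only to show the shape of the idea:

```python
# Toy sketch of a "confidence rank": map what is currently known about a
# story to a 0-100 confidence value and an optional display label.
# Statuses, numbers, and wording are illustrative assumptions.

RANKS = {
    "verified_fake": 0,       # known fakes get a rank of zero
    "disputed": 30,
    "unverified": 50,
    "corroborated": 80,
    "verified_true": 100,
}

def confidence_rank(status):
    return RANKS.get(status, 50)    # default: no information either way

def display_label(status):
    """Return a label for low-confidence items; None means no label is shown."""
    rank = confidence_rank(status)
    if rank == 0:
        return "LABELED FALSE (confidence 0/100)"
    if rank <= 30:
        return "DISPUTED (confidence %d/100)" % rank
    return None
```

A numeric rank sidesteps some of the wording problem: rather than flatly calling a story “false,” the system expresses how much (or how little) confidence it has in the item’s accuracy.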

As always, the devil is in the details, but I’m convinced that some combination of these or related concepts can be made to work, especially given that the status quo is no longer tenable.

Action Item #4: Parody as a test case. The ability of many (most?) people to recognize parody or satire on the Net (unless it is clearly labeled) can be very poor. I ran into this myself when I wrote April Fools’ columns for the CACM journal — even with that highly technical audience some readers assumed that what I thought was obvious and outrageous satire was actually real. The same thing happened with a satire video I released on YouTube years ago as well.

A significant number of the “fake news” stories are sourced from satire sites (that is, at least ostensibly satirical sites — many seem to call themselves satire in small print to try to cover fake items with clearly political motives, or mix fake and real items on their sites to cause even more confusion). Yet even items from known satire sources like “The Onion” — and the “Borowitz Report” from “The New Yorker” — frequently explode into mass visibility without any indication that they aren’t “legit” articles.
 
In some cases this is just by virtue of the fact that typical sharing or search results may give no obvious indication that these are satire or parody — and such items may be innocently shared to large numbers of persons as if they were serious items. In other cases, the sharer knows that they’re dealing with satire but purposely promotes the items as non-satire if this fits with their political agenda of the moment.
 
In either case, if such stories were clearly marked (as parody or satire, referencing the original source) in search results or in Facebook shares, Twitter feeds, etc., the purposeful and/or accidental damage they can do when they’re inappropriately interpreted by users as serious items could be significantly reduced.
 

Such specific labeling of individual items that are known to be originally sourced from self-proclaimed satire/parody sites — irrespective of their current share or search results links — could provide something of an initial proving ground for the overall labeling concept. If such items could be identified in the various search and sharing systems as having such sites as their origins, it could help to demonstrate the usefulness of this labeling technique on this specific class of material that would be relatively straightforward to target. User reactions to these labels could then be studied toward the launch of a possible much broader labeling initiative dealing with fake/false news in a more comprehensive manner.
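A toy sketch of that origin check follows, using the two known satire sources named above. To be clear about assumptions: a real system would maintain a far larger registry of self-proclaimed satire outlets and would trace an item back to its original source before checking; the function names here are my own.

```python
from urllib.parse import urlparse

def satire_label(url, origin_url=None):
    """Return a label if an item (or its original source URL, when known)
    comes from a known self-proclaimed satire outlet; None otherwise.
    Only two hardcoded examples here; a real system would use a registry."""
    for candidate in (origin_url, url):       # prefer the traced original source
        if not candidate:
            continue
        parsed = urlparse(candidate)
        host = parsed.netloc.lower()
        if host.startswith("www."):
            host = host[4:]
        if host == "theonion.com":
            return "Satire: originally published by The Onion"
        if host == "newyorker.com" and parsed.path.startswith("/humor/borowitz-report"):
            return "Satire: from The New Yorker's Borowitz Report"
    return None
```

Because the set of self-proclaimed satire sites is relatively small and self-identified, this class of labeling is far easier to get right than general fake-news determination — which is exactly what makes it a good proving ground.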

None of this will be easy, nor are these the only possible approaches. But we must immediately begin vigorously moving down the paths towards practical solutions to the serious, rapidly escalating issues of fake news and related problems on the Internet, unless we’re satisfied to be increasingly suffocated under a growing and ultimately disastrous deluge of lies.

–Lauren–

Study: Collecting URLs and Other Data About Fake/False News on the Net

Greetings. I have initiated a study to explore the extent of fake/false news on the Internet. Please use the form at:

https://factsquad.com

to report fake or false news found on traditional websites and/or in social media postings.

Any information submitted via this form may be made public after verification, with the exception of your name and/or email address if provided (which will be kept private and will not be used for any purposes other than this study).

URLs anywhere in the world may be reported, but please only report URLs whose contents are in English for now. Please only report URLs that are public and can be accessed without a login being required.

Thank you for participating in this study to better understand the nature and scope of fake/false news on the Net.

–Lauren–

Google Home Drops Insightful “Donald Trump Is Definitely Crazy” Search Answer

Two days ago, I uploaded the YouTube video linked below, which recorded the insightful response I received from Google Home to the highly relevant question: “Is Donald Trump Insane?” I noted Google’s accurate appraisal on Google+ and in my various public mailing lists. The next day (yesterday) the response was (and currently is) gone for the same query to Home — replaced by the generic: “I can do a search for that.”

Interestingly, this seems to have only occurred for responses from Google Home itself. The original (text-based) answer is currently still appearing for the same query made by keyboard or voice to Google Search through conventional desktop or mobile means (however, at least for me the response is no longer being spoken out loud — and I had earlier reports that the answer response was spoken on all capable platforms).

Let’s face it — what helps to make the original answer so great is the pacing and inflections of the excellent Google Home synthetic voice! It’s just not the same reading it as text.

There would seem to be only two possibilities for what’s going on.

One possibility is that the normal churning of Google’s algorithms dropped that answer from Home (and replaced it with the generic response) solely through ordinary programmed processes.

Of course, the other possibility is that after I publicized this brilliant, wonderful, and fully accurate spoken response, it was manually excised from Home by someone at Google for reasons of their own, about which I will not speculate here and now.

Either way, the timing of this change, only hours after my release of the related video, is — shall we say — fascinating. 

https://www.youtube.com/watch?v=58R2kEL6E6Q

–Lauren–

How Fake and False News Distort Google and Others

With all of the current discussions regarding the false and fake news glut on the Internet — often racist in nature, some purely domestic in origin, some now believed to be instigated by Putin’s Russia — it’s obvious that the status quo for dealing with such materials is increasingly untenable.

But what to do about all this?

As I have previously discussed, my general view is that more information — not less — is the best solution to these distortions that may have easily turned the 2016 election on its head.

Labeling, tagging, and downranking of clearly false or fake posts is an approach that can help reduce the tendency for outright lies to be treated equivalently with truth in social media and search engines. These techniques also avoid the actual removal of the lying items themselves, and the “censorship” issues that may then come into play (though private firms quite appropriately are indeed free to determine what materials they wish to permit and host — the First Amendment only applies to governmental restraints on speech in the USA).

How effective might such labeling be? Think about the labeling of “fake news” in the same sort of vein as the health warnings on cigarette packs. We haven’t banned cigarettes. Some people ignore the health warnings, and many people still smoke in the USA. But the number of people smoking has dropped dramatically, and studies show that those health warnings have played a major role in that decrease.

Labeling fake and false news to indicate that status — and there’s a vast array of such materials whose falsity cannot reasonably be disputed — could have a dramatic positive impact. Controversial? Yep. Difficult? Sure. But I believe that this can be approached gradually, starting with top trending stories and top search results.

A cure-all? No, just as cigarette health warnings haven’t been cure-alls. But many lives have still been saved. And the same applies to dealing with fake news and similar lies masquerading as truthful posts.

Naysayers suggest that it’s impossible to determine what’s true or isn’t true on the Internet, so any attempts to designate anything that’s posted as really true or false must fail. This is nonsense. And while I’ve previously noted some examples (Man landing on the moon, Obama born in Hawaii) it’s not hard to find all manner of politically-motivated lies that are also easy to ferret out as well.

For example, if you currently do a Google search (at least in the USA) for:

southern poverty law center

you will likely find an item on the first page of results (even before some of the SPLC’s own links) from the online Alt-Right racist rag Breitbart — whose traditional overlord Steve Bannon has now been given a senior role in the upcoming Trump administration.

The link says:

FBI Dumps Southern Poverty Law Center as Hate Crimes Resource

Actually, this is a false story, dating back to 2014 — one that was picked up from Breitbart and republished by an array of other racist sites that hate the SPLC's good work fighting both racism and hate speech.

Now, look elsewhere on that page of Google search results — then on the next few pages. No mention of the fact that the original story is false, or that the FBI itself issued a statement noting that it was still working with the SPLC on an unchanged basis.

Instead of anything to indicate that the original link is promoting a false story, what you’ll mostly find on succeeding pages is more anti-SPLC right-wing propaganda.

This situation isn’t strictly Google’s fault. I don’t know the innards of Google’s search ranking algorithms, but I think it’s a fair bet that “truth” is not a major signal in and of itself. More likely there’s an implicit assumption — which no longer appears to necessarily hold true — that truthful items will tend to rise to the top of search results via other signals that form inputs to the ranking mechanisms.

In this case, we know with absolute certainty that the original story on page one of those results is a continuing lie, and the FBI has confirmed this (in fact, anyone can look at the appropriate FBI pages themselves and categorically confirm this as well).

Truth matters. There is no equivalency between truth and lies, or otherwise false or faked information.

In my view, Google should be dedicated to the promulgation of widely accepted truths whenever possible. (Ironic side note: The horrible EU “Right To Be Forgotten” — RTBF — that has been imposed on Google, is itself specifically dedicated to actually hiding truths!)

As I've suggested, the promotion of truth over lies could be accomplished by downranking clearly false items, by labeling such items as (for example) "DEEMED FALSE" — perhaps along with a link to a page that provides specific evidence supporting that label — or both. (In the SPLC example under discussion, the relevant page of the FBI site would be an obvious link candidate.)
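As a purely illustrative sketch (none of this reflects Google's actual ranking internals; the `Result` record, the penalty value, and the function names are all my own inventions), the labeling-plus-downranking idea amounts to a post-processing pass over results, where items independently deemed false receive a visible label, an evidence link, and a rank penalty:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Result:
    title: str
    url: str
    score: float                        # base ranking score from other signals
    deemed_false: bool = False          # set by an external fact-check process
    evidence_url: Optional[str] = None  # link supporting the label

DEEMED_FALSE_PENALTY = 0.5  # illustrative: halve the score of labeled items

def annotate_and_rank(results: List[Result]) -> List[Result]:
    """Attach a visible warning label to deemed-false items and downrank them."""
    for r in results:
        if r.deemed_false:
            r.score *= DEEMED_FALSE_PENALTY
            r.title = "[DEEMED FALSE] " + r.title
    # Re-sort so penalized items fall below unlabeled ones of similar weight.
    return sorted(results, key=lambda r: r.score, reverse=True)

results = [
    Result("FBI Dumps Southern Poverty Law Center as Hate Crimes Resource",
           "http://example.com/fake-story", score=0.9,
           deemed_false=True, evidence_url="https://example.org/evidence"),
    Result("Southern Poverty Law Center", "https://www.splcenter.org", score=0.8),
]
ranked = annotate_and_rank(results)
```

The key point of the sketch is that the fact-check signal comes from outside the popularity machinery, so a widely clicked lie still carries its label and its penalty.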

None of this is simple. The limitations, dynamics, logistics, and all other aspects of moving toward promoting truth over lies in social media and search results will demand an enormous ongoing effort — but a critical one.

The fake news, filter bubbles, echo chambers, and hate speech issues that are now drowning the Internet are of such a degree that we need to call a major summit of social media and search firms, experts, and other concerned parties on a multidisciplinary basis to begin hammering out practical industry-wide solutions. Associated working groups should be established forthwith.

If we don’t act soon, we will be utterly inundated by the false “realities” that are being created by evil players in our Internet ecosystems, who have become adept at leveraging our technology against us — and against truth.

There is definitely no time to waste.

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Blocked by Lauren (“The Motion Picture”)

With nearly 400K Google+ followers, I’ve needed to block “a few” over the years to keep order in the comment sections of my threads. I’m frequently asked for that list — which of course is composed entirely of public G+ profile information. But as far as I know there is no practical way to export this data in textual form. However, when in doubt, make a video! By the way, I do consider unblocking requests, and frequently unblock previously blocked profiles as a result, depending on specific circumstances. Happy Thanksgiving!

https://www.youtube.com/watch?v=GX79fYTSjFE

–Lauren–

Facebook, Google, Twitter, and Others: Start Taking Hate Speech Seriously!

Recently, in Crushing the Internet Liars, I discussed issues relating to the proliferation of “fake news” on the Internet (via social media, search, and other means) and the relationship of personalization-based “filter bubbles” and “echo chambers” — among other effects.

A tightly related set of concerns, also rising to prominence during and after the 2016 election, involves the even broader problems of Internet-based hate speech and harassment. The emboldening of truly vile "Alt-Right" and other racist, antisemitic white supremacist groups and users in the wake of Trump's election has greatly exacerbated these continuing offenses to ethics and decency (which in some cases represent actual violations of law).

Lately, Twitter has been taking the brunt of public criticism regarding harassment and hate speech — and their newly announced measures to supposedly combat these problems seem to be mostly counterproductive "ostrich head in the sand" tools that would permit offending tweets to continue largely unabated.

But all social media suffers from these problems to one degree or another, and I feel it is fair to say that no major social media firm really takes hate speech and harassment seriously — or at least as seriously as ethical firms must.

To be sure, all significant social media companies provide mechanisms for reporting abusive posts. Some systems pair these with algorithms that attempt to ferret out the worst offenders proactively (though hate users seem to quickly adapt to bypass these as rapidly as the algorithms evolve).

Yet one of the most frequent questions I receive regarding social media is “How do I report an abusive posting?” Another is “I reported that horrible posting days ago, but it’s still there, why?”

The answer to the first question is fairly apparent to most observers — most social media firms are not particularly interested in making their abuse reporting tools clear, obvious, and plainly visible to both technical and nontechnical users of all ages. Often you must know how to access posting submenus to even reach the reporting tools.

For example, if you don’t know what those three little vertical dots mean, or you don’t know to even mouse over a posting to make those dots appear — well, you’re out of luck (this is a subset of a broader range of user interface problems that I won’t delve into here today).

The second question — why aren’t obviously offending postings always removed when reported — really needs a more complex answer. But to put it simply, the large firms have significant problems dealing with abusive postings at the enormous scales of their overall systems, and the resources that they have been willing to put into the reporting and in some cases related human review mechanisms have been relatively limited — they’re just not profit center items.

They're also worried about false abuse reports, of course — either purposeful or accidental — and one excuse for "hiding" the abuse reporting tools may be to try to reduce those types of reports from users.

All that having been said, it’s clear that the status quo when it comes to dealing with hate speech or harassing speech on social media is no longer tenable.

And before anyone has a chance to say, “Lauren, you’re supposed to be a free speech advocate. How can you say this?”

Well, it’s true — I’m a big supporter of the First Amendment and its clauses regarding free speech.

But what is frequently misunderstood is that this only applies to governmental actions against free speech — not to actions by individuals, private firms, or other organizations that are not governmental entities.

This is one reason why I’m so opposed to the EU’s horrific “Right To Be Forgotten” (RTBF) — it’s governments directly censoring the speech of third parties. It’s very wrong.

Private firms though most certainly do have the right to determine what sorts of speech they choose to tolerate or support on their platforms. That includes newspapers, magazines, conventional television networks, and social media firms, to name but a few.

And I assert that it isn’t just the right of these firms to stamp out hate speech and harassment on their platforms, but their ethical responsibility to do so as well.

Of course, if the Alt-Right or other hate groups (and certainly the right-wing wackos aren’t the only offenders) want to establish their own social media sites for that subset of hate speech that is not actually illegal — e.g. the “Trumpogram” service — they are free to do so. But that doesn’t mean that the Facebooks, Googles, and Twitters of the world need to permit these groups’ filth on their systems.

Abusive postings in terms of hate speech and harassing speech certainly predate the 2016 election cycle, but the election and its aftermath demonstrate that the major social media firms need to start taking this problem much more seriously — right now. And this means going far beyond rhetoric or public relations efforts. It means the implementation of serious tools and systems that will have real and dramatic impacts on helping to stamp out the postings of the hate and other abuse mongers in our midst today.

–Lauren–

Unacceptable: How Google Undermines User Trust by Blocking Users from Their Own Data

UPDATE (November 18, 2016): After much public outcry, Google has now reversed the specific Pixel-related Google account bans noted in this post. Unfortunately, the overall Whose data is it? problem discussed in this post persists, and it’s long since time for Google to appropriately address this issue, which continues to undermine public user trust in a fine company.

– – –

There are times when Google is in the right. There are times when Google is in the wrong. By far, they’re usually on the angels’ side of most issues. But there’s one area where they’ve had consistent problems dating back for years: Cutting off users from those users’ own data when there’s a dispute regarding Google Account status.

A new example of this recurring problem — an issue about which I’ve heard from large numbers of Google users over time — has just surfaced. In this case, it involves the reselling of Google Pixel phones in a manner that apparently violates the Google Terms of Service, with the result that a couple of hundred users have reportedly been locked out of their Google accounts and all of their data, at least for now. 

This means that they’re cut off from everything they’ve entrusted to Google — mail, documents, photos, the works.

Here and now, I’m not going to delve into the specifics of this case — I don’t know enough of the details yet. The entire area of Google accounts suspension, closure, recovery, and so on is complex to say the least. Most times (but not always) users are indeed at fault — one way or another — when these kinds of events are triggered. And the difficulty of successfully appealing a Google account suspension or closure has become rather legendary.

Even recovering a Google account due to the loss of a password can be difficult if you haven't taken proactive steps to aid in that process ahead of time — steps that I've previously discussed in detail.

But the problem of what happens to users’ data when they can’t access their accounts — for whatever reasons — is something that I’ve personally been arguing with Google about literally for years, without making much headway at all.

Google has excellent mechanisms for users to download their data while they still have account access. Google even has systems for you to specify someone else who would have access to your account in case of emergency (such as illness or accident), and policies for dealing with access to accounts in case of the death of an account holder.

The reality though, is that users have been taught to trust Google with ever more data that is critical to their lives, and most people don’t usually think about downloading that data proactively.

So when something goes wrong with their account, and they lose access to all of that data, it’s like getting hit with a ton of bricks.

Again, this is not to say that users aren’t often — in fact usually — in the wrong (at least in some respect) when it comes to account problems. 

But unless there is a serious — and I mean serious, like child pornography — criminal violation of law, my view is that in most circumstances users should have some means to download their data from their account even if it has been suspended or closed for good cause.

If they can’t use Google services again afterwards, them’s the breaks. But it’s still their own data we’re talking about, not Google’s.

Google has been incredibly resistant to altering this aspect of their approach to user account problems. I am not ignorant of their general reasoning in this category of cases — but I strongly believe that they are wrong in applying an essentially one-size-fits-all "death penalty" regime in this context.

Nobody is arguing that there aren’t some specific situations where blocking a violating user (or a user accused of violations) from accessing their data on Google services is indeed justified. But Google doesn’t seem to have any provisions for anything less than total data cutoff when there’s a user account access issue, even when significant relevant legal concerns are not involved.

This continuing attitude by Google does not engender user trust in Google’s stewardship of user data, even though most users will never run afoul of this problem.

These kinds of actions by Google provide ammunition to the Google Haters and are self-damaging to a great firm and the reputation of Googlers everywhere, some of whom have related to me their embarrassment at trying to explain such stories to their own friends and families.

Google must do better when it comes to this category of user account issues. And I'll keep arguing this point until I'm blue in the face and my typing fingertips are bruised. C'mon Google, please give me a break!

–Lauren–

Crushing the Internet Liars

Frankly, this isn’t the post that I had originally intended. I had a nearly completed blog draft spinning away happily on a disk, a draft that presented a rather sedate, scholarly, and a bit introspective discussion of how Internet-based communications evolved to reach the crisis point we now see regarding misinformation, filter bubbles, and so-called echo chambers in search and social media.

I just trashed that draft. Bye!

Numerous times over the years I’ve tried the scholarly approach in various postings regarding the double-edged sword of Internet personalization systems — capable of bringing both significant benefits to users but also carrying significant and growing risks.

Well, given where we stand today after the 2016 presidential election, it appears that I might have just as well been doing almost anything else rather than bothering to write that stuff. Toenail trimming would have likely been a more fruitful use of my time.

So now — today — we must deal with this situation while various parties are hell-bent toward turning the Internet into a massive, lying propaganda machine to subvert not only the democratic process, but our very sense of reality itself.

Much of this can be blamed on the concept of “false equivalency” — which runs rampant on cable news, mainstream Internet news sites, and throughout social media such as Facebook (which is taking the brunt of criticism now — and rightly so), plus on other social media ecosystems.

Fundamentally, this idea holds that even if there is widespread agreement that a particular concept is fact, you are somehow required to give “equal time” to wacko opposing views.

This is why you see so much garbage prominently surfaced from nutcases like Alex Jones — who believes the U.S. government blew up the World Trade Center buildings — or Donald Trump and his insane birther attacks on Obama, that Trump used to jump-start his presidential campaign. It doesn’t take more than half a brain to know that such statements are hogwash.

To be sure, it’s difficult to really know whether such perpetually lying creatures actually believe what they’re saying — or are simply saying outrageous things as leverage for publicity. In the final analysis though, it doesn’t much matter what their motives really are, since the damage done publicly is largely equivalent either way.

The same can be said for the wide variety of fake postings and fake news sites that increasingly pollute the Net. Do they believe what they say, or are they simply churning out lies on a twisted “the end justifies the means” basis? Or are they just sick individuals promoting hate (often racism and antisemitism) and chaos? No doubt all of the above apply somewhere across the rapidly growing range of offenders, some of whom are domestic in nature, and some who are obviously operating under the orders of foreign leaders such as Russia’s Putin.

Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.

And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see follow-ups that expose the lying posts as who saw the original lies themselves.

Facebook's Mark Zuckerberg is publicly denying that Facebook has a significant role in the promotion of lies. He denies that Facebook's algorithms for controlling which postings users see create echo chambers where users see only what they already believe, causing lies and distortions to spread ever more widely without truth having a chance to invade those chambers. But Facebook's own research tells a very different story, because Facebook insists that exactly those kinds of controlling effects occur to the benefit of Facebook's advertisers.

Yet this certainly isn’t just a Facebook problem. It covers the gamut of social media and search.

And the status quo can no longer be tolerated.

So where do we go from here?

Censorship is not a solution, of course. Even the looniest of lies, assuming the associated postings are not actually violating laws, should not be banned from visibility.

But there is a world of difference between a lying post existing, vis-a-vis the actual widespread promotion of those lies by search and social media.

That is, simply because a lying post is being viewed by many users, there’s no excuse for firms’ algorithms to promote such a post to a featured or other highly visible status, creating a false equivalency of legitimacy by virtue of such lies being presented in close proximity to actual facts.

This problem becomes particularly insidious when combined with personalization filter bubbles, because the true facts are prevented from penetrating users’ hermetically sealed social media worlds that have filled with false postings.

And it gets worse. Mainstream media in a 24/7 news cycle is hungry for news, and all too often now, the lies that germinate in those filter bubbles are picked up by conventional media and mainstream news sites as if they were actual facts. And given the twin pressures of reduced budgets and the race to beat other venues to the punch, such lies are frequently broadcast by these sites without any significant prior fact-checking at all.

So little by little, our sense of what is actually real — the differences between truth and lies — becomes distorted and diluted.

Again, censorship is not the answer.

My view is that more information — not less information — is the path toward reducing the severity of these problems.

Outright lies must not continue to be given the same untarnished prominence as truth in search results and in widely seen social media postings.

There are multiple ways to achieve this result.

Lying sites in search results can be visibly and prominently tagged as such in those results, be downranked, or both. Similar principles can apply to widely shared social media posts that currently are featured and promoted by social media sites primarily by virtue of the number of persons already viewing them. Because — let's face it — people love to view trash. Lots of users viewing and sharing a post does not make it any less of a lie.
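As one purely illustrative sketch (the function, the flag, and the threshold here are my own inventions, not any platform's actual policy), promotion logic could make fact-check status a hard gate that raw popularity cannot override:

```python
PROMOTION_THRESHOLD = 10_000  # illustrative engagement bar, not a real figure

def eligible_for_promotion(share_count: int, flagged_false: bool) -> bool:
    """A post is featured only on high engagement AND a clean fact-check
    status; popularity alone cannot rescue a flagged falsehood."""
    return share_count >= PROMOTION_THRESHOLD and not flagged_false
```

Under this rule, a viral lie with half a million shares stays out of the featured slot, while a truthful post still has to clear the ordinary engagement bar; the lie remains visible and unbanned, it simply isn't amplified.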

As always, the devil is in the details.

This will be an enormously complex undertaking, involving technology, policy, public relations, and the law. I won’t even begin to delve into the details of all this here, but I believe that with sufficient effort — effort that we must now put forth — this is a doable concept.

Already, whenever such concepts are brought up, you quickly tend to hear the refrain: “Who are you to say what’s a fact and what’s a lie?”

To which I reply: “To hell with false equivalences!”

Man landed on the moon. Obama was born in Hawaii. Terrorists destroyed the World Trade Center with jet aircraft. Hillary Clinton never said she was going to abolish the Second Amendment. Donald Trump did say that he supported the war in Iraq. Denzel Washington did not say that he supported Trump. On and on and on.

There is a virtually endless array of stated facts that reasonable people will agree are correct. And if the nutcases want to promote their own twisted views on these subjects, that's also fine — but those postings should be clearly labeled for what they are, not featured and promoted. As the saying goes, they're free to have their own opinions, but not their own facts.

Obviously, this leaves an enormous range of genuinely disputed issues where the facts are not necessarily clear, often where only opinion and/or philosophy really apply. That’s fine too. They’re out of scope for these discussions or efforts.

But the outright Internet liars must be crushed. They shouldn’t be censored, but they must no longer be permitted to distort and contaminate reality by being treated on an equal footing with truth by major search and social media firms.

We built the Internet juggernaut. Now it’s our job to fix it where it’s broken.

–Lauren–

President Trump and the Nuclear Launch Codes


–Lauren–

Elections and the Internet “Echo Chambers”

Back in a 2010 blog post, I noted the kinds of "echo chamber" effects that can result from the personalization and targeting of various types of information on the Web. That posting concentrated on search personalization, but also noted the impact on Internet-based discussions — a situation that has become dramatically more acute with the continuing rise of social media. Given the current controversies regarding how "filter bubbles" and other algorithmically driven information surfacing and restriction systems may impact users' views and potentially increase political and religious radicalization — particularly in relation to the 2016 elections here in the USA — I believe it is relevant to republish that posting, which is included below. Also of potential interest is my recently reposted item related to Internet fact checking.

Search Personalization: Blessing and Trap?
(Original posting date: September 16, 2010)

Greetings. Arguably the holy grail of search technology — and of many other aspects of Internet-based services today — is personalization. Providing users with personalized search suggestions, search results, news items, or other personalized services as quickly as possible, while filtering out "undesired" information, is a key focus not only of Google but of other enterprises around the world.

But does too much reliance on personalization create an “echo chamber” effect, where individuals are mainly (or perhaps totally) exposed to information that only fits their predetermined views? And if so, is this necessarily always beneficial to those individuals? What about for society at large?

Diversity of opinions and information is extremely important, especially today in our globally interconnected environment. When I do interviews on mainstream radio programs about Internet issues, it’s usually on programs where the overall focus is much more conservative than my own personal attitudes. Yet I’ve found that even though there’s often a discordance between the preexisting views of most listeners and my own sentiments, I typically get more insightful questions during those shows than in the venues where I spend most of my time online.

And one of the most frequent questions I get afterwards from listeners contacting me by email is: “How come nobody explained this to me that way before?”

The answer usually is that personalized and other limited focus information sources (including some television news networks) never exposed those persons to other viewpoints that might have helped them fully understand the issues of interest.

An important aspect of search technology research should include additional concentration on finding ways to avoid potential negative impacts from personalized information sources — particularly when these have the collateral effect of “shutting out” viewpoints, concepts, and results that would be of benefit both to individuals and to society.

Overall, I believe that this is somewhat less of a concern with “direct” general topic searches per se, at least when viewed as distinct from search suggestions. But as suggestions and results become increasingly commingled, this aspect also becomes increasingly complex. (I’ve previously noted my initial concerns in this respect related to the newly deployed Google Instant system).

Suggestions would seem to be an area where "personalization funneling" (I may be coining a phrase with this one) would be of more concern. And in the world of news searches, as opposed to general searches, there are particularly salient related issues to consider. (Thought experiment: if you get all of your information from FOX News, what important facts and contexts are you probably missing?)

While there are certainly many people who (for professional or personal reasons) make a point to find and cultivate varied and opposing opinions, not doing so becomes much easier — and seemingly more “natural” — in the Internet environment. At least the possibility of serendipitous exposure to conflicting points of view was always present when reading a general audience newspaper or magazine, for example. But you can configure many Web sites and feeds to eliminate all but the narrowest of opinions, and some personalization tools are specifically designed to enhance this effect.

As our search and related tools increasingly focus on predicting what we want to see and avoiding showing us anything else (which naturally enough makes sense if you want to encourage return visits and show the most “attractive” ads to any given individual), the funneling effect of crowding out other materials of potential value appears to be ever more pronounced.

Add to that the “preaching to the choir” effect in many Internet discussions. True, there are forums with vibrant exchanges of views and conflicting opinions. But note how much of our Twitter and Buzz feeds are depressingly dominated by a chorus of “Attaboy!” yells from “birds of a feather” like-minded participants.

I am increasingly concerned that technologically-based Internet personalization — despite its many extremely positive attributes — also carries with it the potential for significant risks that are apparently not currently receiving the research and policy attention that they deserve.

If we do choose to assign some serious thinking to this dilemma, we certainly have the technological means to adjust our race toward personalization in ways that would help to balance out the equation.

This definitely does not mean giving up the benefits of personalization. However, we can choose to devote some of the brainpower currently focused on figuring out what we want to see, and work also toward algorithms that can help determine what we need to see.

In the process, this may significantly encourage society’s broader goals of cooperation and consensus, which of necessity require — to some extent at least — that we don’t live our entire lives in confining information silos, ironically even while we’re surrounded by the Internet’s vast buffet of every possible point of view.

 – – –

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Why Google Home Will Change the World

Much has recently been written about Google Home, the little vase-like cylinder that started landing in consumers’ hands only a week or so ago. Home’s mandate sounds simple enough in theory — listen to a room for commands or queries, then respond by voice and/or with appropriate actions.

What hasn't been much discussed, however, is how the Home ecosystem is going to change the lives of millions to billions of people for the better over time, in ways that most of us couldn't even imagine today. It will drastically improve the lives of vast numbers of persons with visual and/or motor impairments, but ultimately will dramatically and positively affect the lives of everyone else as well.

Home isn't the first device in this technology segment — nor is it the least expensive. Amazon came earlier and offers a more limited version that is cheaper than Home (and a model more expensive than Home as well).

But while Amazon's device seems to have been designed with buying stuff on Amazon as its primary function, Google's Home — backed by Google's enormously more capable corpus of information, accurate speech recognition, and AI capabilities — stands to quickly evolve to far outpace Amazon's offering along all vectors.

This holds true even if we leave aside the six-month free subscription to Google's excellent ad-free "YouTube Red/Google Play Music" package, which Google included with my Home shipment here in the USA — knowing that once you've tasted the ability to play essentially any music and any YouTube videos at any time just by speaking to the air, you'll have a difficult time living without it. I've had Home for a week and I'm finally listening to great music of all genres again — I know that I'll be subscribing when my free term runs out.

You can dig around a bit and easily find a multitude of reviews that discuss specifics of what Home does and how you use it, so I’m not going to spend time on that here, other than to note that like much advanced technology that is simple to operate, the devilishly complex hardware and software design aspects won’t be suspected or understood by most users — nor is there typically a need for them to do so.

But what I’d like to ponder here is why this kind of technology is so revolutionary and why it will change our world.

Throughout human history, pretty much any time you wanted information, you had to physically go to it in one way or another. Dig out the scroll. Locate the book. Sit down at the computer. Grab the smartphone.

The Google Home ecosystem is a sea change. It’s fundamentally different in a way that is much more of a giant leap than the incremental steps we usually experience with technology.

Because for the first time in most of our experiences, rather than having to go to the information, the information is all around us, in a remarkably ambient kind of way.

Whether you’re sitting at a desk at noon or in bed sleepless in the middle of the night, you have but to verbally express your query or command, and the answers, the results, are immediately rendered back to you. (Actually, you first speak the “hotword” — currently either “Hey Google” or “OK Google” — followed by your command or query. Home listens locally for the hotword and only sends your following utterance up to Google for analysis when the hotword triggers — which is also indicated by lights on the Home unit itself. There’s also a switch on the back of the device that will disable the microphone completely.)
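The hotword-gating flow described above can be sketched in a few lines. To be clear, this is a hypothetical illustration of the privacy model only — the real device matches acoustic patterns on-device, not transcribed text, and the helper names here are invented:

```python
# Sketch of the hotword-gating privacy model: audio is analyzed locally,
# and only speech following a detected hotword would be sent upstream.
# This is an illustration, not the actual on-device implementation.

HOTWORDS = ("hey google", "ok google")

def local_hotword_match(utterance: str) -> bool:
    """Local-only check: does the utterance begin with a hotword?"""
    text = utterance.lower().strip()
    return any(text.startswith(h) for h in HOTWORDS)

def process_audio(utterances, mic_enabled=True):
    """Return only the queries that would leave the device for analysis."""
    sent = []
    if not mic_enabled:  # hardware mic switch: nothing leaves the device
        return sent
    for utterance in utterances:
        if local_hotword_match(utterance):
            text = utterance.lower().strip()
            for h in HOTWORDS:
                if text.startswith(h):
                    # Only the command following the hotword goes upstream.
                    sent.append(text[len(h):].strip())
                    break
    return sent
```

Room chatter without a hotword never leaves the device in this model, and the physical mic switch overrides everything.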

It’s difficult to really express how different this is from every other technology-based information experience. Within a matter of hours of usage, one quickly begins to think of Home as a kind of friendly ethereal entity at your command, utterly passive until invoked. It becomes very natural to use — the rapid speed of adaptation to Home is perhaps not so remarkable when you consider that speech is the human animal’s primary evolved mode of communication. Speech works with other humans, to some extent with our pets and other animals — and it definitely works with Google Home.

Most of the kinds of commands and queries that you can give to Home can also be given to your smartphone running Google’s services — in fact they both basically access the same underlying “Google Assistant” systems.

But when (for example) information and music are available at any time, at the spur of the moment, for any need or whim — just by speaking wherever you happen to be in a room and no matter the time of day — it’s really an utterly different emotional effect.

And it’s an experience that can easily make one realize that the promised 21st century really has now arrived, even if we still don’t have the flying cars.

The sense of science fiction come to life is palpable.

The Google teams who created this tech have made no secret of the fact that the computers of “Star Trek” have been one of their key inspirations.

There are various even earlier sci-fi examples as well, such as the so-called “City Fathers” computers in James Blish’s “Cities in Flight” novels.

It’s obvious how Google Home technology can assist the blind, persons with other visual impairments, and a wide variety of individuals with mobility restrictions.

Home’s utility in the face of simple aging (and let’s face it, we’re all either aging or dead) is also immense. As I noted back in As We Age, Smartphones Don’t Make Us Stupid — They’re Our Saviors, portable information aids can be of great value as we get older.

But Home’s “always available” nature takes this to an entirely new and higher level.

The time will come when new homes will be built with such systems designed directly into their walls, and when people may feel a bit naked in locations where such capabilities are not available. And in fact, in the future this may be the only way that we’ll be able to cope with the flood of new and often complex information that is becoming ever more present in our daily lives.

Perhaps most telling of all is the fact that these systems — as highly capable as they are right now — are only at the bare beginnings of their evolution, an evolution that will reshape the very nature of the relationship between mankind and access to information.

If you’re interested in learning more about all this, you’re invited to join my related Google+ Community which is covering a wide range of associated topics.

Indeed — we really are living in the 21st century!

–Lauren–
I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Google Search Results and Fact Checking

With so many discussions now raging regarding the impacts of misinformation on the Internet — including in relation to the 2016 election — I’m reposting below a blog item of mine from 17 June 2007 — “Extending Google Blacklists for Dispute Resolutions” — that may still be considered relevant today.

At that time, I was framing this overall issue in terms of disputed search results — I would later propose this kind of framework as a possible alternative to the horrific EU “Right To Be Forgotten” censorship concept.

We now would likely include most of these issues under the broader umbrella of “fact checking” concepts.

Extending Google Blacklists for Dispute Resolutions
(Original posting date: June 17, 2007)

Greetings. In a very recent blog item, I discussed some issues regarding search engine dispute resolution, and posed some questions about the possibility of “dispute links” being displayed with search results to indicate serious disputes regarding the accuracy of particular pages, especially in cases of court-determined defamation and the like.

While many people appear to support this concept in principle, the potential operational logistics are of significant concern. As I originally acknowledged, it’s a complex and tough area, but that doesn’t make it impossible to deal with successfully either.

Some other respondents have taken the view that search engines should never make “value judgments” about the content of sites, beyond what is done (which is substantial) for result-ranking purposes.

What many folks may not realize is that in the case of Google at least, such more in-depth judgments are already being made, and it would not necessarily be a large leap to extend them toward addressing the dispute resolution issues I’ve been discussing.

Google already puts a special tag on sites in its results that Google believes contain damaging code (“malware”) that could disrupt user computers. Such sites are tagged with the notice “This website may damage your computer.” — and the associated link is not made active (that is, you must enter the URL manually or copy/paste it to access that site — you cannot just click).

Also, in conjunction with Google Toolbar and Firefox 2, Google collects user feedback about suspected phishing sites, and can display warnings to users when they are about to access potentially dangerous sites on these lists.

In both of these cases, Google is making a complex value judgment concerning the veracity of the sites and listings in question, so it appears that this horse has already left the barn — Google apparently does not assert that it is merely a neutral organizer of information in these respects.

So, a site can be tagged by Google as potentially dangerous because it contains suspected malware, or because it has been reported by the community to be an apparent phishing site. It seems reasonable, then, for a site that has been determined (by a court or other agreed-upon means) to contain defamatory or otherwise seriously disputed information to also be potentially subject to similar tagging (e.g., with a “dispute link”).
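One minimal way to picture extending the existing malware and phishing tagging to a “disputed” category is sketched below. The data structures, warning texts, and hostnames here are all invented for illustration; this is not Google’s actual mechanism:

```python
# Illustrative sketch: annotating search results from category blacklists,
# extending the malware/phishing treatment to a "disputed" category.
# All sets, hostnames, and warning strings here are hypothetical.

MALWARE_SITES = {"badsite.example"}
PHISHING_SITES = {"fakebank.example"}
DISPUTED_SITES = {"defamer.example"}  # e.g., flagged via court determination

WARNINGS = {
    "malware": "This website may damage your computer.",
    "phishing": "This website has been reported as a phishing site.",
    "disputed": "The accuracy of this page is formally disputed.",
}

def annotate(result_host: str) -> dict:
    """Return the result host plus any warning tag and link policy."""
    if result_host in MALWARE_SITES:
        category = "malware"
    elif result_host in PHISHING_SITES:
        category = "phishing"
    elif result_host in DISPUTED_SITES:
        category = "disputed"
    else:
        return {"host": result_host, "warning": None, "clickable": True}
    # Mirror the malware treatment: show a notice; deactivate the link.
    return {"host": result_host,
            "warning": WARNINGS[category],
            "clickable": False}
```

The point is simply that adding a third category to an already-existing tagging pipeline is an incremental change, not a new kind of system.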

Pages that contain significant, purposely false information, designed to ruin people’s reputations or cause other major harm, can be just as dangerous as phishing or malware sites. They may not be directly damaging to people’s computers, but they can certainly be damaging to people’s lives. And presumably we care about people at least as much as computers, right?

So I would assert that the jump to a Google “dispute links” mechanism is nowhere near as big a leap from existing search engine results as it may first appear to be.

In future discussion on this topic, I’ll get into more details of specific methodologies that could be applicable to the implementation of such a dispute handling system, based both within the traditional legal structure and through more of a “Web 2.0” community-based topology.

But I wanted to note now that while such a search engine dispute resolution environment could have dramatic positive effects, it is fundamentally an evolutionary concept, not so much a revolutionary one.

More later. Thanks as always.

 – – –

–Lauren–

Why Google Tops Trump’s Technology Enemies List

As something of a student of the great Chinese general Sun Tzu, who lived from around 544 BC to 496 BC, I have long agreed with one of the most famous statements attributed to him:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

With that truism in mind, throughout the last few months of the 2016 election season I have kept channels of communication open with persons directly knowledgeable of soon-to-be President Trump’s handlers’ views on technology policy.

We know that Trump himself is a dilapidated dunce, bending in whatever direction the current breezes seem to blow from minute to minute. But the advisers holding his leash — who will ultimately set the policy objectives for this senile swine (no offense meant to actual hogs or pigs!) — have very definite views that they intend to push into Trump’s orbit. For all practical purposes, we can expect these to fill the empty vessel of Trump’s skull and become essentially his own.

The laundry list of attacks that they have planned is long and diverse — essentially a war against all manner of science and technology, and against anybody supporting scientific concepts that conflict with the worldview of garden-variety racist, sexually abusive criminals like Trump himself.

Clues about various of these plans have already been dropped publicly, mostly by Trump’s minions, but occasionally buried within the incoherent rambling rants of Trump himself, which provide useful verification.

Pretty much at the top of Trump’s technology-related enemies list is Google. The Trump team despises Google with a ferocious antipathy.

Google represents pretty much everything that Trump and his team hate: Information that Trump and his associates can’t control. Intelligent, largely liberal-leaning employees for whom facts and data are not overridden by political exigencies of the moment. Privacy and security teams who won’t bend over and grab their ankles whenever anyone in the government simply says “jump” without appropriate legal authority. And so on.

Trump’s people have a plan to rein in Google. They’ll be going after other service providers as well, but Google would be their biggest prize by far.

The Trump team’s plan to control Google will be on several fronts.

With the assistance of a cooperative GOP Congress and a Supreme Court that will soon have at least one and perhaps three or more right-wing Trump appointees, Trump’s crew will be pressing hard for rules that ban end-to-end encryption, using the usual national security excuse as the main argument, while sweeping aside all “this actually makes us less safe” arguments.

This push will also include the ability for the government to have essentially “on demand” access to any or all server data at Google (and all other significant web firms), based on the models provided by Trump’s master Putin, and to some extent also the Chinese.

Trump has also become incensed at Google search results that don’t toe the line of his own demented and twisted worldview, and intends to push legislation that would permit government control over search results in a wide variety of circumstances — in this instance using national security, law enforcement, copyright claims, and “save the children” arguments.

The Trump team feels that these efforts will dovetail nicely with broader free speech controls that they plan to aim at mass media, particularly news outlets — there is also talk of attempting to impose horrific EU-style “Right To Be Forgotten” laws here in the U.S. — using this aspect in particular to try to suck Google haters over to Trump’s side for the broader legislative efforts.

And if all of this sounds like some sort of fantasy on the Trump side — couldn’t happen with the First Amendment in their way! — think again!

Other than the Second Amendment, the Trumpians are at best indifferent to most aspects of the Constitution in general and the Bill of Rights in particular.

They believe that they can forge coalitions that will enable them to decimate the First Amendment, leveraging their control over all three branches of government — executive, legislative, and judicial. They believe that their Deplorables — their voters — will cheer Trump on in his efforts to decimate Google, eliminate what Trump and company feel are “undesirable freedoms” aspects of the Internet, and in general impose a speech regime as close to Putin’s model as possible.

But Trump isn’t president quite yet. We still have a bit of time to work with, and there are some approaches that can limit the damage that Trump can do, at least to various extents.

Some of these we will be discussing on my new Saving Science & Tech from Trump Google+ community.

Some discussions will by necessity need to be more private.

One thing’s pretty much certain, however. Donald Trump and his administration hope to roll back the USA effectively to somewhere around 1950 in terms of color, creed, and knowledge. 

If we don’t wish to see the technological works of our lifetimes similarly decimated, we must take action immediately.

–Lauren–

We Stopped Herr Hitler — Now We Must Stop Something Potentially Far Worse: President Trump

G+ Community: Saving Science & Tech from Trump

– – –

As I write these words late in the evening of 8 November 2016, Donald Trump has become the president-elect of the United States.

In 1933, a man named Adolf Hitler, who by all accounts was far more intelligent, refined, educated, and self-controlled than one Donald J. Trump, was appointed chancellor of Germany, a country that at the time was among the world’s leaders in arts and science. Within a few years, he dragged Germany into a maelstrom of racism, death, and horror, with few German fingers raised to stop him.

Luckily, though he was on the path to do so, Hitler never obtained operational nuclear weapons. Nor for that matter was he known to brag about committing sexual assault. He was many horrible things, but he was not an ignoramus.

On the other hand, Hitler’s supporters and Trump’s supporters are very much one of a kind, and history teaches clearly that giving any quarter to such monsters is the fastest route to total annihilation.

We will in coming hours and days hear much talk — as did the citizens of 1933 Germany — about “coming together” for the sake of our country.

When it comes to a President Trump, I reject such calls, and I assert that all ethical Americans should do the same.

To “come together” with such an ignorant and lying man and his minions — a man who is a proponent of sexual assault, of torture, of deep-seated racism and antisemitism — a man who mocks the disabled, who doesn’t believe in science, and who encourages mindless violence and restrictions on freedom of speech — is to lend tacit if not active approval to such abominable attitudes and behaviors. This is a binary decision — there is no middle ground. You either accept the evil and join it — or you fight against it body and soul.

There is a long list of villains — some knowing, some “merely” complicit — who have enabled the rise of the ultimate, perverted horror of a President-Elect Trump.

These include (in no particular order and merely to mention a few): FBI Director James Comey, Vladimir Putin, Julian Assange, news organizations like those of CNN and CBS who played crucial roles in Trump’s rise, Bernie Sanders and his followers along with third-party candidates, and yes, we of the Internet and social media, who provided the means for echo chamber exacerbation of racism and fake news to multiply without bounds in the name of profits.

There is no coming together with the likes of a President Trump and his storm troopers, any more than there can be a coming together with a pit full of lethal cobras, spiders, and rabid hyenas.

All legal means must be employed to stop the damage that a President Trump could and would do to this country and the world. This may include both vast civil disobedience and the leveraging of the technology that we control toward limiting the ability of a President Trump and his appointees to destroy what’s great about the United States of America and the rest of this planet.

A hideous monster like a President Trump, combined with a totally GOP-controlled Congress and likely multiple Supreme Court nominations, empowered by USA military and nuclear capabilities, could easily make Hitler’s Reich look like a playground by comparison.

I had hoped — in fact I had already planned and publicly noted my intention — to move away from political content postings after this election. I realize now that this will be impossible. I apologize for raising your hopes about this unnecessarily.

I am no longer a young man. I do not intend to sit by for the time I have remaining while simply pontificating about the niceties of technology and tech policy while this country is dragged down into a nightmare that would likely even terrify Adolf Hitler himself.

I will be endeavoring to use any and all legal means available — political, technical, and more — to accomplish as effective a figurative “neutering” as possible of a President Trump and all individuals associated with him, to limit the damage that he and his Deplorables can do to this already great nation.

There cannot be “business as usual” in the face of the existential threat represented by Trump.

I welcome you to join me in this effort.

But if you feel that you will be offended or otherwise upset by my use of my various venues and lists for such purposes — which will now likely be escalating dramatically — I urge you to unfollow or unsubscribe from me now.

We are faced with a form of total war. This war must be fought via legal and peaceful means, so long as we ourselves and our fellow Americans are not threatened with illegal actions or violence by a President Trump or his thugs.

Together, we shall ultimately prevail against the epitome of ignorance and evil that is Donald J. Trump.


–Lauren–

Asking Google Home About George Carlin’s “Seven Dirty Words”

What actually happens when you ask the newly released Google Home Appliance about legendary comedian George Carlin’s famous Seven Words That You Can Never Say on Television? Yeah, let’s give this a try. It turns out that the precise wording of this query seems to be fairly critical. No pun intended. I have not modified the answer in any manner.

UPDATE: The “Google modified” list presented in the audio linked below may apparently only be presented to Google Home Appliance users (even reportedly when filters are disabled), perhaps out of fear that persons in the room might be offended by the “spoken out loud” response. A “pure” list appears to be more routinely presented to users who make the same query by phone (to the same underlying Google Assistant system). Fascinating.

[Audio: Google Home’s response to the query]

–Lauren–

Unreadable Webpages and Crummy Electricity

Hmm. I thought I’d been explicit about this in earlier postings about Google’s New Blogs and other webpages, but apparently not explicit enough. So let’s try again.

Whenever I discuss the problems of the increasing unreadability of webpages, due to font choices, low contrast, and other “form over function” web design choices, I inevitably receive email from folks offering me “helpful” hints to bypass those poorly and shortsightedly designed pages.

Run this theme! Edit this style sheet! Install this add-on! Use this RSS reader! Switch to this browser! And so on …

The thing is — trust me — I already know how to do all this stuff.
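For what it’s worth, the “low contrast” complaint isn’t merely a matter of taste; it is objectively measurable. Here is a minimal sketch of the standard WCAG 2.x contrast-ratio computation (this is the W3C’s published formula, not anything specific to Google’s pages):

```python
# WCAG 2.x contrast ratio: the standard, objective measure of the
# "low contrast" problem. Formula per the W3C WCAG relative-luminance
# definition; the example colors below are illustrative.

def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) color."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

WCAG AA requires at least 4.5:1 for normal body text. Black on white yields the maximum 21:1, while a fashionable light gray like (170, 170, 170) on white falls well below the 4.5:1 threshold — which is exactly the sort of design choice at issue here.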

I’m not the one I’m concerned about. It’s average users — who read pages in their native formats on the most popular browsers — who are being increasingly disadvantaged.

And most of these users don’t know about these workarounds, and frankly are unlikely to install or use these typically ephemeral bypasses that can break at any time.

By and large, these readability “solutions” are designed for techies, not for ordinary users who sometimes don’t even fully understand the difference between the desktop and a browser. I work with people like this all the time. They’re everywhere, and they’re a rapidly growing category of users.

We techies tend to be blinded by our own science, to the point where we undervalue or simply don’t recognize the disparities between our view of technology and the ways that ordinary, non-techie folks with their own lives use our services and tools.

It’s a disgraceful situation on our part. And it’s our fault.

Most people increasingly view the Internet as they would a refrigerator, or an ordinary TV set. They just expect it to work. And that’s a completely reasonable attitude given how much absolutely necessary day-to-day functionality we’ve pushed onto the Web.

Here’s an analogy.

Imagine if one day your local electrical power company suddenly changed the parameters of the electricity they were sending you, in a manner that mostly caused older equipment to have problems.

So you complain, and the power guys say that they’ve determined that newer equipment works better with the new parameters, and anyone with older equipment should just search around, find, and install special power filters and regulators so that their older equipment will work again.

And you ask when the company asked if anyone wanted them to make these electricity changes.

And they reply that they didn’t ask. They don’t really care much about your demographic of equipment, and they suggest that you can take the electricity or leave it. Thank you for calling. Click.

Now maybe you have the time, skill, and/or money to go out and find the electricity add-ons you need (or install solar power, perhaps). But what if you don’t?

Anyway, I’m sure you see my point.

Electricity delivery, of course, is usually regulated in various ways by the government. But if current trends in webpage design continue to selectively disadvantage particular categories of users, it is increasingly likely that the government will get involved in this area as well, just as it has in other aspects of perceived discrimination and disability concerns.

I don’t know about you, but I’d much prefer that these firms fix these issues themselves, rather than having the government move in with its own heavy-handed mandated changes, which not infrequently cause more new problems than they solve old ones.

But one way or another, the status quo and current webpage design trends are increasingly untenable.

So the choice for these firms seems fairly clear. Either throw the switch yourselves toward better webpage design and viewability choices that won’t leave users behind, or wait for the government to start firing high-voltage regulatory lightning bolts your way.

Be seeing you.

–Lauren–