Study: Collecting URLs and Other Data About Fake/False News on the Net

Greetings. I have initiated a study to explore the extent of fake/false news on the Internet. Please use the form at:

to report fake or false news found on traditional websites and/or in social media postings.

Any information submitted via this form may be made public after verification, with the exception of your name and/or email address if provided (which will be kept private and will not be used for any purposes other than this study).

URLs anywhere in the world may be reported, but please only report URLs whose contents are in English for now. Please only report URLs that are public and can be accessed without a login being required.

Thank you for participating in this study to better understand the nature and scope of fake/false news on the Net.

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Google Home Drops Insightful “Donald Trump Is Definitely Crazy” Search Answer

Two days ago, I uploaded the YouTube video linked below, which recorded the insightful response I received from Google Home to the highly relevant question: “Is Donald Trump Insane?” I noted Google’s accurate appraisal on Google+ and in my various public mailing lists. The next day (yesterday) the response was (and currently is) gone for the same query to Home — replaced by the generic: “I can do a search for that.”

Interestingly, this seems to have only occurred for responses from Google Home itself. The original (text-based) answer is currently still appearing for the same query made by keyboard or voice to Google Search through conventional desktop or mobile means (however, at least for me the response is no longer being spoken out loud — and I had earlier reports that the answer was spoken on all capable platforms).

Let’s face it — what helps to make the original answer so great is the pacing and inflections of the excellent Google Home synthetic voice! It’s just not the same reading it as text.

There would seem to be only two possibilities for what’s going on.

One possibility is that the normal churning of Google’s algorithms dropped that answer from Home (and replaced it with the generic response) solely through ordinary programmed processes.

Of course, the other possibility is that after I publicized this brilliant, wonderful, and fully accurate spoken response, it was manually excised from Home by someone at Google for reasons of their own, about which I will not speculate here and now.

Either way, the timing of this change, only hours after my release of the related video, is — shall we say — fascinating.


How Fake and False News Distort Google and Others

With all of the current discussions regarding the false and fake news glut on the Internet — often racist in nature, some purely domestic in origin, some now believed to be instigated by Putin’s Russia — it’s obvious that the status quo for dealing with such materials is increasingly untenable.

But what to do about all this?

As I have previously discussed, my general view is that more information — not less — is the best solution to these distortions, which may well have turned the 2016 election on its head.

Labeling, tagging, and downranking of clearly false or fake posts is an approach that can help reduce the tendency for outright lies to be treated as equivalent to truth in social media and search engines. These techniques also avoid the actual removal of the lying items themselves, and the “censorship” issues that removal may bring into play (though private firms quite appropriately are indeed free to determine what materials they wish to permit and host — the First Amendment only applies to governmental restraints on speech in the USA).

How effective might such labeling be? Think about the labeling of “fake news” in the same sort of vein as the health warnings on cigarette packs. We haven’t banned cigarettes. Some people ignore the health warnings, and many people still smoke in the USA. But the number of people smoking has dropped dramatically, and studies show that those health warnings have played a major role in that decrease.

Labeling fake and false news to indicate that status — and there’s a vast array of such materials for which no reasonable argument can be made that they are true — could have a dramatic positive impact. Controversial? Yep. Difficult? Sure. But I believe that this can be approached gradually, starting with top trending stories and top search results.

A cure-all? No, just as cigarette health warnings haven’t been cure-alls. But many lives have still been saved. And the same applies to dealing with fake news and similar lies masquerading as truthful posts.

Naysayers suggest that it’s impossible to determine what’s true or isn’t true on the Internet, so any attempts to designate anything that’s posted as really true or false must fail. This is nonsense. And while I’ve previously noted some examples (man landing on the moon, Obama born in Hawaii), it’s not hard to find all manner of politically motivated lies that are just as easy to ferret out.

For example, if you currently do a Google search (at least in the USA) for:

southern poverty law center

You will likely find an item on the first page of results (even before some of the SPLC’s own links) from the online Alt-Right racist rag Breitbart — whose traditional overlord Steve Bannon has now been given a senior role in the incoming Trump administration.

The link says:

FBI Dumps Southern Poverty Law Center as Hate Crimes Resource

Actually, this is a false story, dating back to 2014 — an item from Breitbart that was also picked up and republished by an array of other racist sites that hate the good work of the SPLC in fighting both racism and hate speech.

Now, look elsewhere on that page of Google search results — then on the next few pages. No mention of the fact that the original story is false, or that the FBI itself issued a statement noting that it was still working with the SPLC on an unchanged basis.

Instead of anything to indicate that the original link is promoting a false story, what you’ll mostly find on succeeding pages is more anti-SPLC right-wing propaganda.

This situation isn’t strictly Google’s fault. I don’t know the innards of Google’s search ranking algorithms, but I think it’s a fair bet that “truth” is not a major signal in and of itself. More likely there’s an implicit assumption — which no longer appears to necessarily hold true — that truthful items will tend to rise to the top of search results via other signals that form inputs to the ranking mechanisms.

In this case, we know with absolute certainty that the original story on page one of those results is a continuing lie, and the FBI has confirmed this (in fact, anyone can look at the relevant FBI pages themselves and categorically confirm this as well).

Truth matters. There is no equivalency between truth and lies, or otherwise false or faked information.

In my view, Google should be dedicated to the promulgation of widely accepted truths whenever possible. (Ironic side note: The horrible EU “Right To Be Forgotten” — RTBF — that has been imposed on Google, is itself specifically dedicated to actually hiding truths!)

As I’ve suggested, the promotion of truth over lies could be accomplished both by downranking of clearly false items, and/or by labeling such items as (for example) “DEEMED FALSE” — perhaps along with a link to a page that provides specific evidence supporting that label (in the SPLC example under discussion, the relevant page of the FBI site would be an obvious link candidate).
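To make the idea concrete, here is a minimal sketch (purely illustrative, and emphatically not any real search engine's ranking code) of how a "DEEMED FALSE" label, an evidence link, and a downranking penalty might be attached to a result. All of the names and values here, such as SearchItem and FALSE_PENALTY, are invented for this example:

```python
# Purely illustrative sketch (not any real search engine's code): how a
# "DEEMED FALSE" label, an evidence link, and a downranking penalty might
# be attached to a search result. All names and values here are invented.
from dataclasses import dataclass
from typing import Optional

FALSE_PENALTY = 0.5  # fraction of the base score removed from items deemed false


@dataclass
class SearchItem:
    url: str
    base_score: float               # score from the ordinary ranking signals
    deemed_false: bool = False      # set by some (human or algorithmic) review process
    evidence_url: Optional[str] = None  # page supporting the "DEEMED FALSE" label


def adjusted_score(item: SearchItem) -> float:
    # Downrank, but do not remove: the item remains findable.
    if item.deemed_false:
        return item.base_score * (1.0 - FALSE_PENALTY)
    return item.base_score


def render(item: SearchItem) -> str:
    # Attach a visible label, with a link to the supporting evidence.
    if not item.deemed_false:
        return item.url
    label = "DEEMED FALSE"
    if item.evidence_url:
        label += " (evidence: %s)" % item.evidence_url
    return "%s [%s]" % (item.url, label)


results = [
    SearchItem("https://example.org/accurate-story", 0.80),
    SearchItem("https://example.org/false-story", 0.90,
               deemed_false=True, evidence_url="https://example.gov/statement"),
]
for item in sorted(results, key=adjusted_score, reverse=True):
    print(render(item))
```

With the penalty applied, the false story (0.90 base score, 0.45 adjusted) sorts below the accurate one (0.80 base score), yet is still listed, still labeled, and still linked to its supporting evidence rather than being censored outright.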

None of this is simple. The limitations, dynamics, logistics, and all other aspects of moving toward promoting truth over lies in social media and search results will be an enormous ongoing effort — but a crucial one.

The fake news, filter bubbles, echo chambers, and hate speech issues that are now drowning the Internet are of such a degree that we need to call a major summit of social media and search firms, experts, and other concerned parties on a multidisciplinary basis to begin hammering out practical industry-wide solutions. Associated working groups should be established forthwith.

If we don’t act soon, we will be utterly inundated by the false “realities” that are being created by evil players in our Internet ecosystems, who have become adept at leveraging our technology against us — and against truth.

There is definitely no time to waste.


Blocked by Lauren (“The Motion Picture”)

With nearly 400K Google+ followers, I’ve needed to block “a few” over the years to keep order in the comment sections of my threads. I’m frequently asked for that list — which of course is composed entirely of public G+ profile information. But as far as I know there is no practical way to export this data in textual form. However, when in doubt, make a video! By the way, I do consider unblocking requests, and frequently unblock previously blocked profiles as a result, depending on specific circumstances. Happy Thanksgiving!


Facebook, Google, Twitter, and Others: Start Taking Hate Speech Seriously!

Recently, in Crushing the Internet Liars, I discussed issues relating to the proliferation of “fake news” on the Internet (via social media, search, and other means) and the relationship of personalization-based “filter bubbles” and “echo chambers” — among other effects.

A tightly related set of concerns, also rising to prominence during and after the 2016 election, is the even broader problem of Internet-based hate speech and harassment. The emboldening of truly vile “Alt-Right” and other racist, antisemitic white supremacist groups and users in the wake of Trump’s election has greatly exacerbated these continuing offenses to ethics and decency (offenses that in some cases represent actual violations of law).

Lately, Twitter has been taking the brunt of public criticism regarding harassment and hate speech — and their newly announced measures to supposedly combat these problems seem mostly to be potentially counterproductive, ostrich-head-in-the-sand tools that would permit offending tweets to continue largely unabated.

But all social media suffers from these problems to one degree or another, and I feel it is fair to say that no major social media firm really takes hate speech and harassment seriously — or at least as seriously as ethical firms must.

To be sure, all significant social media companies provide mechanisms for reporting abusive posts. Some systems pair these with algorithms that attempt to ferret out the worst offenders proactively (though hate users seem to quickly adapt to bypass these as rapidly as the algorithms evolve).

Yet one of the most frequent questions I receive regarding social media is “How do I report an abusive posting?” Another is “I reported that horrible posting days ago, but it’s still there, why?”

The answer to the first question is fairly apparent to most observers — most social media firms are not particularly interested in making their abuse reporting tools clear, obvious, and plainly visible to both technical and nontechnical users of all ages. Often you must know how to access posting submenus to even reach the reporting tools.

For example, if you don’t know what those three little vertical dots mean, or you don’t know to even mouse over a posting to make those dots appear — well, you’re out of luck (this is a subset of a broader range of user interface problems that I won’t delve into here today).

The second question — why aren’t obviously offending postings always removed when reported — really needs a more complex answer. But to put it simply, the large firms have significant problems dealing with abusive postings at the enormous scales of their overall systems, and the resources that they have been willing to put into the reporting and in some cases related human review mechanisms have been relatively limited — they’re just not profit center items.

They’re also worried about false abuse reports of course — either purposeful or accidental — and one motivation for “hiding” the abuse reporting tools may be to try to reduce those types of reports from users.

All that having been said, it’s clear that the status quo when it comes to dealing with hate speech or harassing speech on social media is no longer tenable.

And before anyone has a chance to say, “Lauren, you’re supposed to be a free speech advocate. How can you say this?”

Well, it’s true — I’m a big supporter of the First Amendment and its clauses regarding free speech.

But what is frequently misunderstood is that the First Amendment only applies to governmental actions against free speech — not to actions by individuals, private firms, or other organizations that are not governmental entities.

This is one reason why I’m so opposed to the EU’s horrific “Right To Be Forgotten” (RTBF) — it’s governments directly censoring the speech of third parties. It’s very wrong.

Private firms though most certainly do have the right to determine what sorts of speech they choose to tolerate or support on their platforms. That includes newspapers, magazines, conventional television networks, and social media firms, to name but a few.

And I assert that it isn’t just the right of these firms to stamp out hate speech and harassment on their platforms, but their ethical responsibility to do so as well.

Of course, if the Alt-Right or other hate groups (and certainly the right-wing wackos aren’t the only offenders) want to establish their own social media sites for that subset of hate speech that is not actually illegal — e.g. the “Trumpogram” service — they are free to do so. But that doesn’t mean that the Facebooks, Googles, and Twitters of the world need to permit these groups’ filth on their systems.

Abusive postings in terms of hate speech and harassing speech certainly predate the 2016 election cycle, but the election and its aftermath demonstrate that the major social media firms need to start taking this problem much more seriously — right now. And this means going far beyond rhetoric or public relations efforts. It means the implementation of serious tools and systems that will have real and dramatic impacts on helping to stamp out the postings of the hate and other abuse mongers in our midst today.


Unacceptable: How Google Undermines User Trust by Blocking Users from Their Own Data

UPDATE (November 18, 2016): After much public outcry, Google has now reversed the specific Pixel-related Google account bans noted in this post. Unfortunately, the overall Whose data is it? problem discussed in this post persists, and it’s long since time for Google to appropriately address this issue, which continues to undermine public user trust in a fine company.

– – –

There are times when Google is in the right. There are times when Google is in the wrong. By far, they’re usually on the angels’ side of most issues. But there’s one area where they’ve had consistent problems dating back for years: Cutting off users from those users’ own data when there’s a dispute regarding Google Account status.

A new example of this recurring problem — an issue about which I’ve heard from large numbers of Google users over time — has just surfaced. In this case, it involves the reselling of Google Pixel phones in a manner that apparently violates the Google Terms of Service, with the result that a couple of hundred users have reportedly been locked out of their Google accounts and all of their data, at least for now. 

This means that they’re cut off from everything they’ve entrusted to Google — mail, documents, photos, the works.

Here and now, I’m not going to delve into the specifics of this case — I don’t know enough of the details yet. The entire area of Google accounts suspension, closure, recovery, and so on is complex to say the least. Most times (but not always) users are indeed at fault — one way or another — when these kinds of events are triggered. And the difficulty of successfully appealing a Google account suspension or closure has become rather legendary.

Even recovering a Google account due to the loss of a password can be difficult if you haven’t taken proactive steps to aid in that process ahead of time — steps that I’ve previously discussed in detail.

But the problem of what happens to users’ data when they can’t access their accounts — for whatever reasons — is something that I’ve personally been arguing with Google about literally for years, without making much headway at all.

Google has excellent mechanisms for users to download their data while they still have account access. Google even has systems for you to specify someone else who would have access to your account in case of emergency (such as illness or accident), and policies for dealing with access to accounts in case of the death of an account holder.

The reality though, is that users have been taught to trust Google with ever more data that is critical to their lives, and most people don’t usually think about downloading that data proactively.

So when something goes wrong with their account, and they lose access to all of that data, it’s like getting hit with a ton of bricks.

Again, this is not to say that users aren’t often — in fact usually — in the wrong (at least in some respect) when it comes to account problems. 

But unless there is a serious — and I mean serious — criminal violation of law (child porn, for example), my view is that in most circumstances users should have some means to download their data from their account even if it has been suspended or closed for good cause.

If they can’t use Google services again afterwards, them’s the breaks. But it’s still their own data we’re talking about, not Google’s.

Google has been incredibly resistant to altering this aspect of their approach to user account problems. I am not ignorant of their general reasoning in this category of cases — but I strongly believe that they are wrong with their essentially one-size-fits-all “death penalty” regime in this context.

Nobody is arguing that there aren’t some specific situations where blocking a violating user (or a user accused of violations) from accessing their data on Google services is indeed justified. But Google doesn’t seem to have any provisions for anything less than total data cutoff when there’s a user account access issue, even when significant relevant legal concerns are not involved.

This continuing attitude by Google does not engender user trust in Google’s stewardship of user data, even though most users will never run afoul of this problem.

These kinds of actions by Google provide ammunition to the Google Haters and are self-damaging to a great firm and the reputation of Googlers everywhere, some of whom have related to me their embarrassment at trying to explain such stories to their own friends and families.

Google must do better when it comes to this category of user account issues. And I’ll keep arguing this point until I’m blue in the face and my typing fingertips are bruised. C’mon Google, please give me a break!


Crushing the Internet Liars

Frankly, this isn’t the post that I had originally intended. I had a nearly completed blog draft spinning away happily on a disk, a draft that presented a rather sedate, scholarly, and a bit introspective discussion of how Internet-based communications evolved to reach the crisis point we now see regarding misinformation, filter bubbles, and so-called echo chambers in search and social media.

I just trashed that draft. Bye!

Numerous times over the years I’ve tried the scholarly approach in various postings regarding the double-edged sword of Internet personalization systems — capable of bringing both significant benefits to users but also carrying significant and growing risks.

Well, given where we stand today after the 2016 presidential election, it appears that I might just as well have been doing almost anything else rather than bothering to write that stuff. Toenail trimming would have likely been a more fruitful use of my time.

So now — today — we must deal with this situation while various parties are hell-bent toward turning the Internet into a massive, lying propaganda machine to subvert not only the democratic process, but our very sense of reality itself.

Much of this can be blamed on the concept of “false equivalency” — which runs rampant on cable news, mainstream Internet news sites, and throughout social media such as Facebook (which is taking the brunt of criticism now — and rightly so), plus on other social media ecosystems.

Fundamentally, this idea holds that even if there is widespread agreement that a particular concept is fact, you are somehow required to give “equal time” to wacko opposing views.

This is why you see so much garbage prominently surfaced from nutcases like Alex Jones — who believes the U.S. government blew up the World Trade Center buildings — or Donald Trump and his insane birther attacks on Obama, that Trump used to jump-start his presidential campaign. It doesn’t take more than half a brain to know that such statements are hogwash.

To be sure, it’s difficult to really know whether such perpetually lying creatures actually believe what they’re saying — or are simply saying outrageous things as leverage for publicity. In the final analysis though, it doesn’t much matter what their motives really are, since the damage done publicly is largely equivalent either way.

The same can be said for the wide variety of fake postings and fake news sites that increasingly pollute the Net. Do they believe what they say, or are they simply churning out lies on a twisted “the end justifies the means” basis? Or are they just sick individuals promoting hate (often racism and antisemitism) and chaos? No doubt all of the above apply somewhere across the rapidly growing range of offenders, some of whom are domestic in nature, and some who are obviously operating under the orders of foreign leaders such as Russia’s Putin.

Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.

And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see the follow-ups that expose the lying posts as saw the original lies themselves.

Facebook’s Mark Zuckerberg is publicly denying that Facebook has a significant role in the promotion of lies. He denies that Facebook’s algorithms for controlling which postings users see create echo chambers where users only see what they already believe, causing lies and distortions to spread ever more widely without truth having a chance to invade those chambers. But Facebook’s own research tells a very different story, because Facebook insists to its advertisers that exactly those kinds of controlling effects do occur.

Yet this certainly isn’t just a Facebook problem. It covers the gamut of social media and search.

And the status quo can no longer be tolerated.

So where do we go from here?

Censorship is not a solution, of course. Even the looniest of lies, assuming the associated postings are not actually violating laws, should not be banned from visibility.

But there is a world of difference between a lying post existing, vis-a-vis the actual widespread promotion of those lies by search and social media.

That is, the mere fact that a lying post is being viewed by many users is no excuse for firms’ algorithms to promote such a post to a featured or otherwise highly visible status, creating a false equivalence of legitimacy by presenting such lies in close proximity to actual facts.

This problem becomes particularly insidious when combined with personalization filter bubbles, because the true facts are prevented from penetrating users’ hermetically sealed social media worlds that have filled with false postings.

And it gets worse. Mainstream media in a 24/7 news cycle is hungry for news, and all too often now, the lies that germinate in those filter bubbles are picked up by conventional media and mainstream news sites as if they were actual facts. And given the twin realities of reduced budgets and the race to beat other venues to the punch, such lies are frequently broadcast by those sites without any significant prior fact checking at all.

So little by little, our sense of what is actually real — the differences between truth and lies — becomes distorted and diluted.

Again, censorship is not the answer.

My view is that more information — not less information — is the path toward reducing the severity of these problems.

Outright lies must not continue to be given the same untarnished prominence as truth in search results and in widely seen social media postings.

There are multiple ways to achieve this result.

Lying sites in search results can be visibly and prominently tagged as such in those results, be downranked, or both. Similar principles can apply to widely shared social media posts that currently are featured and promoted by social media sites primarily by virtue of the number of persons already viewing them. Because — let’s face it — people love to view trash. Lots of users viewing and sharing a post does not make it any less of a lie.

As always, the devil is in the details.

This will be an enormously complex undertaking, involving technology, policy, public relations, and the law. I won’t even begin to delve into the details of all this here, but I believe that with sufficient effort — effort that we must now put forth — this is a doable concept.

Already, whenever such concepts are brought up, you quickly tend to hear the refrain: “Who are you to say what’s a fact and what’s a lie?”

To which I reply: “To hell with false equivalences!”

Man landed on the moon. Obama was born in Hawaii. Terrorists destroyed the World Trade Center with jet aircraft. Hillary Clinton never said she was going to abolish the Second Amendment. Donald Trump did say that he supported the war in Iraq. Denzel Washington did not say that he supported Trump. On and on and on.

There is a virtually endless array of stated facts that reasonable people will agree are correct. And if the nutcases want to promote their own twisted views on these subjects, that’s also fine — but those postings should be clearly labeled for what they are, not featured and promoted. As the saying goes, they’re free to have their own opinions, but not their own facts.

Obviously, this leaves an enormous range of genuinely disputed issues where the facts are not necessarily clear, often where only opinion and/or philosophy really apply. That’s fine too. They’re out of scope for these discussions or efforts.

But the outright Internet liars must be crushed. They shouldn’t be censored, but they must no longer be permitted to distort and contaminate reality by being treated on an equal footing with truth by major search and social media firms.

We built the Internet juggernaut. Now it’s our job to fix it where it’s broken.


President Trump and the Nuclear Launch Codes




Elections and the Internet “Echo Chambers”

Back in a 2010 blog post, I noted the kinds of “echo chamber” effects that can result from personalization and targeting of various types of information on the Web. That particular posting concentrated on search personalization, but also noted the impact on Internet-based discussions, a situation that has become dramatically more acute with the continuing rise of social media. Given the current controversies regarding how “filter bubbles” and other algorithmically-driven information surfacing and restriction systems may impact users’ views and potentially increase political and religious radicalization — particularly in relation to the 2016 elections here in the USA — I believe it is relevant to republish that posting today, which is included below. Also of potential interest is my recently reposted item related to Internet fact checking.

Search Personalization: Blessing and Trap?
(Original posting date: September 16, 2010)

Greetings. Arguably the holy grail of search technology — and of many other aspects of Internet-based services today — is personalization. Providing users with personalized search suggestions, search results, news items, or other personalized services as quickly as possible, while filtering out “undesired” information, is a key focus not only of Google but of other enterprises around the world.

But does too much reliance on personalization create an “echo chamber” effect, where individuals are mainly (or perhaps totally) exposed to information that only fits their predetermined views? And if so, is this necessarily always beneficial to those individuals? What about for society at large?

Diversity of opinions and information is extremely important, especially today in our globally interconnected environment. When I do interviews on mainstream radio programs about Internet issues, it’s usually on programs where the overall focus is much more conservative than my own personal attitudes. Yet I’ve found that even though there’s often a discordance between the preexisting views of most listeners and my own sentiments, I typically get more insightful questions during those shows than in the venues where I spend most of my time online.

And one of the most frequent questions I get afterwards from listeners contacting me by email is: “How come nobody explained this to me that way before?”

The answer usually is that personalized and other limited focus information sources (including some television news networks) never exposed those persons to other viewpoints that might have helped them fully understand the issues of interest.

An important aspect of search technology research should include additional concentration on finding ways to avoid potential negative impacts from personalized information sources — particularly when these have the collateral effect of “shutting out” viewpoints, concepts, and results that would be of benefit both to individuals and to society.

Overall, I believe that this is somewhat less of a concern with “direct” general topic searches per se, at least when viewed as distinct from search suggestions. But as suggestions and results become increasingly commingled, this aspect also becomes increasingly complex. (I’ve previously noted my initial concerns in this respect related to the newly deployed Google Instant system).

Suggestions would seem to be an area where “personalization funneling” (I may be coining a phrase with this one) would be of more concern. And in the world of news searches as opposed to general searches, there are particularly salient related issues to consider (thought experiment: if you get all of your information from FOX News, what important facts and contexts are you probably missing?)

While there are certainly many people who (for professional or personal reasons) make a point to find and cultivate varied and opposing opinions, not doing so becomes much easier — and seemingly more “natural” — in the Internet environment. At least the possibility of serendipitous exposure to conflicting points of view was always present when reading a general audience newspaper or magazine, for example. But you can configure many Web sites and feeds to eliminate all but the narrowest of opinions, and some personalization tools are specifically designed to enhance this effect.

As our search and related tools increasingly focus on predicting what we want to see and avoiding showing us anything else (which naturally enough makes sense if you want to encourage return visits and show the most “attractive” ads to any given individual), the funneling effect of crowding out other materials of potential value appears to be ever more pronounced.
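This crowding-out dynamic is simple enough to sketch as a toy model (the topics, scores, and update factors below are all invented purely for illustration):

```python
# Toy model of "personalization funneling": a recommender shows the
# top-k scored topics, the simulated user engages only with a single
# favorite, and the system boosts what was clicked while demoting
# what was shown but ignored. Everything here is invented for
# illustration purposes only.

topics = ["politics", "science", "sports", "arts", "tech"]
scores = {t: 1.0 for t in topics}
favorite = "politics"

def recommend(k=3):
    """Return the k highest-scoring topics."""
    return sorted(topics, key=lambda t: scores[t], reverse=True)[:k]

for _ in range(20):
    for shown in recommend():
        if shown == favorite:
            scores[shown] *= 1.3   # engaged with: promoted for next time
        else:
            scores[shown] *= 0.9   # shown but ignored: demoted

# The favorite is now entrenched at the top; every other topic has
# sunk below its starting score.
print(recommend())
```

Run for even a handful of rounds, the one topic the simulated user engages with climbs without bound while everything else is steadily demoted, which is the funneling effect in miniature.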

Add to that the "preaching to the choir" effect in many Internet discussions. True, there are forums with vibrant exchanges of views and conflicting opinions. But note how much of our Twitter and Buzz feeds is depressingly dominated by a chorus of "Attaboy!" yells from like-minded "birds of a feather" participants.

I am increasingly concerned that technologically-based Internet personalization — despite its many extremely positive attributes — also carries with it the potential for significant risks that are apparently not currently receiving the research and policy attention that they deserve.

If we do choose to assign some serious thinking to this dilemma, we certainly have the technological means to adjust our race toward personalization in ways that would help to balance out the equation.

This definitely does not mean giving up the benefits of personalization. However, we can choose to devote some of the brainpower currently focused on figuring out what we want to see, and work also toward algorithms that can help determine what we need to see.

In the process, this may significantly encourage society’s broader goals of cooperation and consensus, which of necessity require — to some extent at least — that we don’t live our entire lives in confining information silos, ironically even while we’re surrounded by the Internet’s vast buffet of every possible point of view.

 – – –

I have consulted to Google, but I am not currently doing so — my opinions expressed here are mine alone.
– – –
The correct term is “Internet” NOT “internet” — please don’t fall into the trap of using the latter. It’s just plain wrong!

Why Google Home Will Change the World

Much has recently been written about Google Home, the little vase-like cylinder that started landing in consumers’ hands only a week or so ago. Home’s mandate sounds simple enough in theory — listen to a room for commands or queries, then respond by voice and/or with appropriate actions.

What hasn’t been much discussed however, is how the Home ecosystem is going to change for the better the lives of millions to billions of people over time, in ways that most of us couldn’t even imagine today. It will drastically improve the lives of vast numbers of persons with visual and/or motor impairments, but ultimately will dramatically and positively affect the lives of everyone else as well.

Home isn’t the first device to offer this technology segment — nor is it the least expensive — Amazon came earlier and has a more limited version that is cheaper than Home (and a model more expensive than Home as well).

But while Amazon’s device seems to have been designed with buying stuff on Amazon as its primary functionality, Google’s Home — backed by Google’s enormously more capable corpus of information, accurate speech recognition, and AI capabilities, stands to quickly evolve to far outpace Amazon’s offering along all vectors.

This holds even if we leave aside the six-month free subscription to Google's excellent ad-free "YouTube Red/Google Play Music" package, which Google included with my Home shipment here in the USA, knowing that once you've tasted the ability to play essentially any music and any YouTube videos at any time just by speaking to the air, you'll have a difficult time living without it. I've had Home for a week and I'm finally listening to great music of all genres again — I know that I'll be subscribing when my free term runs out.

You can dig around a bit and easily find a multitude of reviews that discuss specifics of what Home does and how you use it, so I'm not going to spend time on that here, other than to note that like much advanced technology that is simple to operate, the devilishly complex hardware and software design aspects won't be suspected or understood by most users — nor do they typically need to be.

But what I’d like to ponder here is why this kind of technology is so revolutionary and why it will change our world.

Throughout human history, pretty much any time you wanted information, you had to physically go to it in one way or another. Dig out the scroll. Locate the book. Sit down at the computer. Grab the smartphone.

The Google Home ecosystem is a sea change. It’s fundamentally different in a way that is much more of a giant leap than the incremental steps we usually experience with technology.

Because for the first time in most of our experiences, rather than having to go to the information, the information is all around us, in a remarkably ambient kind of way.

Whether you’re sitting at a desk at noon or in bed sleepless in the middle of the night, you have but to verbally express your query or command, and the answers, the results, are immediately rendered back to you. (Actually, you first speak the “hotword” — currently either “Hey Google” or “OK Google” — followed by your command or query. Home listens locally for the hotword and only sends your following utterance up to Google for analysis when the hotword triggers — which is also indicated by lights on the Home unit itself. There’s also a switch on the back of the device that will disable the microphone completely.)

It’s difficult to really express how different this is from every other technology-based information experience. In a matter of hours of usage, one quickly begins to think of Home as a kind of friendly ethereal entity at your command, utterly passive until invoked. It becomes very natural to use — the rapid speed of adaptation to using Home is perhaps not so remarkable when you consider that speech is the human animal’s primary evolved mode of communications. Speech works with other humans, to some extent with our pets and other animals — and it definitely works with Google Home.

Most of the kinds of commands and queries that you can give to Home can also be given to your smartphone running Google’s services — in fact they both basically access the same underlying “Google Assistant” systems.

But when (for example) information and music are available at any time, at the spur of the moment, for any need or whim — just by speaking wherever you happen to be in a room and no matter the time of day — it’s really an utterly different emotional effect.

And it’s an experience that can easily make one realize that the promised 21st century really has now arrived, even if we still don’t have the flying cars.

The sense of science fiction come to life is palpable.

The Google teams who created this tech have made no secret of the fact that the computers of “Star Trek” have been one of their key inspirations.

There are various even earlier sci-fi examples as well, such as the so-called "City Fathers" computers in James Blish's "Cities in Flight" novels.

It’s obvious how Google Home technology can assist the blind, persons with other visual impairments, and a wide variety of individuals with mobility restrictions.

Home’s utility in the face of simple aging (and let’s face it, we’re all either aging or dead) is also immense. As I noted back in As We Age, Smartphones Don’t Make Us Stupid — They’re Our Saviors, portable information aids can be of great value as we get older.

But Home’s “always available” nature takes this to an entirely new and higher level.

The time will come when new homes will be built with such systems designed directly into their walls, and when people may feel a bit naked in locations where such capabilities are not available. And in fact, in the future this may be the only way that we’ll be able to cope with the flood of new and often complex information that is becoming ever more present in our daily lives.

Perhaps most telling of all is the fact that these systems — as highly capable as they are right now — are only at the bare beginnings of their evolution, an evolution that will reshape the very nature of the relationship between mankind and access to information.

If you’re interested in learning more about all this, you’re invited to join my related Google+ Community which is covering a wide range of associated topics.

Indeed — we really are living in the 21st century!

– – –

Google Search Results and Fact Checking

With so many discussions now raging regarding the impacts of misinformation on the Internet — including in relation to the 2016 election — I’m reposting below a blog item of mine from 17 June 2007 — “Extending Google Blacklists for Dispute Resolutions” — that may perhaps still be considered relevant today.

At that time, I was framing this overall issue in terms of disputed search results — I would later propose this kind of framework as a possible alternative to the horrific EU “Right To Be Forgotten” censorship concept.

We now would likely include most of these issues under the broader umbrella of “fact checking” concepts.

Extending Google Blacklists for Dispute Resolutions
(Original posting date: June 17, 2007)

Greetings. In a very recent blog item, I discussed some issues regarding search engine dispute resolution, and posed some questions about the possibility of “dispute links” being displayed with search results to indicate serious disputes regarding the accuracy of particular pages, especially in cases of court-determined defamation and the like.

While many people appear to support this concept in principle, the potential operational logistics are of significant concern. As I originally acknowledged, it’s a complex and tough area, but that doesn’t make it impossible to deal with successfully either.

Some other respondents have taken the view that search engines should never make "value judgments" about the content of sites, beyond what is already done (which is substantial) for result-ranking purposes.

What many folks may not realize is that in the case of Google at least, such more in-depth judgments are already being made, and it would not necessarily be a large leap to extend them toward addressing the dispute resolution issues I’ve been discussing.

Google already puts a special tag on sites in their results which Google believes contain damaging code (“malware”) that could disrupt user computers. Such sites are tagged with a notice that “This website may damage your computer.” — and the associated link is not made active (that is, you must enter it manually or copy/paste to access that site — you cannot just click).

Also, in conjunction with Google Toolbar and Firefox 2, Google collects user feedback about suspected phishing sites, and can display warnings to users when they are about to access potentially dangerous sites on these lists.

In both of these cases, Google is making a complex value judgment concerning the veracity of the sites and listings in question, so it appears that this horse has already left the barn — Google apparently does not assert that it is merely a neutral organizer of information in these respects.

So, a site can be tagged by Google as potentially dangerous because it contains suspected malware, or because it has been reported by the community to be an apparent phishing site. It seems reasonable, then, for a site that has been determined (by a court or other agreed-upon means) to contain defamatory or otherwise seriously disputed information to also be potentially subject to similar tagging (e.g., with a "dispute link").
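Mechanically, such tagging could ride on the same sort of result-annotation layer already used for the malware warnings. Here's a minimal sketch; the domains, notice text, and data structures are all invented for illustration and reflect nothing about Google's internals:

```python
# Sketch of a result-annotation layer that generalizes malware tags
# to "dispute" tags. All domains and labels here are hypothetical.

WARNINGS = {
    "badsite.example": ("malware", "This website may damage your computer."),
    "ruinous.example": ("disputed", "The accuracy of this page is under formal dispute."),
}

def annotate(result_url):
    """Attach any registered warning to a search result before display."""
    host = result_url.split("/")[2]
    entry = WARNINGS.get(host)
    if entry is None:
        return {"url": result_url, "clickable": True, "notice": None}
    kind, notice = entry
    return {
        "url": result_url,
        # Malware results get their link deactivated, as Google already
        # does; disputed results would stay clickable but carry a notice.
        "clickable": kind != "malware",
        "notice": notice,
    }

print(annotate("http://badsite.example/page")["clickable"])   # False
print(annotate("http://ruinous.example/claim")["notice"])
```

The point of the sketch is simply that the plumbing for a "dispute link" is a small extension of tagging machinery that already exists, not an entirely new kind of system.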

Pages that contain significant, purposely false information, designed to ruin people’s reputations or cause other major harm, can be just as dangerous as phishing or malware sites. They may not be directly damaging to people’s computers, but they can certainly be damaging to people’s lives. And presumably we care about people at least as much as computers, right?

So I would assert that the jump to a Google “dispute links” mechanism is nowhere near as big a leap from existing search engine results as it may first appear to be.

In future discussion on this topic, I’ll get into more details of specific methodologies that could be applicable to the implementation of such a dispute handling system, based both within the traditional legal structure and through more of a “Web 2.0” community-based topology.

But I wanted to note now that while such a search engine dispute resolution environment could have dramatic positive effects, it is fundamentally an evolutionary concept, not so much a revolutionary one.

More later. Thanks as always.

 – – –


Why Google Tops Trump’s Technology Enemies List

As something of a student of the great Chinese general Sun Tzu, who lived from around 544 BC to 496 BC, I have long agreed with one of the most famous statements attributed to him:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles.”

With that truism in mind, I have throughout the last few months of the 2016 election season kept channels of communication open with persons directly knowledgeable of soon-to-be President Trump's handlers' views on technology policy.

We know that Trump himself is a dilapidated dunce bending in whatever direction the current breezes seem to blow from minute to minute. But the advisers holding his leash — who will ultimately set the policy objectives for this senile swine (no offense meant to actual hogs or pigs!) — have very definite views that they intend to push into Trump's orbit. For all practical purposes, we can expect these to fill the empty vessel of Trump's skull and become essentially his own.

The laundry list of attacks that they have planned is long and diverse, and is essentially a war against all manner of science, technology, and anybody supporting scientific concepts that conflict with the worldview of garden-variety racist, sexually abusive criminals like Trump himself.

Clues about various of these have already been dropped publicly, mostly by Trump's minions, but occasionally buried within the incoherent rambling rants of Trump himself, which provide useful verification.

Pretty much at the top of Trump's technology-related enemies list is Google. The Trump team despises Google with a ferocious antipathy.

Google represents pretty much everything that Trump and his team hate: Information that Trump and his associates can't control. Intelligent, largely liberal-leaning employees for whom facts and data are not overridden by political exigencies of the moment. Privacy and security teams who won't bend over and grab their ankles whenever anyone in the government simply says "jump" without appropriate legal authority. And so on.

Trump’s people have a plan to reign in Google. They’ll be going after other service providers as well, but Google would be their biggest prize by far.

The Trump team’s plan to control Google will be on several fronts.

With the assistance of a cooperative GOP Congress and a Supreme Court that will soon have at least one and perhaps three or more right-wing Trump appointees, Trump’s crew will be pressing hard for rules that ban end-to-end encryption, using the usual national security excuse as the main argument, while sweeping aside all “this actually makes us less safe” arguments.

This push will also include the ability for the government to have essentially “on demand” access to any or all server data at Google (and all other significant web firms), based on the models provided by Trump’s master Putin, and to some extent also the Chinese.

Trump has also become incensed at Google search results that don't toe the line to his own demented and twisted worldview, and intends to push legislation that would permit government control over search results in a wide variety of circumstances, in this instance using national security, law enforcement, copyright claims, and "save the children" arguments.

The Trump team feels that these efforts will dovetail nicely with broader free speech controls that they plan to aim at mass media, particularly news outlets — there is also talk of attempting to impose horrific EU-style "Right To Be Forgotten" laws here in the U.S. — using this aspect in particular to try to suck Google haters over to Trump's side for the broader legislative efforts.

And if all of this sounds like some sort of fantasy on the Trump side — couldn’t happen with the First Amendment in their way! — think again!

Other than the Second Amendment, the Trumpians are at best indifferent to most aspects of the Constitution in general and the Bill of Rights in particular.

They believe that they can forge coalitions that will enable them to decimate the First Amendment, leveraging their control over all three branches of government — executive, legislative, and judicial. They believe that their Deplorables — their voters — will cheer Trump on in his efforts to decimate Google, eliminate what Trump and company feel are “undesirable freedoms” aspects of the Internet, and in general impose a speech regime as close to Putin’s model as possible.

But Trump isn’t president quite yet. We still have a bit of time to work with, and there are some approaches that can limit the damage that Trump can do, at least to various extents.

Some of these we will be discussing on my new Saving Science & Tech from Trump Google+ community.

Some discussions will by necessity need to be more private.

One thing’s pretty much certain, however. Donald Trump and his administration hope to roll back the USA effectively to somewhere around 1950 in terms of color, creed, and knowledge. 

If we don’t wish to see the technological works of our lifetimes similarly decimated, we must take action immediately.

– – –

We Stopped Herr Hitler — Now We Must Stop Something Potentially Far Worse: President Trump

G+ Community: Saving Science & Tech from Trump

– – –

As I write these words late in the evening of 8 November 2016, Donald Trump has become the president-elect of the United States.

In 1933, a man named Adolf Hitler, who by all accounts was far more intelligent, refined, educated, and self-controlled than one Donald J. Trump, was appointed chancellor of Germany, a country that at the time was among the world’s leaders in arts and science. Within a few years, he dragged Germany into a maelstrom of racism, death, and horror, with few German fingers raised to stop him.

Luckily, though he was on the path to do so, Hitler never obtained operational nuclear weapons. Nor for that matter was he known to brag about committing sexual assault. He was many horrible things, but he was not an ignoramus.

On the other hand, Hitler’s supporters and Trump’s supporters are very much one of a kind, and history teaches clearly that giving any quarter to such monsters is the fastest route to total annihilation.

We will in coming hours and days hear much talk — as did the citizens of 1933 Germany — about “coming together” for the sake of our country.

When it comes to a President Trump, I reject such calls, and I assert that all ethical Americans should do the same.

To “come together” with such an ignorant and lying man and his minions — a man who is a proponent of sexual assault, of torture, of deep-seated racism and antisemitism — a man who mocks the disabled, who doesn’t believe in science, and who encourages mindless violence and restrictions on freedom of speech — is to lend tacit if not active approval to such abominable attitudes and behaviors. This is a binary decision — there is no middle ground. You either accept the evil and join it — or you fight against it body and soul.

There is a long list of villains — some knowing, some “merely” complicit — who have enabled the rise of the ultimate, perverted horror of a President-Elect Trump.

These include (in no particular order and merely to mention a few): FBI Director James Comey, Vladimir Putin, Julian Assange, news organizations like those of CNN and CBS who played crucial roles in Trump’s rise, Bernie Sanders and his followers along with third-party candidates, and yes, we of the Internet and social media, who provided the means for echo chamber exacerbation of racism and fake news to multiply without bounds in the name of profits.

There is no coming together with the likes of a President Trump and his storm troopers, any more than there can be a coming together with a pit full of lethal cobras, spiders, and rabid hyenas.

All legal means must be employed to stop the damage that a President Trump could and would do to this country and the world. This may include both vast civil disobedience and the leveraging of the technology that we control toward limiting the ability of a President Trump and his appointees to destroy what’s great about the United States of America and the rest of this planet.

A hideous monster like a President Trump, combined with a totally GOP-controlled Congress and likely multiple Supreme Court nominations, empowered by USA military and nuclear capabilities, could easily make Hitler’s Reich look like a playground by comparison.

I had hoped — in fact I had already planned and publicly noted my intention — to move away from political content postings after this election. I realize now that this will be impossible. I apologize for raising your hopes about this unnecessarily.

I am no longer a young man. I do not intend to sit by for the time I have remaining while simply pontificating about the niceties of technology and tech policy while this country is dragged down into a nightmare that would likely even terrify Adolf Hitler himself.

I will be endeavoring to use any and all legal means available — political, technical, and more — to accomplish as effective as possible a figurative “neutering” of a President Trump and all individuals associated with him, to limit the damage that he and his Deplorables can do to this already great nation.

There cannot be “business as usual” in the face of the existential threat represented by Trump.

I welcome you to join me in this effort.

But if you feel that you will be offended or otherwise upset by my use of my various venues and lists for such purposes — which will now likely be escalating dramatically — I urge you to unfollow or unsubscribe from me now.

We are faced with a form of total war. This war must be fought via legal and peaceful means, so long as we ourselves and our fellow Americans are not threatened with illegal actions or violence by a President Trump or his thugs.

Together, we shall ultimately prevail against the epitome of ignorance and evil that is Donald J. Trump.


– – –

Asking Google Home About George Carlin’s “Seven Dirty Words”

What actually happens when you ask the newly released Google Home Appliance about legendary comedian George Carlin’s famous Seven Words That You Can Never Say on Television? Yeah, let’s give this a try. It turns out that the precise wording of this query seems to be fairly critical. No pun intended. I have not modified the answer in any manner.

UPDATE: The “Google modified” list presented in the audio linked below may apparently only be presented to Google Home Appliance users (even reportedly when filters are disabled), perhaps out of fear that persons in the room might be offended by the “spoken out loud” response. A “pure” list appears to be more routinely presented to users who make the same query by phone (to the same underlying Google Assistant system). Fascinating.


– – –

Unreadable Webpages and Crummy Electricity

Hmm. I thought I’d been explicit about this in earlier postings about Google’s New Blogs and other webpages, but apparently not explicit enough. So let’s try again.

Whenever I discuss the problems of the increasing unreadability of webpages, due to font choices, low contrast, and other “form over function” web design choices, I inevitably receive email from folks offering me “helpful” hints to bypass those poorly and shortsightedly designed pages.

Run this theme! Edit this style sheet! Install this add-on! Use this RSS reader! Switch to this browser! And so on …

The thing is — trust me — I already know how to do all this stuff.

I’m not the one I’m concerned about. It’s average users — who read pages in their native formats on the most popular browsers — who are being increasingly disadvantaged.

And most of these users don’t know about these workarounds, and frankly are unlikely to install or use these typically ephemeral bypasses that can break at any time.

By and large, these readability “solutions” are designed for techies, not for ordinary users who sometimes don’t even fully understand the difference between the desktop and a browser. I work with people like this all the time. They’re everywhere, and they’re a rapidly growing category of users.

We techies tend to be blinded by our own science, to the point where we undervalue or simply don’t recognize the disparities between our view of technology and the ways that ordinary, non-techie folks with their own lives use our services and tools.

It’s a disgraceful situation on our part. And it’s our fault.

Most people increasingly view the Internet as they would a refrigerator, or an ordinary TV set. They just expect it to work. And that’s a completely reasonable attitude given how much absolutely necessary day-to-day functionality we’ve pushed onto the Web.

Here’s an analogy.

Imagine if one day your local electrical power company suddenly changed the parameters of the electricity they were sending you, in a manner that mostly caused older equipment to have problems.

So you complain, and the power guys say that they’ve determined that newer equipment works better with the new parameters, and anyone with older equipment should just search around, find, and install special power filters and regulators so that their older equipment will work again.

And you ask whether the company ever asked if anyone wanted these electricity changes made in the first place.

And they reply that they didn’t ask. They don’t really care much about your demographic of equipment, and they suggest that you can take the electricity or leave it. Thank you for calling. Click.

Now maybe you have the time, skill, and/or money to go out and find the electricity add-ons you need (or install solar power, perhaps). But what if you don’t?

Anyway, I’m sure you see my point.

Electricity delivery, of course, is usually regulated in various ways by the government. But if the current trends in webpage design continue to selectively disadvantage particular categories of users, it is increasingly likely that the government will get involved in this area, just as it has in other aspects of perceived discrimination and disability concerns.

I don’t know about you, but I’d much prefer that these firms fix these issues themselves, rather than having the government moving in with their own heavy-handed mandated changes that not infrequently cause new problems more than they solve old ones.

But one way or another, the status quo and current webpage design trends are increasingly untenable.

So the choice for these firms seems fairly clear. Either throw the switch yourselves toward better webpage design and viewability choices that won't leave users behind, or wait for the government to start firing high-voltage regulatory lightning bolts your way.

Be seeing you.

– – –

How to Copy Text from the New Low Contrast Google Blogs

A number of people have been contacting me since I noted the awful new low contrast text in the new Google Blogs, mentioning that they could no longer copy text from the blog pages to paste onto blank pages as ordinary easily visible fonts.
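As an aside, "low contrast" is not merely a matter of taste. The W3C's WCAG guidelines define a measurable contrast ratio between text and background colors, with a recommended minimum of 4.5:1 for normal body text. The calculation below follows the standard WCAG formula; the sample colors are just illustrations:

```python
# WCAG 2.x contrast-ratio calculation. The formula is from the W3C
# guidelines; the example colors below are just illustrations.

def _linearize(channel):
    """Convert an 8-bit sRGB channel to linear light."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (r, g, b) color, per WCAG."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio of two colors; WCAG AA wants >= 4.5 for body text."""
    lighter = max(luminance(fg), luminance(bg))
    darker = min(luminance(fg), luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))       # 21.0
print(round(contrast_ratio((153, 153, 153), (255, 255, 255)), 1))  # fails 4.5:1
```

Solid black on white scores the maximum 21:1, while the sort of light gray text now fashionable on many redesigned pages falls well short of the 4.5:1 guideline.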

Many people use text copying as a fallback method for viewing otherwise difficult-to-read or unreadable pages, as a "lowest common denominator" method that almost always works — even when various page-reading add-ons break due to layout changes.

In the case of the new Google Blogs, at least on some platforms, Google's fancy new low contrast pages also include various tricks that cause the mouse-based copy command that some users have been using forever to now fail. There are a couple of approaches to dealing with this.

As a general rule, disabling JavaScript on offending pages of these sorts (not just on Google pages) will help, though this can be tricky for some users and can sometimes have serious undesirable operational side-effects.

If your goal is only to copy out text after selecting it (and yes, selecting still works on the new Google Blogs), the pretty much standardized Control-C keyboard shortcut will usually copy the selected text into your clipboard, and you can then paste it onto another page using the regular mouse paste command (or the Control-V keyboard shortcut).

Of course, none of this would be necessary if Google hadn’t joined this bizarre design craze sweeping the Net and making webpages ever more unreadable for ever more users, with a hopelessly narrow-minded “one size fits all, form over function” philosophy.

But you’ll need to talk to Google about that. I’ve already done so — to no useful effect. Perhaps you’ll have better luck than I have. But I won’t be holding my breath.
