Late today I was sent a “press release” from “Blind: Your Anonymous Workplace Community” (“teamblind”) with the headline:
88.4% of Google Conservatives Feel Their Political Views Not Welcome at Work
along with some response breakdowns of “liberal” – “moderate” – “conservative” and so on.
I wasn’t really familiar with Blind, but I did remember something from August where they claimed that:
65% of Google Employees Are in Favor of Censored Search
These are intriguing numbers, but as an old statistics guy from way back — ever since I read Darrell Huff’s 1954 (and still classic) “How to Lie with Statistics” — I had to ask myself, what sort of statistically valid methodology is Blind using to gather these numbers?
Turns out — as far as I can tell at this point (and I’m certainly open to being corrected on this if I’m wrong!) — there appears to be no valid statistical methodology in those surveys at all!
Blind’s primary model, as far as I can determine, is an app that interested users can install where various surveys are offered, and users who want to participate in particular surveys can choose to respond to them.
To help ensure that workplace surveys are answered by actual employees of specific firms, Blind apparently verifies that users have appropriate corporate email addresses.
That helps keep random people out of the surveys, but it doesn’t make those surveys in any way statistically valid, because they apparently remain fully “self-selected” surveys, subject to the well-known problems of self-selection bias.
In other words, you can’t infer any statistical information from these surveys beyond the opinions of the particular people who happened to be interested enough at any particular time to respond. And that group will vary greatly depending on the nature of the questions and the types of people predisposed to install the Blind app and participate in its surveys in the first place.
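To see how badly self-selection can distort a result, here’s a small back-of-the-envelope sketch. The numbers are entirely hypothetical — not drawn from Blind or any real survey — but they show how a minority view can become a “majority” finding when holders of that view are simply more motivated to respond:

```python
# Hypothetical illustration of self-selection bias.
# All numbers below are invented for the sake of the example.
true_share = 0.30        # fraction of the population actually holding view X
respond_if_holds = 0.90  # response rate among those who hold view X
respond_if_not = 0.30    # response rate among everyone else (3x less likely)

# Expected composition of the self-selected respondent pool:
responders_holding = true_share * respond_if_holds        # 0.27
responders_other = (1 - true_share) * respond_if_not      # 0.21
surveyed_share = responders_holding / (responders_holding + responders_other)

print(f"True share holding view X:     {true_share}")
print(f"Share among survey responders: {surveyed_share}")  # 0.5625
```

A view held by 30% of the population shows up as 56% of the responses — a manufactured majority, with no misconduct by anyone involved. That is the self-selection effect in a nutshell.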
Your basic Statistics 101 course explains why the big polling organizations like Gallup — who do generate statistically valid surveys and polls — use carefully designed mathematical models to determine whom THEY will contact for surveys. They don’t just say “Hey, come on over and vote on this!” That’s why meticulously designed surveys of around 1,000 people can be extremely accurate even when looking at national issues.
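That “around 1,000 people” figure isn’t arbitrary — it falls out of the textbook margin-of-error formula for a simple random sample. A quick sketch (the standard formula, not tied to any particular pollster):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# With about 1,000 randomly selected respondents, the margin of error
# is roughly +/-3 percentage points -- essentially independent of how
# large the national population is.
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # n=1000: +/-3.1%
```

The crucial caveat: this math only applies to random samples. For a self-selected survey there is no meaningful margin of error at all, no matter how many people respond.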
That’s not to say that Blind’s self-selected surveys regarding Google or other firms are worthless — they are indeed snapshots of interested users from subsets of their app’s user community. But that’s all.
It would be a tremendous error to try to extrapolate from self-selected Blind surveys to any population beyond the specific app users who chose to respond — which makes such surveys essentially worthless for serious analysis or policy planning purposes.
This was true when Darrell Huff wrote his book in the mid-20th century, and it remains just as true today.