As night follows day, post-election handwringing about inaccurate polls is inevitable. ‘Why didn’t the pollsters get it right?’ say the media and the public.
In a recent column in Campaigns and Elections Magazine, my colleague Dr. Adam B. Schaeffer suggests the time has come to do away with what he calls “traditional polls.”
It’s an incendiary headline, but what he calls “traditional” polls are simply pre-election polls that rely on respondents to tell the interviewer how likely they are to vote. On that point there’s no disagreement from us at Competitive Edge Research & Communication. Dr. Schaeffer comes to the same conclusion I came to more than 20 years ago.
I wrote my master’s thesis in 1994 (pipe down with the age comments) on how to predict voter turnout using data from voting lists. Competitive Edge has invested a lot into understanding how to use historical voter turnout data.
We’ve known for decades that asking people whether they are going to vote produces a weak filter. That’s due to “social desirability bias”: people generally inflate (a nice word for exaggerate or flat-out lie about) their turnout likelihood when asked to self-predict. A 2013 research paper matched nationwide 2012 voter survey data to actual turnout records and found that self-reported vote intention was an unreliable guide to who actually voted. Pollsters who build their “likely voter” screens on self-reports of turnout likelihood end up including too many non-voters and excluding many who actually vote.
But we don’t need to throw out polls; we need to conduct them properly. That means using listed samples rather than random-digit-dial samples.
It also means taking the time to study the post-election turnout data. After every major election, Competitive Edge analyzes the official voted/non-voted files. We then match it to our polling samples to determine who really voted and who didn’t. This allows us to sharpen our knowledge and refine our approach to modeling turnout.
The result: Competitive Edge only asks one simple turnout likelihood question with a very minimal filter. We let our history-based turnout model do most of the work.
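The history-based approach described above can be sketched in a few lines. To be clear, this is purely an illustration, not Competitive Edge’s actual model: the voter IDs, election weights, and cutoff below are all invented for the example. The idea is simply that a registrant’s official vote history, with recent elections weighted more heavily, does the filtering instead of a self-reported answer.

```python
# Illustrative sketch only -- not any firm's actual turnout model.
# Scores each registrant's turnout likelihood from official vote
# history on a voter list, rather than from self-reported intent.

# Hypothetical records: which of the last four comparable elections
# each registrant actually voted in (most recent election first).
VOTER_HISTORY = {
    "A1001": [True, True, True, False],    # voted in 3 of the last 4
    "A1002": [False, False, True, False],  # voted in 1 of the last 4
    "A1003": [True, True, True, True],     # perfect voting record
}

def turnout_score(history, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weight recent elections more heavily than older ones."""
    return sum(w for voted, w in zip(history, weights) if voted)

def likely_voters(histories, cutoff=0.5):
    """Keep registrants whose weighted vote history clears the cutoff."""
    return {vid for vid, hist in histories.items()
            if turnout_score(hist) >= cutoff}

print(sorted(likely_voters(VOTER_HISTORY)))  # ['A1001', 'A1003']
```

Under these made-up weights, the occasional voter A1002 is screened out even if they would have told an interviewer they were “certain to vote,” which is exactly the failure mode of self-report screens.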
When competitors cut corners, it frustrates me the way it would frustrate any professional, because we all get asked why “our” performance was poor. I’m more disappointed than you are when I see researchers being inattentive and lazy about methodology. In medicine, that would be called malpractice.
After the 2012 election, President Obama’s campaign manager Jim Messina said polling in this country was (messed) up. He meant that most polling isn’t conducted properly, not that it can’t be done correctly to get more accurate results. He is absolutely right.
The simplistic “polling is dead” message is clickbait to get people riled up. But polling firms that fail to adopt new techniques like multi-mode data collection (which means using landline telephones, cellphones AND email to reach a proper sample) will produce inaccurate estimates of public opinion. Those firms will eventually die.
Firms using nothing but landline dialing hang on because they are cheap. But as with anything else, you get what you pay for. The best part of the “polling is dead” discussion is that observers are finally starting to realize robo-polling is so low quality as to be worthless. It’s only a matter of time before the robo-dialers bite the dust.
Let’s be clear: no matter how hard pollsters work at it, we have to admit that in survey research, nothing will be perfect. Polling public opinion is a very human endeavor. The pollster doesn’t control the electorate, just the research. But if pollsters are paying attention, not cutting corners and doing their job, they’ll avoid Cantor-esque debacles.