“Why do different pollsters come up with different results when asking the same question, such as who’s going to win the 2016 presidential campaign?” I’ve been asked versions of this question a lot.
People at “The Upshot,” the excellent New York Times initiative using data to examine the essence of critical issues in politics, policy and economics, decided to run an experiment, giving four respected political pollsters the same data about the 2016 presidential election and comparing their results.
What an awesome exercise! I applaud all four pollsters for taking the time to participate and for their willingness to explain in detail the judgment calls they each made to get their results. In this case, the results varied depending on the weighting scheme each pollster employed and on how each identified who exactly is a “likely voter.” The pollsters made different decisions in adjusting the sample.
The result was four different electorates, and four different results. How exactly does a pollster characterize a “likely voter”? There’s no hard-and-fast answer, and it’s based largely on experience. All four of their methods were completely defensible. No one was “right” and no one was “wrong.”
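To see how much these judgment calls can matter, here is a small sketch with entirely invented numbers (not the Upshot’s actual data): the same raw batch of interviews, run through two different likely-voter screens, produces two different toplines.

```python
# Hypothetical illustration: identical raw interviews, two likely-voter
# screens, two different toplines. All numbers are invented.

# Each respondent: (candidate choice, self-reported turnout likelihood 0-10,
# has a record of voting in the last midterm election?)
respondents = [
    ("Clinton", 9, True), ("Trump", 10, True), ("Clinton", 4, False),
    ("Trump", 8, True), ("Clinton", 10, True), ("Clinton", 6, False),
    ("Trump", 7, False), ("Clinton", 3, False), ("Trump", 9, True),
    ("Clinton", 8, True),
]

def topline(voters):
    """Clinton's share of the two-way vote among the screened sample."""
    clinton = sum(1 for choice, _, _ in voters if choice == "Clinton")
    return round(100 * clinton / len(voters), 1)

# Screen A: believe anyone who says they are at least 7-in-10 likely to vote.
screen_a = [r for r in respondents if r[1] >= 7]

# Screen B: count only respondents with a record of past voting.
screen_b = [r for r in respondents if r[2]]

print("Screen A (self-reported likelihood):", topline(screen_a))  # 42.9
print("Screen B (validated past vote):    ", topline(screen_b))  # 50.0
```

Same interviews, two defensible screens, a seven-point swing in the toy example. Real pollsters layer demographic weighting on top of this, which is why the Upshot’s four electorates diverged.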
But which of the four was the most “accurate”? We’ll never know. We won’t know even on November 9 because voters are people and people can change their minds.
What this illustrates so well is that the concept of accuracy in polling is inherently fuzzy. Poll watchers should accept that polls – even the best of them – are not precise instruments measuring stable subjects. Unfortunately, in the high-stakes game of presidential (or any) campaigning, observers fixate on candidates being “up 2” or “down 3” on a daily basis. In fact, the Upshot’s experiment shows that variance in polls – above and beyond the famous margin of sampling error – is to be expected.
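For reference, the “margin of sampling error” is a pure function of sample size. A quick sketch of the standard formula, assuming simple random sampling (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a simple
    random sample of size n; the worst case occurs at p = 0.5."""
    return round(100 * z * math.sqrt(p * (1 - p) / n), 1)

print(margin_of_error(1000))  # about +/- 3.1 points for a typical national poll
print(margin_of_error(400))   # about +/- 4.9 points for a smaller sample
```

The Upshot’s point, and mine, is that the judgment calls above add variance on top of this floor, so day-to-day movement inside a few points is mostly noise.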
More importantly, accuracy by itself is akin to a parlor trick. I’m not saying pollsters are magicians performing sleight of hand and fooling people. It’s good to be “accurate.” But accuracy shouldn’t be the only thing a pollster strives for. It is a bright shiny object that everyone thinks they want, but it won’t help you win an election.
Where a pollster has real value in helping a candidate win is getting to an understanding of why opinion is the way it is. What makes a voter decide to support Hillary Clinton instead of Donald Trump, or vice versa? It’s the “why” that should matter more in election polling, not the “what.”
Once the campaign has a firm grasp of the election’s dynamic, it’s way ahead of its competitors and the pollster has actually helped the client.
A recently released Wall Street Journal/NBC poll ahead of the first head-to-head debate avoided the typical horserace approach. Instead, a sample of likely voters was asked to identify their top concerns about each candidate.
It turns out they don’t give a damn about Hillary’s health or The Donald’s taxes. But you’d never know it based on media coverage, would you? The number one concern among voters about Hillary Clinton is her “Dealings with Syria, Iraq & Libya.” Their top concern about Donald Trump is “Not having the right temperament to be Commander in Chief.”
This information is far more useful to a campaign and provides far more insight to the average observer than simply who’s ahead and who’s behind. It gives strategists the knowledge they can use to change impressions among voters still making up their minds, the so-called “persuadables.” It gives poll readers a richer understanding of the challenges facing their candidate.
Journalists are trained to answer the five Ws in their stories: who, what, where, when, why, plus the H: how. I understand it isn’t the media’s role to help political candidates. But reporters, op-ed writers and even media pollsters would be so much better off if they expended more brainpower on figuring out why Clinton or Trump is up by X percentage points rather than simply reporting the topline numbers, the “what.” I know that’s much harder, but it’s where a poll’s real value lies.