
The Trump Effect: Trust in Polling

We know the story well by now. Pre-election polls had Hillary Clinton ahead by 3-4 percentage points nationally, and though her lead was less certain in the “rust belt” states, her victory was all but assured. Wisconsin hadn’t voted Republican in a presidential election since Ronald Reagan’s triumph 32 years prior. Michigan had been solidly blue for 24 years. Although Donald Trump, a businessman for heaven’s sake, exceeded all expectations simply by being competitive, all the signs pointed to the Democrats’ “blue wall” keeping him out of the White House.

Except that vaunted blue wall did come crumbling down, opening a breach that led to Trump’s electoral college victory and, as the story has it, proving pollsters wrong and instigating a crisis in the polling industry. But did Trump’s victory prove the pollsters wrong? And is there a crisis in the polling industry?

Let’s first look at the reasons “experts” have given for why the polls got it “so wrong.” Then we’ll turn to commentary by Nate Silver, who argues that pollsters didn’t screw up after all: perhaps the media folded late-October polls into a biased narrative that glossed over what the polls were actually saying. Last, we’ll draw an important conclusion: narrative frames can indeed trump sound scientific polling, but only if we let them.

So, why are people saying the polls got the 2016 Presidential Election so wrong?

Late Deciders. The swing toward Trump in the rust belt broke late, less than ten days before the election, pushed along by FBI Director Comey’s re-opening of the Clinton e-mail investigation. Though public opinion was shifting against Clinton, there weren’t enough late polls to pick up the shift and change the conventional wisdom.

This obviously wasn’t polling’s fault. It was a rough-and-tumble race, with haymakers thrown by both candidates right up until Election Day. When people change their minds late, or make them up late, as people are liable to do, the problem is over-interpretation of polling results that capture that indecisiveness. Good polling should show how much “play” there is in the electorate, and pollsters should perhaps do a better job of pointing that out. But late-breakers will always pose a problem if they don’t break the same way the early deciders did.

Truth Telling. A substantial number of Trump voters either weren’t predisposed to take part in polling or weren’t inclined to report their voting preference truthfully to pollsters. The proposed reasons range from a supposed contempt for pollsters to an unwillingness to publicly admit to supporting Trump.

I’m not impressed with this argument. First, the national poll I trust got the result right. Gary Langer’s ABC News tracking poll had the race dead on: Clinton won the popular vote in his poll just as she did in the nation. Unless you want to postulate that there were shy Trump voters in the few swing states that mattered but not in the rest of the country, the argument doesn’t hold water. Second, in online and telephone polls Competitive Edge conducted, Trump’s support was not under-represented in the less anonymous telephone interviews.
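
For what it’s worth, checking for that kind of “shy voter” mode effect is a textbook exercise. Here’s a minimal sketch using a two-proportion z-test; the counts are purely illustrative and are not Competitive Edge’s actual data:

```python
# Hypothetical mode-effect check: is Trump support lower in the less
# anonymous telephone sample than in the online sample?
# Illustrative counts only -- not Competitive Edge's actual data.
from math import sqrt

def two_prop_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 430 of 1,000 phone respondents vs. 450 of 1,000 online respondents
z = two_prop_z(430, 1000, 450, 1000)
print(f"z = {z:.2f}")  # |z| < 1.96: no significant "shy Trump" mode effect
```

If phone and online numbers track each other like this, the “shy Trump voter” story has no legs.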

Electorate Modeling. Pollsters’ models of what the actual electorate will look like are generally constructed from exit polls conducted in prior elections. The industry’s blind spot was that turnout in 2016 might not look like turnout in 2012, which skewed models of the electorate.

There could be some truth to this. It is very difficult for a pollster to precisely predict the make-up of the eventual electorate. To the extent that media polling organizations blindly followed 2012 exit poll results, which themselves carry sampling error, that’s a problem. Competitive Edge builds its models from actual election results, which lessens it.
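
For the curious, here’s what the simplest version of electorate weighting looks like: post-stratification along a single dimension (party), with made-up targets. Real electorate models cross many more variables, but they all inherit whatever turnout assumptions feed them:

```python
# Minimal post-stratification sketch: reweight a sample to match an
# assumed electorate. One dimension and made-up targets for illustration;
# real electorate models cross many variables.
from collections import Counter

respondents = ["D"] * 420 + ["R"] * 360 + ["I"] * 220  # raw sample
electorate_model = {"D": 0.38, "R": 0.35, "I": 0.27}   # assumed turnout mix

sample_share = {g: c / len(respondents) for g, c in Counter(respondents).items()}
weights = {g: electorate_model[g] / sample_share[g] for g in electorate_model}
print(weights)  # D ~0.90, R ~0.97, I ~1.23

# If a 2012-style model overstates one party's turnout relative to 2016,
# every respondent from that party carries too much weight -- the blind
# spot described above.
```

The weights themselves are trivial arithmetic; the hard, error-prone part is the turnout mix, which is exactly where 2016 bit the industry.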

Poor Quality State Polls. Some of the polls conducted at the state level in Wisconsin, Michigan and Pennsylvania simply weren’t up to scientifically sound polling standards. This threw off some state polling averages.

Most of the polling that made it into the polling averages was conducted by reputable research outfits. However, a few firms might not have been so careful. In the hectic waning days of the race when polling organizations are churning out lots of data, quality might be an issue. This leads to…

Not Enough Rust Belt Polling. There simply wasn’t enough public polling in swing states to obtain conclusive data about an election in flux. In several key states, including Wisconsin and Pennsylvania, there were few polls conducted in the final week before the election.

If voters in key states are changing their minds and polls aren’t being conducted there, those electoral shifts won’t be reported. The polling profession learned this painful lesson in 1948’s “Dewey Defeats Truman” election, when Gallup and others stopped polling weeks before Election Day. You’d think pollsters in key states would poll right up until the day before the election, but polling isn’t free. Without investment in late polling in “blue wall” states where the thinking was “Clinton has it in the bag,” a small but important blind spot developed.

The larger problem, as Nate Silver argues, is that in the 2016 Presidential Election the media didn’t much care about the fine print of the polling data when constructing the sweeping horse-race narratives that congealed into popular political wisdom. The margin of sampling error was ignored in favor of snappy headlines, and the nuanced narratives that would have faithfully reflected the polling results went unwritten. Polling data is sometimes only as good as the frame in which it is hung.

The correct media narrative should have been along the lines of “it’s gonna be a close one” and “there’s a serious chance Clinton could lose.” If the gap between the candidates had been wider, a hedging narrative wouldn’t have been necessary. But the gap wasn’t wide, and a stricter reading of the available polls, one that took the margin of error into account, would have produced a different media narrative.
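
To make the fine print concrete, here’s a back-of-the-envelope sketch of the sampling error behind a four-point national lead. The sample size of 1,000 is an assumption (the polls’ actual sample sizes aren’t given here), and note that the error on the lead itself, being the difference of two shares from the same sample, runs nearly twice the familiar per-candidate figure:

```python
# Rough 95% sampling error for a horse-race poll.
# n = 1,000 is an assumption; actual sample sizes aren't given in the text.
from math import sqrt

def moe(p, n, z=1.96):
    """95% margin of error for a single proportion."""
    return z * sqrt(p * (1 - p) / n)

clinton, trump, n = 0.48, 0.44, 1000  # a 4-point lead, per the 3-4 pts above
lead_moe = 1.96 * sqrt((clinton + trump - (clinton - trump) ** 2) / n)

print(f"per-candidate MOE: +/-{moe(clinton, n):.1%}")  # ~ +/-3.1 points
print(f"MOE on the lead:   +/-{lead_moe:.1%}")         # ~ +/-5.9 points
# A 4-point lead inside a ~6-point error band is "gonna be a close one,"
# not a done deal.
```

Run the numbers and the hedged narrative writes itself.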

Which brings us to the next round of contested polls: Trump’s transition approval rating in January 2017. National polls combined had Trump at an average of 41% approve and 52% disapprove. In this case the polls spoke strongly, well over and above the margin of error: Americans generally were not impressed with the Trump transition. But the danger of too easily glossing polling data, and of ignoring true error margins, is that once you’ve thrown out the fine print in favor of flashy generalizations, poll interpretation becomes contestable and thereby “up for grabs.” Hence Trump’s tweet dismissing his transition approval ratings as rigged.
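
Setting the tweet aside, the arithmetic here favors the polls. A quick check under the same assumed sample size as above (n = 1,000 per poll; again an assumption):

```python
# The same lead calculation applied to the 41% approve / 52% disapprove
# transition numbers. n = 1,000 per poll is again an assumption.
from math import sqrt

approve, disapprove, n = 0.41, 0.52, 1000
gap = disapprove - approve                             # 11 points
gap_moe = 1.96 * sqrt((approve + disapprove - gap ** 2) / n)

print(f"gap: {gap:.0%}, MOE on the gap: +/-{gap_moe:.1%}")  # 11% vs ~ +/-5.9%
# An 11-point gap against a ~6-point error band is signal, not noise.
```

That’s the difference between the horse race and the transition numbers: one was inside the error band, the other well outside it.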

Of course, Trump’s transition approval ratings aren’t rigged, and the election polls weren’t phony. We should expect challenges to polling to contest its underlying assumptions, not take the form of cavalier ad hominem attacks on pollsters intended to deflect attention from one’s own political shortcomings.

The reality is that sound scientific public opinion polling helps guide politicians by providing objective data upon which they can base decisions. It’s paramount to safeguard the practice of scientific polling, lest dishonest representations of public opinion run rampant and undermine the research that has informed the American political process. False narratives can only crowd out sound scientific data if political and media expediency is favored over the complexities of objective reality. And reality, as the saying goes, has a way of catching up.
