“Polling industry, at least nationally, whiffed. They can’t even claim ‘margin of error’ this time.” — Text from neighbor and friend Dan Curran, 2:04pm, 11/4/20
My phone was blowing up on election night and for a few days afterwards with texts like the one above. I get it. Every presidential election is touted as the “most important of our lifetime,” at least until four years later. Who knows, maybe they were right this time. And people are right to take elections, and polling, seriously.
In fact, neither side is satisfied with the 2020 general election. People on the right, if they are not convinced the election was stolen, are likely shuddering at the prospect of a Harris presidency. And many on the left, convinced the President is evil incarnate and anyone who voted for him is a racist, cannot believe the race was as close as it was. There was no grand repudiation of Trump and Trumpism. Both sides point a wagging finger at supposedly errant polling.
What really happened? I’ve pored over multiple articles, attended at least three expert webinars (one with Becca Siegel, the Biden campaign’s Chief of Analytics, and another with Joe Lenski, who conducted the exit polling), reviewed spirited discussions on the American Association for Public Opinion Research’s listserv, and personally spoken with people in the business of campaigns and polling. Essentially, I’ve done a lot of work so you don’t have to, though you’re welcome to do your own and come to your own conclusions. Here’s what I see.
The Real Clear Politics polling average gave Biden/Harris a 7.2% lead over Trump/Pence among national surveys conducted during the election’s final week. Biden’s actual margin of victory was 4.5%: 81,283,098 votes for Biden to 74,222,958 votes for Trump. Some of the late polls were closer to the mark than others. The final Investor’s Business Daily poll was dead-on at +4% Biden. On the extremes, the notoriously pro-Trump Rasmussen had the race at +1% Biden, while Quinnipiac University had Biden at +11%. On average, and despite this variation between pollsters, the national polls were accurate. In fact, every single reputable national poll got the presidential winner right.
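For readers who want to check the arithmetic, here is a quick sketch. Computing the margin as a share of the two-party vote is my assumption; the official figure is usually quoted as a share of all votes cast, but either way it rounds to the same 4.5%:

```python
# Vote totals cited above
biden_votes = 81_283_098
trump_votes = 74_222_958

# Margin as a share of the two-party vote (assumption: third-party
# votes are excluded; including them still rounds to 4.5%)
two_party_total = biden_votes + trump_votes
margin_pct = (biden_votes - trump_votes) / two_party_total * 100
print(round(margin_pct, 1))  # 4.5

# Gap between the RCP final-week polling average and the result
rcp_average_lead = 7.2
polling_error = rcp_average_lead - round(margin_pct, 1)
print(round(polling_error, 1))  # 2.7
```

That 2.7-point gap between the polling average and the outcome is the figure discussed below.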
Some will point to the 2.7% difference as “an error.” But if that’s your position, you’re demanding far too much from public opinion polling. Temporarily setting aside the margin of sampling error, the fact is that November’s election (which was more of an October election with all the early voting) was, like everything in 2020, abnormal:
• With more than 158 million people voting, there was a massive 20 million-voter increase in turnout over 2016
• More than two-thirds of the vote in 2020 was cast before election day compared to 42% in 2016
• There were huge partisan differences in the voting method used
As a result, pollsters needed to estimate three different electorates: mail voters, early in-person voters, and election-day poll voters.
All these factors ratchet up the challenge for snapshot-in-time polls, which all polls are.
More importantly, polls are, and always have been, imprecise instruments. The great thing about a well-conducted poll is not that it contains no error; it’s that you know how much error to expect. So the take in my friend’s text is wrong: the polling, on average, was well within the margin of sampling error. On top of that, with furious campaigning up until the very end, people can change their minds, and some who say they are going to vote don’t actually follow through.
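To make “expected error” concrete, here is the textbook 95% sampling margin of error for a simple random sample. The 1,000-respondent sample size is my illustrative assumption, not a figure from any poll discussed here:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for an estimated proportion p
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-respondent poll at p = 0.5 (the worst case):
moe = margin_of_error(1000)
print(round(moe * 100, 1))  # 3.1 -> roughly +/- 3 points per candidate
```

Note that this is the error on each candidate’s individual share; the error on the gap between two candidates is roughly twice as large, which is why a miss of under 3 points on the margin sits comfortably within a typical poll’s expected error.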
In any event, when every major pollster has the right ticket winning and the average difference from the true result is less than 3%, that is not a “whiff.”
Longer-than-usual delays in reporting results, and large swings in the results over that time, produced fertile ground for “hot takes” that eventually flamed out. The quote above is a case in point. The day after the election there were still 20 million ballots to be counted. Most of those ballots were cast in urban areas, and some states, California among them, still had to process huge numbers of mail-in ballots; in both cases, the makeup of those remaining voters differed substantially from the electorate that had already been counted. If you went to bed at 11 o’clock, even 11 o’clock Wednesday morning, you did not have the whole picture.
Adding to the confusion, “percent of precincts reporting” became meaningless. In yesteryear, when most votes were cast in person at polling places, media reports of the percentage of precincts reporting their vote helped us understand how far along the count was. In this brave new 2020 world of ubiquitous mail-in ballots, that stat only muddies the waters.
Now, as they had in 2016, 2020’s polls tended to underestimate support for President Trump. Good pollsters will again obsess about that underestimation. They will put a lot of effort into understanding it. Here are some theories that have been advanced:
• COVID kept more Democrats at home and near their phones, leading to their greater inclusion in polls
• Telephone samples may undercount small towns and rural areas
• Voters who had already voted were less willing to tell an interviewer who they voted for, and with 2020’s far larger proportion of early voting, this caused problems for pollsters
• A group of voters, “shy Trump” voters in this case, distrusts the media and polling and is unwilling to participate in surveys
Let’s address each of these. First, legitimate pollsters should not be relying on only the phone to conduct surveys. By this point in history, I think most pollsters do what Competitive Edge does: take a multi-mode approach to data collection. That means, in addition to calling registered voters who have landlines and cell phones, texting and e-mailing surveys to them. Yes, this is MUCH more difficult to execute properly, but a multi-mode approach is the only way to ensure complete coverage of the electorate. Taking such an approach addresses the first two bullets.
As for respondents not wanting to reveal who they voted for, this is only a problem if interviewers are poorly trained. I can see how university research operations can employ ill-trained students — who may have their own pro-Biden biases — to conduct surveys. I can also understand how schlocky operations can try to collect data using minimum wage workers. At Competitive Edge, we train our interviewers to deal with just this situation. We have special protocols to address reticence by affirming to respondents that the survey is, in fact, 100% confidential. Good training and supervision take care of this issue.
Then there’s the shy Trump voter issue, which has been around since before the 2016 election. Prior to this election, we became aware of a study conducted in Florida indicating that, in that state, some Trump supporters declined to participate in a phone poll, causing a 1% to 2% underestimation of Trump support in that poll. So the idea has some credence in 2020. However, we never saw such an effect in CERC’s polling: for example, we did not find a significant difference in presidential vote choice between those responding via a web survey and those responding on the phone. Any such effect probably stems from poor interviewer training more than anything else; well-trained professional interviewers put the respondent at ease to facilitate an accurate survey. In any event, the shy Trump effect appears to be small.
As an aside, the shy Trump respondents, to the extent that they are out there and causing polls to underestimate support for the President, only hurt themselves and their candidate. Consider that, if they had participated, the polling results would have shown a tighter race; some may even have shown Trump winning. The Trump campaign would then have had more of a leg to stand on when claiming the election was stolen. As it is, those claims appear more preposterous because polling did not show an exceptionally close race.
There were some big misses in some states, and perhaps we’ll address those in a future edition of The Edge. Based on a review of national polling, however, the “demise of polling” narrative is once again premature. If organizations and people who need to know what voters and consumers think exercise their due diligence to hire a firm that knows what it’s doing, and the limitations of polling are kept in mind, those research consumers will be well-served.