The Polls Were Off in Virginia: Here’s Some Insight
“The polls were wildly off in the Virginia election.” That’s how one Vox article put it after Democrat Ralph Northam beat his opponent by 8.9 points on the first Tuesday in November. The headline boldly asserted that “Pollsters missed Virginia by more than they missed Trump vs. Hillary.” As proof, the article compared the Virginia surveys to the national polls in the 2016 presidential race, which were off by only about a point. Well, that’s true. Though you might have been confused by all the hand-wringing over President Trump’s surprising win, the national poll conducted by my friend Gary Langer for ABC News was highly accurate. But not so fast on the criticism of the Virginia polling in the Governor’s race. As so often happens, this is a case of some polls being better than others. And I’ll let you in on the secret of knowing the difference.
Here are the results (adjusted to remove undecideds) of the final publicly released polls, showing Northam’s lead in each:
+6 Christopher Newport University
+5 Washington Post
+4 Upshot NYTimes/Siena
+1 The Polling Company
+1 IMGE Insights
+1 Trafalgar Group
Hmm… some polls are more accurate than others. When I tell you why I grouped these polls the way I did, you’ll understand what’s mostly driving the errors. The top group, which gave Northam an average lead of 5 points and therefore missed the final margin by about 4 points, is made up of unaligned organizations, either news outlets or universities. The bottom group, on the other hand, consists of partisan firms that work for – you guessed it – Republican clients. That group landed about 8 points off the final official result, far outside the margin of sampling error.
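For the numerically inclined, the grouping arithmetic can be checked in a few lines of Python. The poll leads and the 8.9-point final margin are the figures from this article; the group labels are my own shorthand:

```python
# Compare each group's average Northam lead to the final official margin.
final_margin = 8.9  # Northam's winning margin, in points

groups = {
    "media/university": [6, 5, 4],  # CNU, Washington Post, NYT Upshot/Siena
    "GOP-aligned": [1, 1, 1],       # The Polling Company, IMGE, Trafalgar
}

results = {}
for name, leads in groups.items():
    avg = sum(leads) / len(leads)
    results[name] = (avg, final_margin - avg)
    print(f"{name}: average lead +{avg:.1f}, off by {final_margin - avg:.1f} points")
```

Running this shows the unaligned polls off by roughly 4 points and the partisan polls off by roughly 8 — the gap the article is describing.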
Why is Rasmussen all by itself? It’s a media poll, but its results have been shown to consistently favor Republican candidates, and Scott Rasmussen is a Republican. It’s difficult to assign the Rasmussen poll to a category.
Now I’m not saying that certain pollsters put their “thumb on the scale” to appease their audience or clients. But pollsters must make assumptions about the composition of their samples, including how the samples are drawn, the callback and disposition rules, and the weighting. They can also hold prejudices and biases that influence those assumptions and decisions. Unless pollsters are scrupulously committed to accuracy, know what they are doing, and are able to block out partisan leanings, their results can be shaded toward inaccuracy.