“I want us all to think of the deeper questions – like how do we teach kids that it’s OK to fail? And how to learn from those failures instead of curling up in a fetal position? How do we teach their parents that it’s okay to fail?” – Thurgood Marshall Middle School principal Michelle Irwin, on creating a culture of excellence in schools.
As a parent, I love this quote and the point Irwin is making. Because no one is perfect, if you (or your child) are not failing at least some of the time, that means you’re not trying.
Even labelling human endeavors “failures” should be challenged. As a wise man once said, “You call it failure; I call it a knowledge-building exercise.” If you don’t fail at times, you’re also missing opportunities to learn as much as you can about yourself, your limits and the world around you.
This conception of failure has obvious, straightforward implications in the world of electoral politics. Abraham Lincoln is usually cited as a paragon of perseverance who learned from his trials and tribulations. The advice to run again after a loss is generally sound, as long as the candidate learns the lessons taught by the campaign’s hard knocks.
But “failure” is extremely important when it comes to conducting good survey research. No large survey is a complete success. If it is, then your pollster isn’t trying hard enough.
Let me explain. Any benchmark survey is embedded with dozens of hypotheses which we turn into questions. Whether these hypotheses come from focus groups, opposition research, or from the minds of the campaign’s inner circle, the best of them make it into the benchmark survey because its main goal should be to explain why opinions are the way they are.
Example: why do we ask about the importance of crime? Only partly to see how important crime is to voters; far more critical is determining whether the importance voters assign to crime significantly relates to their opinion of the candidate. The crime question is in the survey to test a hypothesis, whether the pollster states it that way or not.
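As a sketch of the kind of test described above, the code below checks whether the rated importance of crime relates to opinion of a candidate using a Pearson chi-square test on a 2x2 table. All of the counts are hypothetical, invented purely for illustration; a real analysis would use the survey's actual cross-tabulations.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table[i][j] = number of respondents in row category i, column j.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts.
# Rows: crime rated "very important" vs "less important"
# Columns: favorable vs unfavorable toward the candidate
observed = [[120, 80],
            [60, 140]]

stat = chi_square_2x2(observed)
CRITICAL_05 = 3.841  # chi-square critical value, df = 1, alpha = 0.05
related = stat > CRITICAL_05
print(f"chi-square = {stat:.2f}; significantly related: {related}")
```

If the statistic clears the critical value, the hypothesis "survives"; if not, it joins the pile of instructive failures.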
The same thing goes for message testing. Usually the only good reason for inserting a message-testing question is that we think it might move voters. Voilà, a hypothesis is being tested.
Now, not every hypothesis can be right. In any benchmark poll I’ll test 40 or more hypotheses, but rigorous analysis will show that, at most, ten factors (questions) actually relate to voter opinion. So hypotheses “fail” roughly three times more often than they succeed.
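The arithmetic behind that failure rate is simple. Using the numbers from the text, about 40 hypotheses tested and at most ten supported:

```python
# Numbers taken from the text: ~40 hypotheses tested per benchmark
# poll, at most 10 of which turn out to relate to voter opinion.
hypotheses_tested = 40
hypotheses_supported = 10
hypotheses_failed = hypotheses_tested - hypotheses_supported
failure_ratio = hypotheses_failed / hypotheses_supported
print(f"{hypotheses_failed} failures for every {hypotheses_supported} "
      f"successes (about {failure_ratio:.0f}:1)")
```

With 40 tested and ten supported, that works out to roughly three failures for every success, and more if the poll tests more than 40.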
Should we stop hypothesizing because of the failure rate? No. We should always work hard to come up with better questions, but we should never stop asking them. Failure is the natural residue of trying. It’s a good thing as long as we learn to learn from it.