Thursday, June 30, 2016

Freeing Your Mind from Stock Reports on Polling

Understanding the reported results of public opinion polls can be as simple as listening to and understanding the words uttered by newsreaders on television and radio. Whether taking newsreaders at their word is good policy, however, is a question each of us must decide for himself.

Given our culture and recent experiences with newsreader dishonesty, I prefer to go beneath the soundbite reporting to discover exactly what opinion polls actually tell us. Doing that, however, requires more work than simply taking a newsreader's word. Given Katie Couric's dishonesty regarding guns in her recent report on the Second Amendment, given Brian Williams' misstatements about his wartime correspondent experiences, and given other examples, it makes little sense to take newsreaders at face value.

So what are we to do when the evening news readers tell us that one candidate or another has a lead in a race of one sort or another? What are we to do when that same newsreader tells us that some percentage or other of the population is more or less satisfied with the direction of the country, or with the performance of Congress, or of the President? What are we to do when newsreaders tell us that some percentage of the nation supports unrestricted access to abortion, decriminalization of marijuana use, or the like?

A wise man once observed that before a nation goes out to war, its leaders must evaluate its military strength and that of its opponent. If the nation is overmatched, that wise man suggested, its leaders should find a way to make peace with the enemy rather than be destroyed.

That observation tells us that it is always wise to investigate before acting. When it comes to public opinion polling, I try to do just that.

Here is a screenshot taken from RealClearPolitics:

This particular screenshot shows the results of recent polling on Presidential Approval, Congressional Approval, and Direction of the Country, what RealClearPolitics calls the "State of the Union." 

Each line summarizes the results of a poll. From the line summary you learn the topic polled, the polling agency, the results, and the "spread." On RealClearPolitics, the polling results are usually set up as hyperlinks; if you click on the name of the polling agency, you will be taken to a summary and report of that particular poll.

Once on the pollster's results page, there are quite a few things you might want to take into account.

Factors that may help you make your own judgments about a poll include how the pollster summarizes the results, whether the pollster provides a complete report of the results (from which such summaries are developed), and whether the pollster reports the poll's methodology. While not found on such pages, a little research on the polling agency's reputation also helps, including its record for accuracy and any factors that might indicate open or hidden biases.

What follows here are some screenshots showing how these factors can be located on the pollster’s report or web page.

First, the most typical link for each poll will be to a news release. Shown here is the news release from Public Policy Polling. The news release will include contact information for the news media, a summary of the results, highlights selected by the pollster, the questions polled with their results, and an explanation of the methodology used in conducting the poll:

Second, you will find the questions polled, along with demographic breakdowns of the responses. How the demographics break down depends on the pollster and on any client on whose behalf the polling was performed. Shown here is the satisfaction question from the Quinnipiac survey:

Third, you will typically find a summary of the methodology employed in getting the result. This information can help you filter the meaning of the results and understand why some pundits place more or less reliance on a poll. For example, if a survey takes responses from all Americans, its reliability in predicting election outcomes is likely to be less than that of a survey of registered voters. In turn, a survey of "likely" voters will typically produce a more accurate picture than one of "registered" voters. Shown here is the methodology summary for the Economist's presidential satisfaction survey:

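As a back-of-the-envelope aside, the "margin of error" that such methodology summaries disclose can be approximated for a simple random sample with a couple of lines of Python. This is my own sketch of the standard textbook formula, not anything drawn from the Economist's report:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of sampling error for a simple random sample.

    n: number of respondents
    p: observed proportion (0.5 is the worst case and the usual convention)
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents:
print(f"{margin_of_error(1000):.1%}")  # prints 3.1%, i.e. roughly plus or minus 3 points
```

Keep in mind that this formula assumes a true random sample; screens for "registered" or "likely" voters involve modeling choices that a single plus-or-minus figure does not capture, which is one more reason to read the methodology section.
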
Finally, after absorbing the greater details available from the "horse's mouth," you might also do some background research about the horse. To illustrate why you might do so, I did a quick Bing search on Public Policy Polling. Here is a screenshot of one of the top results:

Now, I know PPP and have been aware for many years that they serve a progressive, leftist agenda and clientele. But when I searched, I was fascinated by an article about PPP on the liberal New Republic website. Nate Cohn wrote the article, "There's Something Wrong With America's Premier Liberal Pollster: The problem with PPP's methodology," back in September 2013.

The entire article is a fascinating insight into the possible sources of bias and confusion involving just one polling agency. While all pollsters may suffer fluke results from time to time, there is no doubt that agencies like Gallup and Rasmussen would prefer not to have said of them what Cohn concluded about PPP:
“To be fair, even the best pollsters aren’t perfectly transparent. Perhaps that’s especially so when constructing and tinkering with likely voter models. And it’s also possible that PPP would still be a decent pollster if it used a more defensible approach. But PPP’s opacity and flexibility goes too far. In employing amateurish weighting techniques, withholding controversial methodological details, or deleting questions to avoid scrutiny, the firm does not inspire trust. We need pollsters taking representative samples with a rigorous and dependable methodology. Unfortunately, that’s not PPP.”

When we were in school, and math teachers introduced us to the idea of probability and statistics, one example often used was the task of selecting a pair of matching socks out of a sock drawer in a darkened room. 

For a beginner's exercise, the task was always simplified: you own 10 pairs of matching black socks and 10 pairs of matching blue socks; you need a pair to wear for work, but you don't want to wake your spouse by turning on the bedroom light. What is the minimum number of socks you must take from the drawer to be sure that you have a matching pair?

Of course, one sock is the wrong answer because you lack a pair, and two socks is the wrong answer because you might have one black and one blue sock. So, in that simple example, it turns out that you have to select 3 socks in order to be sure to have at least one matching pair.

Polling has something to do with this problem, but now, instead of just black and blue socks, you have to add in red, green, white, polka-dotted, and striped socks. With presidential polling, because elections really are determined state by state, you have to complicate the problem further by having 50 sock drawers and the task of selecting a matching pair from a sufficient number of drawers to ensure that your feet have a truly presidential feel to them.
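
The classroom arithmetic, by the way, is just the pigeonhole principle: with k sock colors, k + 1 socks guarantee a match. A few lines of Python can brute-force the check; this is my own illustration, not anything a pollster publishes:

```python
from itertools import product

def guarantees_pair(num_colors, draw_size):
    """Check by brute force whether EVERY possible draw of draw_size socks
    from num_colors colors contains at least two socks of the same color."""
    # A draw contains a repeated color exactly when its set of distinct
    # colors is smaller than the number of socks drawn.
    return all(len(set(draw)) < len(draw)
               for draw in product(range(num_colors), repeat=draw_size))

print(guarantees_pair(2, 2))  # False: one black and one blue is possible
print(guarantees_pair(2, 3))  # True: the classroom answer of three socks
print(guarantees_pair(7, 8))  # True: with seven colors, eight socks force a pair
```

However many colors the drawer holds, one sock more than that number settles the matter.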

In deciding questions like how many socks you have to pull out, we are actually in the same kind of business as those who design polls.

For example, pollsters try to differentiate between categories of people: all residents, residents who have registered to vote, and residents who have voted in recent elections. In addition, because "exit polls" allow pollsters to identify other factors, such as gender, race, party affiliation, age, and ethnicity, pollsters who adjust their results often take into account the proportions of likely voters, white (or black or Latino) voters, and Republican (or Democrat) voters, as sketched below.
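
To make that adjustment concrete, here is a minimal sketch of simple cell weighting. Everything in it is invented for illustration; the sample counts, the target percentages, and the weighting scheme itself are my own assumptions, not any pollster's published method:

```python
from collections import Counter

# Hypothetical raw interviews: (party, response) pairs. All numbers invented.
sample = ([("R", "approve")] * 250 + [("R", "disapprove")] * 100 +
          [("D", "approve")] * 150 + [("D", "disapprove")] * 300 +
          [("I", "approve")] * 90 + [("I", "disapprove")] * 110)

# Target party mix, e.g. taken from exit polls: 31% R, 37% D, 32% other.
targets = {"R": 0.31, "D": 0.37, "I": 0.32}

n = len(sample)
party_counts = Counter(party for party, _ in sample)

# Each respondent's weight scales his party's share up or down to the target.
weights = {party: targets[party] / (party_counts[party] / n) for party in targets}

raw_approval = sum(resp == "approve" for _, resp in sample) / n
weighted_approval = sum(weights[party] for party, resp in sample
                        if resp == "approve") / n

print(f"raw approval:      {raw_approval:.1%}")       # 49.0%
print(f"weighted approval: {weighted_approval:.1%}")  # 48.9%
```

Notice that two pollsters starting from the very same interviews can report different toplines simply by choosing different targets, which is precisely the sort of tinkering Cohn's article criticizes.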

Gathering all the information you can will help you decide how much weight to put on a poll result that pretends America is just one great big sock drawer (rather than 50), consisting of about 31% Republican socks and 37% Democratic socks (the party affiliation of exit-polled voters in a particular election), or the like. Smarter reading will lead to smarter understanding and liberate you from the spoon-feeding of newsreaders. That kind of empowerment will not necessarily prove newsreaders wrong, nor will it prove polls correct. Rather, it will position you to make persuasive cases for the value (or lack of value) provided by particular polls.