October 24, 2008

By Jay Cost

### A Note on the Polls

I've received several emails from people asking about the polls. The national polls do seem pretty variable, so I thought I would toss in my two cents on them.

First, we need a short primer on basic statistics. Real Clear Politics offers an unweighted average, or mean, of the polls. As long as there is more than one poll in the average, we can also calculate the standard deviation, which is one of the most important concepts in inferential statistics. The standard deviation simply tells us how much the polls are disagreeing with one another.

For instance, suppose we are testing the strength of Candidate A. We have 32 polls, which we can arrange graphically in what is called a histogram. Our horizontal axis shows the electoral strength of Candidate A. Our vertical axis shows how many polls we found with Candidate A pulling in that much of the vote. Let's say it looks like this:

[Histogram: 32 polls clustered tightly around 50%]

The average is 50%. The standard deviation is 1.6, which basically implies that the typical distance between a given poll and the average is 1.6. That's a pretty small number, and it squares with how concentrated the polls are around this average.

Now, suppose we have a distribution that looks like this:

[Histogram: same average of 50%, but with the polls more dispersed]

We get the same average, 50%. However, this time the observations are more dispersed around it. Here, the standard deviation is 3.0. That's higher, and we can see why. The individual polls vary more from one another. That's what the standard deviation shows us - how much the polls vary around the average.
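Here is that arithmetic in a short Python sketch, using two invented sets of nine polls - one clustered, one dispersed - both averaging 50% (the numbers are made up for illustration, not drawn from any real poll):

```python
# Two invented sets of polls for Candidate A: both average 50%,
# but the second set is far more dispersed.
tight = [48, 48, 49, 50, 50, 50, 51, 52, 52]
dispersed = [45, 47, 48, 50, 50, 50, 52, 53, 55]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Sample standard deviation: roughly, the typical distance
    # between a given poll and the average.
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

print(mean(tight), stdev(tight))          # 50.0 and a small spread
print(mean(dispersed), stdev(dispersed))  # 50.0 and a larger spread
```

Same average both times; only the standard deviation tells the two pictures apart.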

A final point to get us ready. We might examine the spread between the two candidates - Obama is up 7 versus up 1 or what have you. This is certainly a valuable number to look at. Indeed, that's what we all care about! However, I am going to look at a candidate's share of the vote - not the spread. Ultimately, our analysis is going to rely upon each poll's reported margin of error. Those numbers do not refer to the spreads, but to each candidate's individual numbers. So, horse race polls actually have two margins of error - one for each candidate. Because the spread is the difference between them, it will be *more variable* than either candidate's individual numbers.
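To see why, here is a sketch of the arithmetic with illustrative numbers (a 1,000-person sample showing 50% to 43% - not any particular poll). When both shares come from the same sample, the variance of their difference picks up an extra covariance term, so the spread's margin of error is larger than either candidate's:

```python
import math

def moe_share(p, n, z=1.96):
    # 95% margin of error for one candidate's share of the vote.
    return z * math.sqrt(p * (1 - p) / n)

def moe_spread(p_a, p_b, n, z=1.96):
    # 95% margin of error for the spread (p_a - p_b) when both shares
    # are measured in the same sample. The extra +2*p_a*p_b covariance
    # term is what makes the spread more variable than either share.
    var = (p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n
    return z * math.sqrt(var)

print(moe_share(0.50, 1000))           # about 0.031, i.e. roughly 3.1 points
print(moe_spread(0.50, 0.43, 1000))    # about 0.060 - nearly double
```

This is why a poll with a "3-point margin of error" can see its reported spread swing by far more than 3 points without anything unusual happening.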

With this stuff in mind, let's focus on some hard numbers. As of this writing, Barack Obama's share of the vote in the RCP average is 50.3%. His standard deviation is 2.7. For McCain, whose average is 42.5%, the standard deviation is 2.3. For comparative purposes, I looked at the polls RCP was using from its 2004 averages. For roughly the same time in that cycle (10/17/04 to 10/24/04) Bush's standard deviation was 1.8; Kerry's was 1.7. This means that there is more disagreement among pollsters now than there was in 2004.

We can push this analysis further if we examine the distribution of each candidate's poll position. We'll first create a histogram of Obama's polling.

[Histogram: Obama's share of the vote across the polls]

As we can see, most of the values cluster around the 49-54 range. However, there is a "tail" on the left-hand side. That's called a negative skew. That's a bit surprising. It's different from what we had in our stylized pictures for Candidate A.

Now, let's examine the distribution of McCain's support.

[Histogram: McCain's share of the vote across the polls]

There's no tail here, but the picture is still somewhat surprising. The polls are spread out fairly evenly across a broad range of values, with little clustering in the center.

Of course, a visual inspection can only take us so far. When we have only a few observations - and here we "only" have 15 - the true shape of the picture might not be clear. If we were to add another 5 or so polls, we might see something more like those stylized pictures presented above.

So, let's push the analysis a little bit further by looking at specific polls. We can test to see if the polls are separated from the average by a statistically significant amount. Again, since we're dealing with each candidate's individual poll positions - we'll test each candidate's number in an individual poll against the RCP average. To make sure we dot all our "i's" and cross all our "t's," we'll supplement the RCP average with a weighted average of the polls, which takes into account the number of observations when averaging the polls together.
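Here is a minimal sketch of that test, using made-up polls (the share/sample-size pairs are hypothetical). Each poll's number for a candidate is compared against the size-weighted average, using that poll's own 95% margin of error:

```python
import math

def weighted_average(shares, sizes):
    # Average of poll results, weighted by each poll's sample size,
    # so bigger polls count for more.
    return sum(p * n for p, n in zip(shares, sizes)) / sum(sizes)

def poll_vs_average(p_poll, n_poll, p_avg, z=1.96):
    # Does this poll's number for a candidate differ from the average
    # by more than the poll's own 95% margin of error for that share?
    moe = z * math.sqrt(p_poll * (1 - p_poll) / n_poll)
    return abs(p_poll - p_avg) > moe

# Hypothetical polls: (candidate's share, sample size).
polls = [(0.52, 800), (0.47, 1200), (0.58, 900)]
avg = weighted_average([p for p, _ in polls], [n for _, n in polls])
flags = [poll_vs_average(p, n, avg) for p, n in polls]
print(avg, flags)
```

With these invented numbers, the first poll sits comfortably inside its margin of error while the other two fall significantly outside the weighted average - the same kind of flag applied to the fifteen real polls below.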

Of the fifteen polls in the RCP average, four fall significantly outside the average for Obama and five do so for McCain. Meanwhile, three polls are right at the boundary of significance (one for Obama, two for McCain). The rules of statistics being what they are, we should expect a few polls here or there to fall outside the average by a statistically significant amount. But this is a lot. 40% of all our tests produced results around or outside the acceptable range.
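A rough way to see just how much: if each test had an independent 5% chance of a false alarm (an assumption - the tests all share the same average, so they are not truly independent), the odds of nine or more of thirty tests failing by chance alone can be read off the binomial distribution:

```python
from math import comb

def binom_tail(k, n, p):
    # P(X >= k) when X ~ Binomial(n, p).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 15 polls x 2 candidates = 30 tests at the 95% level. Chance alone
# should produce about 30 * 0.05 = 1.5 failures - not nine.
print(binom_tail(9, 30, 0.05))
```

Under that (admittedly simplified) assumption, nine or more chance failures is an extreme long shot.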

So, we have made three observations: (a) relative to 2004, the standard deviations of Obama's and McCain's polls are high, indicating more disagreement among pollsters at a similar point in this cycle; (b) the shape of the distribution of each candidate's poll position is not what we might expect; (c) multiple polls are separated from the RCP average by statistically significant differences.

Combined, these considerations suggest that this variation cannot be chalked up to typical statistical "noise." Instead, it is more likely that pollsters are disagreeing with each other in their sampling methodologies. In other words, different pollsters have different "visions" of what the electorate will look like on November 4th, and these visions are affecting their results.

Think of it this way. Suppose there is a bag of 130 million red and blue marbles that all the pollsters are sampling from. One pollster will pull a sample of 750 marbles, another a sample of 2,500, and so on. Oftentimes, they are going to pull different results from the bag. One pollster might pull 53% blue, another might pull 52%, and so on. However, as long as they are all pulling marbles from the same bag, the results will probably not differ too wildly. And after enough time, the distribution of those pulls should look something like those idealized pictures of Candidate A.

However, what if each pollster had a slightly different bag s/he was pulling from? In that situation, we should find more divergent results. That's basically what I'm suggesting here - that the bags the pollsters are pulling from are different. That's producing some of these larger-than-expected variations.
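The marble-bag story is easy to simulate. The sketch below uses invented parameters - a 52% blue bag, 750-marble samples, and 100 simulated polls per scenario (more than fifteen, just to make the pattern visible):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def poll_from_bag(true_share, sample_size):
    # One "poll": draw sample_size marbles from a bag in which
    # true_share of the marbles are blue; report the blue percentage.
    blues = sum(random.random() < true_share for _ in range(sample_size))
    return 100 * blues / sample_size

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Scenario 1: every pollster samples the SAME bag (52% blue).
# Any spread here is pure sampling "noise."
same_bag = [poll_from_bag(0.52, 750) for _ in range(100)]

# Scenario 2: each pollster has a slightly DIFFERENT bag - a different
# vision of the electorate, somewhere between 47% and 57% blue.
different_bags = [poll_from_bag(random.uniform(0.47, 0.57), 750)
                  for _ in range(100)]

print(round(stdev(same_bag), 2), round(stdev(different_bags), 2))
```

The same-bag spread comes out near the textbook sampling error for a 750-marble draw (about 1.8 points); the different-bags spread is noticeably wider. Larger-than-expected variation is the signature of pollsters sampling from different bags.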

Now, I want to be clear: I am not making any claims about which pollster has the better sample of the electorate. I'm not singling anybody out for being right or wrong because frankly I do not know. I'm just pointing out that there seem to be disagreements among them that cannot be explained by random variation.

Importantly, there is one thing that the polls do not disagree on: the fact that Obama has a lead. All the polls show that. Also, we might begin to see convergence here soon. If pollsters have different methods for predicting what the electorate will look like, those methods might produce similar-looking "electorates" by the time we get to Election Day. At least for now, though, there is disagreement - not about who has the lead, but about how big that lead is.