Monday, February 25, 2008

Poll Averages

So, apparently, the Wall Street Journal has decided to horn in on my turf with an article about how simple averages of polling data can have misleading results. From the article:

Polls have different sample sizes, yet in the composite, those with more respondents are weighted the same.

Regular readers of this blog already know full well that you can't simply combine two polls together as if both had the same accuracy, and that a better (though still not perfect) method is to take account of the varying sample sizes.
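
To make the difference concrete, here is a minimal sketch in Python of a simple average versus a sample-size-weighted average. The poll numbers are made up for illustration, and the weighting shown is just the basic idea, not my exact method:

```python
# Illustrative only: hypothetical poll results for one candidate.
polls = [
    {"candidate_pct": 48.0, "sample_size": 1500},
    {"candidate_pct": 44.0, "sample_size": 400},
    {"candidate_pct": 51.0, "sample_size": 900},
]

# Simple average: every poll counts equally, no matter how many
# people it actually interviewed.
simple_avg = sum(p["candidate_pct"] for p in polls) / len(polls)

# Sample-size-weighted average: a poll of 1,500 respondents counts
# proportionally more than a poll of 400.
total_n = sum(p["sample_size"] for p in polls)
weighted_avg = sum(p["candidate_pct"] * p["sample_size"] for p in polls) / total_n

print(f"Simple average:   {simple_avg:.1f}%")   # treats all polls alike
print(f"Weighted average: {weighted_avg:.1f}%") # leans on the bigger samples
```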

The article goes on to mention several other problems with averaging poll results:

They are fielded at different times, some before respondents have absorbed the results from other states' primaries. They cover different populations, especially during primaries when turnout is traditionally lower. It's expensive to reach the target number of likely voters, so some pollsters apply looser screens. Also, pollsters apply different weights to adjust for voters they've missed. And wording of questions can differ, which makes it especially tricky to count undecided voters. Even identifying these differences isn't easy, as some of the included polls aren't adequately footnoted.

These are all good points. There is no question that using a simple average to combine poll results isn't particularly precise. Even my preferred method suffers from most of the drawbacks described above. Nevertheless, good public opinion polls offer us a reasonably accurate snapshot of the state of an election, and several polls give us several snapshots. Of course these snapshots are taken from slightly different angles and with somewhat different lenses, and the subject is a moving target. So we can choose to either look at each photo as if it is totally independent of the others, or we can do our best to put them all together into a more coherent picture, even though they don't fit perfectly.

Obviously, I choose the latter, caveats and all.
