Polling trends have you worried? Don’t be.

What, me worry? Maybe not so much.

I’m writing one week before “Election Day”, where I use the quotes to acknowledge that early voting here in Arizona has been going on for almost 3 weeks. Most of my friends and like-minded acquaintances are pretty freaked out. In our ideological bubble, we are repelled by the possibility of a second Trump presidency, one that could be far worse than the first. We fear that the pundits are right in assuming that the Orange One would have fewer guard rails, and that the nation might be plunged into dictatorship and worse.

I’m not going to dispute those prospects. Like everyone else, I can’t know what the country would really be like with Trump at the helm again; to paraphrase what physicist Niels Bohr supposedly said, “It’s hard to predict, especially the future.” But The Former Guy only cares about himself, and even when his plans could accidentally help our country, he lacks the ability to carry them out effectively.

Yes, we are at a serious and dangerous moment in our politics. But perhaps we can reduce one source of our anxiety: our overamped attention to polls.

Polling is complex

Most polls (and polling averages) show an extremely close Presidential race. We often hear that, in the swing states like Arizona that will determine the Electoral College outcome, the percentage gap between the candidates is “within the margin of error.” And yet, when a poll (or a polling average) shifts by a percent or less one way or the other, pundits often treat it as a significant trend. How sensible is that?

This topic can be quite complex, but here I just want to outline some fairly simple features of modern polling. They suggest we shouldn’t take these small shifts too seriously.

First, what is the “margin of error”? It is a bare-bones statistical measure. It doesn’t reflect any deeper factors, like shifts in the electorate, cross-party voters, or anything else; it is simply a commonly used calculation of the range of values that could arise purely from random sampling, under some simple statistical assumptions.[1] You can come pretty close just by taking 1 over the square root of the sample size. For instance, with 400 respondents (a common size for our state-level surveys), that would be 1 over 20, or a 5% margin of error; that’s actually plus or minus 5%.
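For readers who want to check the arithmetic, here is a minimal sketch in Python (assuming the conventional 95% confidence level and a roughly 50/50 race, the worst case for sampling error) comparing the rule of thumb to the textbook formula:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 400
print(f"Rule of thumb 1/sqrt(n): {1 / math.sqrt(n):.1%}")    # 5.0%
print(f"Textbook 95% formula:    {margin_of_error(n):.1%}")  # ~4.9%
```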

So, if someone says the race has shifted by 1 or 2%, even this simple measure tells us the change may be nothing more than random noise.

But it’s more than statistical error

There are many other factors that can make a poll far less accurate than this suggests, and the typical errors of late polling averages in previous elections bear this out. Simple averages like the one used by Real Clear Politics include many partisan polls that may be released with ulterior motives, attempting to make a candidate look much stronger or weaker than the actual voters will judge them. And as many have pointed out, even a carefully curated polling average is still only a snapshot: it can only reflect what respondents are saying at the time, which might be quite different from how they actually vote on their private and anonymous ballots.

There is another factor that particularly concerns me: the weighting of the raw data based on the assumed distribution of likely voters. These days, the vast majority of people contacted by pollsters do not respond. This means that the actual mix of respondents will probably not match the mix of people the pollster thinks will actually vote, across categories such as gender, race, education level, and age. Consequently, in a “likely voters” poll, the data are weighted to reflect the anticipated proportions for those categories. For instance, if women make up a smaller share of the sample than of the expected electorate, the weighting increases the value of each female respondent.
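As a rough illustration (the numbers below are invented, and real pollsters weight on several categories at once rather than just one), the weight is simply the ratio of a group’s expected share of the electorate to its share of the sample:

```python
# Toy example of demographic weighting; the shares are hypothetical.
def weight(sample_share, expected_share):
    """Weight given to each respondent in a group so that the weighted
    sample matches the group's expected share of the electorate."""
    return expected_share / sample_share

# Suppose women are 35% of respondents but expected to be 53% of voters.
w_women = weight(sample_share=0.35, expected_share=0.53)
w_men = weight(sample_share=0.65, expected_share=0.47)
print(f"Each woman in the sample counts as {w_women:.2f} respondents")  # ~1.51
print(f"Each man in the sample counts as {w_men:.2f} respondents")      # ~0.72
```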

But we don’t really know the proportions for the people who will actually vote. In a given competitive state, we might assume that 50% of the voters will be college educated, but the share in the actual electorate could turn out to be 40% or 60%. And the same is true for all of the other categories that analyses of previous elections have found to be important.
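To see how much that uncertainty matters, here is a toy calculation with made-up support levels for each group; the only thing that changes across the three lines of output is the assumed share of college-educated voters:

```python
# Hypothetical poll: college-educated respondents favor candidate A 58/42,
# non-college respondents favor candidate B 56/44.
support_college, support_noncollege = 0.58, 0.44

for college_share in (0.40, 0.50, 0.60):
    topline = (college_share * support_college
               + (1 - college_share) * support_noncollege)
    print(f"Assumed {college_share:.0%} college-educated: "
          f"candidate A at {topline:.1%}")
# Moving the assumption from 40% to 60% swings candidate A
# from about 49.6% to about 52.4%, enough to flip a close race.
```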

If polls don’t predict, what can we do?

The bottom line is that, when it comes to predicting the outcome of close elections, the polls give us very little insight, beyond telling us that the race might be close.

This perspective can’t relieve the anxiety that comes from fearing the potential success of a misogynistic, racist con man; but it does suggest that there is no point in obsessing over apparent shifts in the polls that seem to make that success more likely in one state or another.

As Simon Rosenberg often suggests, the best response to this concern is to work. Make sure that your less-involved friends and family vote. Volunteer to help campaigns, not just the presidential race but also the down-ballot campaigns that can often increase turnout. Don’t dive down the rabbit hole of polls and forecasts. The pollsters and poll aggregators don’t know what will happen this cycle any more than you do.


[1] For those who care, typically the distribution of this random variable is assumed to be a normal (Gaussian or bell curve) approximation to a binomial distribution. Happy now?
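For the truly dedicated, here is the standard textbook version in symbols: for a sample of n respondents and a true support level p, the normal approximation gives

```latex
\mathrm{MoE}_{95\%} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}}
  \;\overset{p=0.5}{=}\; \frac{0.98}{\sqrt{n}} \approx \frac{1}{\sqrt{n}},
```

which is why dividing 1 by the square root of the sample size comes so close to the published margin of error.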

