Some academic thoughts on the poll aggregators

November 8, 2012

(This article was first published on Simply Statistics, and kindly contributed to R-bloggers)

The night of the presidential election I wrote a post celebrating the victory of data over punditry. I was motivated by the personal attacks made against Nate Silver by pundits who do not understand statistics. The post generated a bit of (justified) nerdrage (see the comment section). So here I clarify a couple of things, not as a member of Nate Silver's fan club (my mancrush started with PECOTA, not fivethirtyeight) but as an applied statistician.

The main reason fivethirtyeight predicts election results so well is the averaging of polls, an idea that was around well before fivethirtyeight started. In fact, it is a version of meta-analysis, which has been around for hundreds of years and is commonly used to improve the results of clinical trials. This election cycle several groups, including Sam Wang (Princeton Election Consortium), Simon Jackman (pollster), and Drew Linzer (VOTAMATIC), predicted the election perfectly using this trick.

While each group adds its own set of bells and whistles, most of the gains come from the aggregation of polls and from understanding the concept of a standard error. Note that while each individual poll may be a bit biased, historical data show that these biases average out to 0, so by taking the average you obtain a close-to-unbiased estimate. Because there are so many pollsters, each conducting several polls, you can also estimate the standard error of your estimate quite well, empirically rather than theoretically. I include a plot below that provides evidence that bias is not an issue and that the standard errors are well estimated. The dashed lines are at +/- 2 standard errors based on the average (across all states) standard error reported by fivethirtyeight. Note that the variability is smaller for the battleground states, where more polls were conducted (this is consistent with the state-specific standard errors reported by fivethirtyeight).
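The mechanics of the argument can be sketched with a small simulation. All the numbers below (the true margin, the size of the house effects, the sampling noise, the number of pollsters and polls) are hypothetical assumptions for illustration, not anything from fivethirtyeight's actual model; the point is only that house biases with mean zero wash out in the average, and that the spread of the polls themselves yields an empirical standard error.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: the true margin (candidate A minus candidate B) is
# 2 points. Each polling house has a persistent "house effect" drawn with
# mean zero, and each individual poll adds sampling noise on top of it.
TRUE_MARGIN = 2.0
N_HOUSES = 20
POLLS_PER_HOUSE = 5

polls = []
for _ in range(N_HOUSES):
    house_bias = random.gauss(0, 1.5)        # house effect, mean zero
    for _ in range(POLLS_PER_HOUSE):
        sampling_noise = random.gauss(0, 3)  # noise of a single poll
        polls.append(TRUE_MARGIN + house_bias + sampling_noise)

# The aggregate: a plain average of all polls, with a standard error
# estimated empirically from the observed spread of the polls.
avg = statistics.mean(polls)
se = statistics.stdev(polls) / len(polls) ** 0.5

print(f"single-poll spread (sd): {statistics.stdev(polls):.2f}")
print(f"aggregated estimate:     {avg:.2f} +/- {2 * se:.2f}")
```

Any single poll here is off by several points on average, yet the aggregate lands close to the true margin with a much tighter interval, which is the whole trick.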

Finally, there is the issue of the use of the word “probability”. Obviously one can correctly state that there is a 90% chance of observing event A and then have it not happen: Romney could have won and the aggregators could still have been “right”. Also, frequentists complain when we talk about the probability of an event that will only happen once. I actually don’t like getting into this philosophical discussion (Gelman has some thoughts worth reading), and I cut people who write for the masses some slack. If the aggregators consistently outperform the pundits in their predictions, I have no problem with them using the word “probability” in their reports. I look forward to some of the post-election analysis of all this.


