Why Nate Silver and others predicted the election perfectly


This chart by Rafa Irizarry at Simply Statistics pretty much sums up the amount of egg on the faces of those who questioned Nate Silver’s prediction that President Obama had a greater than 90 percent chance of winning reelection on Tuesday night. By and large, you’ll notice, Silver’s predicted chances of victory in any given state also align nicely with the percentage of the vote the president received in each state. The bottom line: True data analysis doesn’t care about politics, it cares about being correct.

It’s worth mentioning that Silver wasn’t the only statistician to perfectly predict the presidential race, either. In terms of Electoral College votes, Simon Jackman of Pollster did so, as did Josh Putnam of Davidson College. Save for Florida, Sam Wang of the Princeton Election Consortium fared very well, too, and actually nailed the popular vote split. Slate has a nice interactive chart showing how various statisticians and pundits fared in their predictions; there certainly are more predictions and models floating around that haven’t been included.

The important takeaway, however, is that the people who nailed the outcome didn’t achieve their results by cherry-picking data that served their political interests. They did it because they’re professional statisticians whose success depends on accurately predicting the outcomes of events, not on cheerleading for the outcome they might personally desire or that will drive the highest ratings. Even if the data they’re working with is somewhat biased — as some individuals and organizations suggested to me is the case — the science comes in being able to take the data sources for what they are and accurately weigh their relevancy.

In business, this is the shift in thinking that’s driving the movement toward big data and advanced analytics. Forward-thinking companies want to use data to make the right decisions, not to back up their predetermined decisions based largely on gut instinct. But there’s an unprecedented amount of data at their disposal — some good, some bad — which is why data scientists who can figure out what sources to use and how to use them are in such high demand right now.

So in 2014 and 2016, pollsters are going to keep polling, statisticians are going to keep analyzing those polls (and whatever other factors they choose to include) and, maybe, pundits and the media will pay some attention to what they’re saying. Probabilities aren’t promises etched in stone, and a vote either way can change the face of close elections like this one. But no one should be surprised when someone whose only job is to get it right does just that.

Feature image courtesy of Flickr user Carolyn Coles.



I predicted the outcome with 100% accuracy myself. Romney wasn’t “supposed” to win, and never intended to win. It’s just one of those things you don’t see because you don’t want to see.


Ah. The pungent scent of sour grapes. This is not about who won and who lost, but about the nothing-short-of-amazing abilities of modern analytics and those who know its subtleties. *You* get over it.

Aswath Rao

It should be noted that Nate Silver also cherry-picks his polls. After all, he places weights on the polls and he has some other “secret sauces,” whatever they may be. Any analysis of such data analytics should also note his wrong calls, such as the Senate races in Montana and ND.


Sure, but he also reports the probability of his predictions — he *should* get some of his calls wrong because he tells us what his confidence in those predictions is. And then we can judge whether he has a good grasp on the reliability of his predictions — is he over- or under-confident in his predictive ability?


Sure, but he also tells us the probability of his predictions, which allows us to assess whether he is over- or under-confident in his own predictive ability. E.g., we never hear a pundit say, “I’m 60% confident in my predictions, so 4 out of 10 of my predictions will be wrong.”

Aswath Rao

He was 98% confident in his call on the ND Senate seat. So should I chalk it up as a failure or consider it part of the 2% event? What about the Montana race? How many strikes will be required before reliability can be questioned?
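The question of how many misses a well-calibrated forecaster should be “allowed” can be made concrete. As a rough sketch — using made-up confidence numbers, not Silver’s actual figures — you can compute the exact distribution of the number of wrong calls implied by a forecaster’s stated confidences (a Poisson-binomial distribution) and ask how surprising the observed miss count is:

```python
# Sketch: are a forecaster's stated confidences consistent with the
# misses we observe? The confidences below are hypothetical, chosen
# only for illustration -- they are not Silver's actual numbers.
confidences = [0.98, 0.92, 0.90, 0.85, 0.98, 0.95, 0.80, 0.97, 0.99, 0.93]
observed_misses = 2  # suppose two of the ten calls went wrong

# Expected number of misses if the confidences are well calibrated
expected_misses = sum(1 - p for p in confidences)

# Exact distribution of the miss count (Poisson-binomial) built up
# one race at a time: dist[k] = P(exactly k misses so far)
dist = [1.0]
for p in confidences:
    q = 1 - p  # probability this particular call misses
    dist = [
        (dist[k] * p if k < len(dist) else 0.0)
        + (dist[k - 1] * q if k >= 1 else 0.0)
        for k in range(len(dist) + 1)
    ]

# How surprising is the observed number of misses (or worse)?
p_at_least = sum(dist[observed_misses:])

print(f"expected misses: {expected_misses:.2f}")
print(f"P(>= {observed_misses} misses): {p_at_least:.3f}")
```

If that tail probability is tiny, the misses are evidence of overconfidence; if it is moderate, a couple of wrong calls are exactly what the stated probabilities predicted — which is the commenter’s point about the “2% event.”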

Comments are closed.