How he predicted the 2012 election, getting it 100% right

 

How He Got It Right

January 10, 2013

Andrew Hacker


The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t
by Nate Silver
Penguin, 534 pp., $27.95                                                  

The Physics of Wall Street: A Brief History of Predicting the Unpredictable
by James Owen Weatherall
Houghton Mifflin Harcourt, 286 pp., $27.00                                                  

Antifragile: Things That Gain from Disorder
by Nassim Nicholas Taleb
Random House, 519 pp., $30.00                                                  


Statistician Nate Silver, who correctly predicted the winner of all fifty states and the District of Columbia in the 2012 presidential election

1.

Nate Silver called every state correctly in the last presidential race, and was wrong about only one in 2008. In 2012 he predicted Obama’s total of the popular vote within one tenth of a percent of the actual figure. His powers of prediction seemed uncanny. In his early and sustained prediction of an Obama victory, he was ahead of most polling organizations and my fellow political scientists. But buyers of his book, The Signal and the Noise, now a deserved best seller, may be in for something of a surprise. There’s only a short chapter on predicting elections, briefer than ones on baseball, weather, and chess. In fact, he’s written a serious treatise about the craft of prediction—without academic mathematics—cheerily aimed at lay readers. Silver’s coverage is polymathic, ranging from poker and earthquakes to climate change and terrorism.

We learn that while more statistics per capita are collected for baseball than perhaps any other human activity, seasoned scouts still surpass algorithms in predicting the performance of players. Since poker depends as much on luck as on skill, professionals make a living by having well-heeled amateurs at the table. The lesson from a long chapter on earthquakes is that while we’re good at measuring them, they’re “not really predictable at all.” Much the same caution holds for economists, whose forecasts of next year’s growth are seldom correct. Their models may be elegant, Silver says, but “their raw data isn’t much good.”

The most striking success has been in forecasting where hurricanes will hit. Over the last twenty-five years, the ability to pinpoint landfalls has increased twelvefold. At the same time, Silver says, newscasts purposely overpredict rain, since they know their listeners will be grateful when they find they don’t need umbrellas. While he doesn’t dismiss “highly mathematical and data-driven techniques,” he cautions climate modelers not to give out precise changes in temperature and ocean levels. He tells of attending a conference on terrorism at which a Coca-Cola marketing executive and a dating service consultant were asked for hints on how to identify suicide bombers.

Much is made of ours being an era of Big Data. Silver passes on an estimate from IBM that 2.5 quintillion (that’s seventeen zeros) new bytes (sequences of eight binary digits that each encode a single character of text in a computer) of data are being created every day, representing everything from the brand of toothpaste you bought yesterday to your location when you called a friend this morning. Such information can be put together to fashion personal profiles, which Amazon and Google are already doing in order to target advertisements more accurately. Obama’s tech-savvy workers did something similar, notably in identifying voters who needed extra prompting to go to the polls.1

Those daily quintillions are what led to Silver’s title. “Signals” are facts we want and need, such as those that will help us detect incipient shoe bombers. “Noise” is everything else, usually extraneous information that impedes or misleads our search for signals. Silver makes the failure to forecast September 11 a telling example.

But first, The Signal and the Noise is in large part a homage to Thomas Bayes (1701–1761), a long-neglected statistical scholar, especially by the university departments concerned with statistical methods. The Bayesian approach to probability is essentially simple: start by approximating the odds of something happening, then alter that figure as more findings come in. So it’s wholly empirical, rather than building edifices of equations.2 Silver has a diverting example on whether your spouse may be cheating. You might start with an out-of-the-air 4 percent likelihood. But a strange undergarment could raise it to 50 percent, after which the game’s afoot. This has importance, Silver suggests, because officials charged with anticipating terrorist acts had not conjured a Bayesian “prior” about the possible use of airplanes.
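
For readers curious about the arithmetic, here is a minimal sketch of that updating rule in Python. The 4 percent prior is Silver's; the two likelihoods are my own illustrative assumptions, chosen so that the evidence pushes the figure to roughly the 50 percent mentioned above, not numbers taken from his book.

    # Bayes' rule: revise a prior probability in light of new evidence.
    # The likelihoods below are assumed for illustration only.
    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence)."""
        numerator = prior * p_evidence_if_true
        denominator = numerator + (1 - prior) * p_evidence_if_false
        return numerator / denominator

    prior = 0.04  # the out-of-the-air suspicion of cheating
    posterior = bayes_update(prior,
                             p_evidence_if_true=0.48,   # strange garment, if cheating
                             p_evidence_if_false=0.02)  # strange garment, if innocent
    print(f"updated probability: {posterior:.0%}")       # prints 50%

    # Applied to the next piece of evidence, this posterior becomes the new
    # prior, which is the wholly empirical loop Silver favors.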

Silver is prepared to say, “We had some reason to think that an attack on the scale of September 11 was possible.” His Bayesian “prior” is that airplanes had already been targeted, in the cases of an Air India flight in 1985 and Pan Am’s over Lockerbie three years later, albeit using secreted bombs, plus later attempts that didn’t succeed. At the least, a chart with, say, a 4 percent likelihood of an attack should have been on someone’s wall. Granted, what comes in as intelligence is largely “noise.” (Most intercepted conversations are about plans for dinner.) Still, in the summer of 2001, staff members at a Minnesota flight school told FBI agents of a Moroccan-born student who wanted to learn to pilot a Boeing 747 in midair, skipping lessons on taking off and landing; an instructor added that a fuel-laden plane could make a horrific weapon. Some FBI agents took the threat of Zacarias Moussaoui seriously, but several requests for search and wiretap warrants were denied. At the least, these “signals” should have raised the probability of an attack using an airplane to, say, 15 percent, prompting visits to other flight schools.

Silver’s “mathematics of terrorism” may be stretching the odds a bit. Many of those daily quintillion digits flow into the FBI and CIA, not to mention the departments of State and Defense. To follow all of them up is patently impossible, with only a small fraction getting even a cursory second look. It’s bemusing that two recent revelations of marital infidelity—Eliot Spitzer and David Petraeus—arose from inquiries having other purposes. Plus there’s the question of how many investigators and investigations we want to have, as more searching will inevitably touch more of us.

Yet in the end, Silver’s claims are quite modest. Indeed, he could well have phrased his subtitle “why most predictions fail.” It’s simply because “the volume of information is increasing exponentially.”

    There is no reason to conclude that the affairs of man are becoming more predictable. The opposite may well be true. The same sciences that uncover the laws of nature are making the organization of society more complex.

I’d only add that it’s not just what sciences are finding that makes the world seem more complex. Shifts in the structure of occupations, abetted by more college degrees, have increased the number of positions deemed to be professional. If entrepreneurs tend to be assessed by how much money they amass, professionals are rated by the presumed complexity of what they know and do. So to retain or raise an occupation’s status, tasks are made more mysterious, usually by taking what’s really simple and adding obfuscating layers. The very sciences Silver cites—especially those of a social sort—rank among the culprits.

2.

Nate Silver is known not so much for predicting who will win elections, but for how close he comes to the actual results. His final 2012 forecast gave Obama 50.8 percent of the popular vote, almost identical with his eventual figure of 50.9 percent. This kind of precision is striking. A more typical projection may warn that it has a three-point margin of error either way, meaning a candidate accorded 52 percent could end anywhere between 55 percent and 49 percent. Or, fearful of making a wrong call, as in 2000, polling agencies will claim that the outcome is too close to foretell. Still, it’s too early to hail a new statistical science. As can be seen in Table A, Rasmussen’s and Gallup’s final polls predicted that Romney would be the winner, while the Boston Herald gave its state’s senate race to Scott Brown.

[Table A: final 2012 poll predictions, not reproduced here]

In fact, I am impressed when polls come even close. To start, what’s needed is a reliable cross-section of people who will actually vote. In 2008, only 62 percent of eligible citizens cast ballots. In 2012, even fewer did. Not surprisingly, some people who seldom or never vote will still claim they’ll be turning out. Testing them (“can you tell me where your polling place is?”) can be time-consuming and expensive. And there are those who don’t report their real choices. But much more vexing is finding people willing to cooperate. According to a recent Pew Research Center report, as recently as 1997 about 90 percent of a desired sample could be reached in person or at home by telephone, and 36 percent of them were amenable to an interview.

Today, with fewer people at home or picking up calls, and increasing refusals from those who do, the rates are down to 62 percent and 9 percent.3 So the polls must create a model of an electorate from the slender slice willing to give them time. Yet despite these hurdles, the Columbus Dispatch called Ohio’s result perfectly, using 1,501 respondents from the state’s 5,362,236 voters (the figures available on December 7).

Election polls are unique in at least two ways. First, they aim to tell us about a concrete act—a cast ballot—to be performed in an impending period of time. (Each year, more of us vote early.) It’s hard to think of other surveys that try to anticipate what a huge pool of adults will do. Second, how well a poll did becomes known once the votes are counted. So we find Nate Silver got it right and Rasmussen and Gallup didn’t. But a poll’s accuracy is only a historic curiosity after the returns are in. That is, it didn’t tell us anything lasting; just about a foray into forecasting during some months when a lot of us were wondering how events would turn out. Other polls tell us about something less fleeting: the opinions people hold on public issues and personal matters.

But with polls on opinions—military spending, say, or the provision of contraceptives—there’s seldom a subsequent vote that can validate findings. (To an extent, this is possible when there are statewide votes on issues like affirmative action and gay marriage.) A recourse is to compare a series of surveys that ask similar questions.

Yet as Table B shows, responses on abortion have been quite varied. What could be called the “pro-choice” side ranges across twenty-three percentage points. Certainly, how the question is phrased can skew the answers. CBS’s 42 percent agreed that abortion should be “generally available,” while Gallup’s 25 percent supported the view that abortion should be “always legal,” and The Washington Post’s 19 percent wanted abortion to be “legal in all cases.” The short answer is that apart from the staunchly anti-abortion side, polling can’t give us specific figures on where most adults line up on abortion. Or, for that matter, on any issue.

[Table B: poll responses on abortion, not reproduced here]

What goes on in the American mind remains a mystery that sampling is unlikely to unlock. In my estimate, the 65,075,450 people who chose Barack Obama and Joseph Biden over Mitt Romney and Paul Ryan were mainly expressing a moral mood, a feeling about the kind of country they want. I’d like to see Nate Silver using his statistical talents to explore such surmises.

We’ve been informed that 55 percent of women supported Obama, rising to 67 percent of those who are single, divorced, or widowed. Obama also secured 55 percent among holders of postgraduate degrees, and 69 percent of Jewish voters. But how can we know? Voting forms don’t ask for marital status or religion. The answer is that these and similar figures were extrapolated from a national sample of 26,563 voters, approached just after they cast their ballots or telephoned later in the day, by an organization called Edison Research.

The figures I’ve cited and others on the list look plausible to me. Still, there’s no way to check them; moreover, the Edison survey is the only post-election one that was done. So here’s a caveat: Jews are so small a fraction of the electorate that there were only 241 in the sample. Thus the abovementioned 69 percent comes with a seven-point margin of error either way, a caveat not noted in most media accounts.
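
The seven-point figure can be roughly reconstructed. For a simple random sample, the familiar formula gives about plus-or-minus 5.8 points for 241 respondents; because exit polls cluster interviews by precinct, analysts inflate that by a “design effect.” The sketch below, with an assumed design effect of 1.5, is a back-of-the-envelope check, not Edison Research’s published methodology.

    import math

    # 95 percent margin of error for a reported proportion p from n respondents.
    # design_effect > 1 accounts for the clustering used in exit polls; the
    # value 1.5 used below is an assumption, not Edison's figure.
    def margin_of_error(p, n, z=1.96, design_effect=1.0):
        return z * math.sqrt(design_effect * p * (1 - p) / n)

    p, n = 0.69, 241
    print(f"simple random sample: +/- {margin_of_error(p, n):.1%}")                      # about 5.8 points
    print(f"with design effect 1.5: +/- {margin_of_error(p, n, design_effect=1.5):.1%}") # roughly 7 points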

Nate Silver doesn’t conduct his own polls. Rather, he collects a host of state and national reports, and enters them in a database of his own devising. Combining samples from varied surveys gives him a much larger pool of respondents and the potential for a more reliable profile. Of course, Silver doesn’t simply crunch whatever comes in. He factors in past predictions and looks for slipshod work, as when the Florida Times-Union on election eve gave the state to Romney, based on 681 interviews. He pays special attention to demographic shifts, such as a surge in registrations with Hispanic names. His model also draws on the Cook Political Report, which actually meets informally with candidates to assess their electoral appeal. In September, Silver set the odds of Obama’s winning at 85 percent, enough to withstand a dismal performance in the first debate, which hadn’t yet occurred.
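
His actual model is considerably more elaborate, weighting pollsters by their track records and folding in demographic and economic adjustments, none of which is reproduced here. But the core move of pooling many surveys into one estimate can be sketched in a few lines; the polls and weighting choices below are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Poll:
        obama_share: float        # two-party share for Obama, between 0 and 1
        sample_size: int
        days_before_election: int

    # Weight each poll by its sample size, discounted for staleness, then take
    # the weighted average. A real aggregator would also weight by each
    # pollster's historical accuracy and house effects.
    def aggregate(polls, half_life_days=7.0):
        weighted_sum = total_weight = 0.0
        for poll in polls:
            recency = 0.5 ** (poll.days_before_election / half_life_days)
            weight = poll.sample_size * recency
            weighted_sum += weight * poll.obama_share
            total_weight += weight
        return weighted_sum / total_weight

    polls = [Poll(0.52, 1500, 2), Poll(0.49, 700, 5), Poll(0.51, 1100, 12)]
    print(f"aggregated Obama share: {aggregate(polls):.1%}")   # about 51 percent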

3.

Early in The Signal and the Noise, Silver alludes to Isaiah Berlin’s trope about hedgehogs and foxes. Hedgehogs know “one big thing,” while foxes know “many little things.” But there’s more. Hedgehogs display a disconcerting certainty that their one idea will put everything straight, whether on intellectual questions or in the working world. Moreover, Berlin warned, hedgehogs can cause a lot of damage when their nostrums are applied. Silver sees himself as a more modest fox, willing to draw on varied approaches to get his job done. So The Signal and the Noise doesn’t end with a crescendo, but actually stresses the quite limited ambit of our power to predict.

James Weatherall is an unabashed hedgehog, propounding a single idea with an uncommon confidence. After training in physics, philosophy, and mathematics, he now teaches logic and the philosophy of science at the University of California’s Irvine campus. With The Physics of Wall Street, he is taking his training even further: to finance in its preeminent location.

He is a man with a mission: to bring a heightened rationality to investment decisions. His book opens with an admiring visit to a hedge fund where a third of the employees have doctorates in physics, mathematics, statistics, even astronomy. Indeed, their rarefied insights are what’s wanted; “PhDs in finance need not apply.” In fact, there’s a niche Wall Street sector called “quant firms,” which reserve key positions for holders of advanced degrees.

Weatherall would have this perspective pervade the entire financial industry. He succinctly states his one big idea: “Insights that are commonplace in physics…are useful in studying virtually anything.” In one sense, Wall Street’s products have a physical character. Collateralized debt obligations, credit default swaps, and initial public offerings appear on paper or as electronic impulses. But Weatherall means more than this. Markets, he believes, are subject to physical laws. His star witness is Louis Bachelier, a French mathematician at the turn of the last century, who used Brownian motion to evaluate stock options. After that, we hear how ideas from such mathematicians as Jacob Bernoulli and Benoît Mandelbrot can be applied to mitigating risks and minimizing uncertainty. Not least of their influence has been to entrench mathematics in MBA programs, Wall Street’s principal recruiting pool.
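
Bachelier’s idea can be conveyed in a toy simulation: treat the price as a random walk and value an option as the average payoff over many imagined futures. The parameters below are invented, and modern pricing rests on geometric Brownian motion (the Black–Scholes tradition) rather than Bachelier’s arithmetic version, but the kernel of the approach is the same.

    import math
    import random

    # Monte Carlo value of a call option when daily price changes are
    # independent normal draws (arithmetic Brownian motion), interest ignored.
    def bachelier_call_price(spot, strike, daily_volatility, days, n_paths=100_000):
        total_payoff = 0.0
        for _ in range(n_paths):
            terminal = spot + random.gauss(0.0, daily_volatility * math.sqrt(days))
            total_payoff += max(terminal - strike, 0.0)
        return total_payoff / n_paths

    # A hypothetical stock at 100 with a 105-strike call expiring in 30 days.
    print(bachelier_call_price(spot=100.0, strike=105.0, daily_volatility=1.0, days=30))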

But the book is less a celebration of the past than a prospectus for the future. Weatherall anticipates “a breakthrough in our ability to identify the underlying chaotic patterns lurking in market data,” that is, to impose order on the “noise” that bedevils Silver. In a similar vein, he foresees strides “in predicting financial calamity using mathematical techniques,” perhaps like Russia’s default in 1998, which some Nobel economists didn’t see coming. He also hopes that “studies of psychology and human behavior” can be framed so that they are “symbiotic with mathematical approaches to economics.” Here he foresees doing better than Newton, who confessed, “I can calculate the movements of stars, but not the madness of man.” Almost everyone favors rigor and unlocking more mysteries. That’s why most of us support science. Can there be anything amiss in feeling optimistic about crossing new frontiers?

What isn’t explained is whether melding physics with finance will bring more benefits for everyone, or only give an advantage to those who use the techniques—like high-speed trading, if you own a mainframe computer. We can agree there’s a lot of irrationality—not to say exuberance—in the investment world. But it’s not clear if Weatherall is saying that decisions based on Bernoulli will allocate capital more efficiently, and hence serve the commonweal. When physics enters medicine, as with MRIs, we can have a reasonable hope that patients will benefit as much as their doctors. But when quants were riding high on Wall Street, they were hired only to give their own firms an edge over the competition.

Alluding to the collapse of Bear Stearns and Lehman Brothers, the housing bubble, and the October 2008 crash, Weatherall concedes that “the misuse of mathematical models played a role in this crisis.” Still, his implication here is that the models themselves didn’t contribute to the downfall; it was that they were somehow mishandled. The ultimate problem with hedgehogs is hubris. In this case, it’s the assumption that the quality of our thought can be enhanced by new methodologies. The word “sophistication” recurs on almost every page of The Physics of Wall Street, as if to affirm that higher powers are present.

True, the discovery of the calculus enabled us to build planes that travel faster than the speed of sound. But thus far I’ve found scant evidence that mathematics and physics have a capacity to give us a deeper understanding of human and social behavior. Of course, we should be open to new findings. Still, that differs from proclaiming that “what we do know for sure is that there will be a next major advance, and…we will understand markets more clearly than we do today.” Here he seems to be saying that the physical sciences can tell us how to avoid the crashes and crises we now periodically face. If that’s so, I wish Weatherall had listed some warnings of coming “calamities” based on his physical laws, such as the impending college loan bubble: When will it burst, and how widespread will the fallout be?

Nassim Nicholas Taleb in Antifragile calls such certainty “the error of naive rationalism.” And it’s a naiveté with consequences. There’s no doubt that the quants who bundled bad mortgages, adding algorithms to rate them AAA, helped to give us the current recession.4 But just as culpable were their nonmathematical superiors who allowed them such free rein. So there’s a broader issue. Financial firms want to be at the cutting edge, which now means having a bevy of Ph.D.s, just as at other times and places, enterprises might feel they should have an accredited gypsy with tarot cards. We are coming close to deifying anything smacking of STEM—science, technology, engineering, and mathematics—whether for staying ahead of China or cutting-edge careers for our young people. If firms need workers adept in algebra, then such instruction should be available. But to rely on physics and mathematics for deciphering human behavior, in markets or elsewhere, can only bring blind alleys. Less of the world than we might like is, as Taleb puts it, “academizable, rationalizable, formalizable, theoretizable.” When such rubrics crowd out more discursive thinking, we all lose.


1. See Michael Scherer, “Inside the Secret World of the Data Crunchers Who Helped Obama Win,” Time, November 7, 2012, and Nate Silver, “In Silicon Valley, Technology Talent Gap Threatens GOP Campaigns,” The New York Times, November 28, 2012.

2. See Sharon Bertsch McGrayne’s superb The Theory That Would Not Die (Yale University Press, 2011).

3. “Assessing the Representativeness of Public Opinion Surveys,” The Pew Research Center for the People and the Press, May 15, 2012.

4. Weatherall barely mentions Scott Patterson’s indispensable The Quants: How a New Breed of Math Whizzes Conquered Wall Street and Nearly Destroyed It (Crown Business, 2010). And some stories worth reading: Felix Salmon, “Recipe for Disaster: The Formula That Killed Wall Street,” Wired, March 2009; Dennis Overbye, “They Tried to Outsmart Wall Street,” The New York Times, March 9, 2009; Julie Creswell, “The Quants Are Reeling,” The New York Times, August 20, 2010; Pablo Triana, “The Flawed Maths of Financial Models,” Financial Times, November 29, 2010.

 
Brophy Saturday 26 January 2013 - 10:53 am | | Brophy Blog
