When the financial giant American International Group began wading into a world of new and complex financial instruments, putting hundreds of billions of dollars on the line, the company’s top brass was unconcerned.
AIG had hired Gary B. Gorton—a financial economist at Yale—to use quantitative models to project the worst case scenario for the company’s balance sheet.
Using historical data, the models predicted a rosy future not unlike the recent, prosperous past, giving AIG’s leadership confidence in entering uncharted markets.
But by last September, one of AIG’s gambles had all but destroyed the institution—once the 18th largest public company in the world. To prevent an abrupt and potentially catastrophic collapse, AIG was forced to take a $182.5 billion lifeline from the U.S. government to cover losses that its forecasts indicated were never supposed to happen.
Quantitative models like Gorton’s—as likely to emerge from a dusty blackboard as from a frenzied trading floor—have come under fire over the past year, which saw a seismic reshaping of the global financial landscape.
Mere months later, academic economists are for the most part presenting a sanguine front to the world. Despite the unprecedented collapse of several Wall Street giants that relied on quantitative forecasting, they say that the fundamentals of quantitative techniques remain intact.
But at the same time, there is a new note of humility—an explicit recognition that the world is complex, formulas are imperfect, and humans are fallible.
THE GREAT UNKNOWN
While debates continue to rage over the length, severity, or causes of the financial crisis, economists have agreed on one of its effects: a renewed caution about the predictive powers of mathematics.
Harvard Business School professor Robert C. Merton says that finance, unlike other subfields of economics, had never claimed to forecast the exact movements of securities prices.
Merton was a co-recipient of the Nobel Prize in economics in 1997 for his work on the Black-Scholes model, which traders ubiquitously use to price options and determine expected volatility.
But he has also gained notoriety in financial circles for his membership on the board of directors of Long-Term Capital Management, whose collapse threatened to imperil the global economy in 1998, prompting a government-orchestrated intervention.
“If you look at the options model, it’s all about understanding the risk term,” he says. “Most of the outcomes aren’t forecastable, so uncertainty is the biggest permeator here.”
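For readers unfamiliar with the model, the textbook Black-Scholes price for a European call option fits in a few lines of code. The sketch below is a generic illustration under standard textbook assumptions, not Merton’s or any trading desk’s implementation; the volatility input, sigma, is the “risk term” he describes, and traders routinely run the formula in reverse to back out the volatility implied by a market price.

```python
# A minimal sketch of the textbook Black-Scholes call-option price.
# Illustrative only; not any firm's production pricing code.
from math import exp, log, sqrt

from scipy.stats import norm


def black_scholes_call(spot, strike, rate, sigma, maturity):
    """European call price; sigma is the volatility, the model's risk term."""
    d1 = (log(spot / strike) + (rate + 0.5 * sigma ** 2) * maturity) / (sigma * sqrt(maturity))
    d2 = d1 - sigma * sqrt(maturity)
    return spot * norm.cdf(d1) - strike * exp(-rate * maturity) * norm.cdf(d2)


# Example: a one-year at-the-money call on a $100 stock, 5% rate, 20% volatility.
print(round(black_scholes_call(100, 100, 0.05, 0.20, 1.0), 2))  # about 10.45
```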
And a “fundamental uncertainty principle,” according to economics professor Jeremy C. Stein, means that even the best model fed an infinite amount of data would still fail to predict securities prices perfectly. Relying heavily on that data and acting on the models’ outputs would itself shift economic parameters, so that the original assumptions no longer hold.
“Unlike in the physical sciences, where with enough data you can learn what the truth is, in markets the truth is moving around a little bit,” Stein says.
In a fundamentally uncertain world, placing too much faith in predictions, no matter how sophisticated, can be a mistake.
“[People] think of us as having much better forecasting skills than we really do have,” Merton says.
HUMAN ERROR
Quantitative techniques may have generated inaccurate predictions, but the final responsibility for the poor decision-making that ensued rests with the individuals involved and not the forecasting methods, many economists said.
“Models are nothing but an extension of human beings,” Merton says. “In the real world of finance, no one turns the computer on and walks out the door.”
But in the last few years, an influx of physicists, computer scientists, and mathematicians onto Wall Street has diluted the level of economic understanding behind the formulas and algorithms processed by their high-powered computers, says economics professor David I. Laibson ’88. In certain cases, some economists say, the newcomers’ lack of economic intuition may have led them to make incorrect assumptions that biased the models.
One of the quantitative modeling concepts that has come under heaviest fire is Value at Risk, which, unlike Black-Scholes, was born of the demands of the trading floor.
VaR estimates the maximum loss a company should expect to suffer on a given day under typical market conditions, defined as, say, 95 or 99 percent of trading days. Senior management can then control the risks assumed by the firm by directing traders to increase or decrease the VaR, depending on the company’s appetite for risk.
But VaR modeling says nothing about events that occur the other 1 percent of the time. In the New York Times best-seller “The Black Swan: The Impact of the Highly Improbable,” Nassim N. Taleb, who has held positions at Columbia, Wharton, and NYU, argues that these events—which he dubs “black swans”—are the most important determinants of the course of history, and that models failing to account for them are all but useless.
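To make the mechanics concrete, the sketch below computes a one-day historical VaR figure at 99 percent confidence. The return series and the hypothetical $1 million portfolio are invented for illustration and mirror no particular firm’s model; the point is that the resulting number is silent about how severe the remaining 1 percent of days can be.

```python
# A minimal sketch of one-day historical Value at Risk at 99% confidence.
# The return history is synthetic and the portfolio hypothetical.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0, scale=0.01, size=1000)  # stand-in for past daily returns


def historical_var(returns, confidence=0.99):
    """Loss threshold not exceeded on `confidence` of past days, as a positive number."""
    return -np.percentile(returns, 100 * (1 - confidence))


portfolio_value = 1_000_000  # hypothetical $1 million book
var_99 = historical_var(daily_returns) * portfolio_value
print(f"1-day 99% VaR: ${var_99:,.0f}")
# The figure says nothing about the worst 1 percent of days: the tail where
# Taleb's "black swans" live.
```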
Even so, Merton insists that the “degree of sophistication” and the “mindset” in risk management have greatly improved in recent decades, since VaR was developed and popularized. But as its use became widespread, its users became, on average, less knowledgeable about its application than the group of traders who created it in the late 1980s and early 1990s.
“Some people thought they could hire a physics guy and put him to work in a ratings agency with no economics experience,” says Laibson, adding that economists recognized the inherent dangers of these models long before the current crisis.
“The world has learned a lesson, but it hasn’t changed my world view,” he says.
Merton questions whether the systemic failure of global financial systems can be attributed entirely to over-reliance on quantitative models.
“Will there be cases where people over-relied on models? Absolutely,” he says. “But is that a general proposition? Not so clear.”
In some situations, senior management may have failed to understand the models or may have ignored their implications—such as when the models call for scaling back profitable positions to control risk, even while other companies are pressing forward.
While backward-looking VaR models relied entirely on data from past years, forward-looking VaR models were able to pick up on the increased volatility of the market before securities prices took a nosedive, says Aaron C. Brown ’78, a risk manager at the quant hedge fund AQR Capital Management. But when the forward-looking VaR suddenly rose—reflecting a dramatic increase in risk—warning signs went unheeded.
“People have a tendency to ignore that and to say, ‘Well, just clean up the numbers,’” Brown says. “If the model never surprises you, the market surprises you instead.”
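One way to build a more responsive, forward-looking estimate, used here purely as an illustration rather than as a description of AQR’s models, is to weight recent returns more heavily, for example with an exponentially weighted moving average of squared returns. The sketch below shows how such an estimate jumps when volatility spikes while an equal-weight historical window barely moves.

```python
# A sketch of why a forward-looking risk estimate flags trouble sooner than a
# backward-looking one. The exponentially weighted approach is one common
# technique, shown for illustration; it is not a description of AQR's models.
import numpy as np

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.01, 250)      # a calm year of daily returns
stressed = rng.normal(0.0, 0.04, 20)   # volatility suddenly quadruples
returns = np.concatenate([calm, stressed])


def equal_weight_vol(r, window=250):
    """Volatility estimate that weights every day in the window equally."""
    return float(np.std(r[-window:]))


def ewma_vol(r, lam=0.94):
    """Exponentially weighted volatility: recent days count far more."""
    var = r[0] ** 2
    for x in r[1:]:
        var = lam * var + (1 - lam) * x ** 2
    return float(np.sqrt(var))


print(f"equal-weight vol: {equal_weight_vol(returns):.2%}")  # still looks calm
print(f"exp-weighted vol: {ewma_vol(returns):.2%}")          # already elevated
```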
In light of these problems in the risk management process, Merton argues for more education in quantitative techniques, instead of shying away from them altogether.
“One of the things I think comes out of this is a greater need for modern financial training and knowledge,” he says. “We should be teaching more about modern finance and these tools, not less.”
HARD TO STUDY
While economists disagree over whether anyone could have predicted the snowball reactions of the financial markets, they acknowledge that such large and calamitous events are relatively unstudied.
Some say the scant attention paid to market meltdowns was a natural product of their infrequency, despite their outsized ramifications. The last systemic market failure—the events precipitating the Great Depression—has been studied as a historical anomaly and not as a recurring event.
But at the same time, economics as a field may be predisposed against forecasting the once-in-a-century catastrophes that result from market imperfections reinforcing one another.
“There has been a visceral reaction of many economists toward building market frictions or inefficiencies into their models—but that’s changing,” says outgoing economics chair James H. Stock, pointing to the significantly higher degree of difficulty involved in solving models that include the possibility of extremely unusual phenomena.
Other economists may not be asking the right questions, even before reaching the stage of setting up models.
The pressure on academic economists to publish prolifically, especially at the start of their careers, tends to encourage them to focus on problems that existing analytical tools can solve.
“The recipe for something to be a successful research project,” Stein says, “is that it has to be interesting to some people and has to offer an idea or tool that others can easily work with.”
In this sense, running more sophisticated regressions on a dataset may appeal more to young economists than searching for the most appropriate use of models or for the right balance of qualitative and quantitative insight. The sheer difficulty and uncertainty surrounding such topics can deter students, even if the questions may yield more interesting or informative answers.
“The ‘arty’ part of it—how can you mix data and judgment—that’s a hard topic,” Stein says.
COMMON SENSE
Academics are increasingly echoing the public call to embrace “common sense” as a reality check when models churn out projections of growth that cannot plausibly be sustained.
“I think we’ve learned a lesson about the limits of [quantitative modeling],” Stein says. “There’s a need to overlay it with softer, more qualitative judgment.”
This approach strikes few as particularly novel—some well-performing hedge funds, for instance, have always relied on a global view of economic trends rather than on heavily statistical data analysis. And “common sense” also applies to deciding which data points matter in any given scenario, making it useful even in a quantitative modeling context.
“Sophisticated analysts will combine many different tools—one part intuition, one part historical data, and one part mathematical modeling—to come up with good decisions,” Laibson says.
In practice, though, moving entirely to the “common sense” approach on the trading floor is far from a panacea for the chaos wreaked by unusual events.
Models may be imperfect, but so is human judgment. And firms trading thousands of different securities each day find it impossible to hire the manpower necessary to scrutinize each trade.
“The trouble with common sense,” Brown says, “is that it’s expensive.”
MOVING FORWARD
Despite the much-changed financial landscape, many academics said there was no need to overhaul economic thinking.
“I don’t think there’s any new particles being discovered,” Merton says, comparing the fundamentals of finance—market efficiency and risk-benefit analysis—to the building blocks of physical matter.
While several major financial institutions have disappeared, leaving experts to sift through the wreckage, many Harvard professors said that the backlash against quantitative models was largely unwarranted. Furthermore, they say quantitative models have become so prevalent in recent years—used everywhere from hedge funds to central banks—that drastically curtailing their use would cause more harm than good.
Laibson calls pundits’ suggestions that the failures of risk modeling demonstrate the failure of mathematical modeling at large an “extreme overreaction” to the events of recent months.
“I’m quite convinced if we threw out all mathematics in our efforts to model the financial world, we’d be shooting ourselves in the foot,” he says.
Economists acknowledged that the crisis has made it necessary to tweak models and redirect interest to questions that have previously received little attention. Economists should study how models can incorporate greater sensitivity to rare events, and how macroeconomic models can be better integrated with the outputs of behavioral finance models, says Stock.
But at the same time, academics may generally be more reluctant than finance professionals to reject theories that seem to falter in the real world.
“There’s a different mindset from the trading floor,” Brown says. “Traders have to cut losses at the first sign of trouble—academics want to dig deeper.”
—Staff writer Athena Y. Jiang can be reached at ajiang@fas.harvard.edu.