One year after the first stirrings of the Arab Spring, we are still only beginning to digest the implications of this momentous turn of events. Yet, as commentators debate the political, economic, and religious consequences of the uprisings in Tunisia, Egypt, Libya, Syria and elsewhere, few have discerned their impact in less conspicuous quarters. Far from the spotlight of media attention, the effects of the Arab Spring are also rippling through the murky world of intelligence gathering.
Like the financial crisis of 2007-8, the Arab Spring caught everyone by surprise, including those whose job it is to anticipate just such upheavals. Intelligence agencies in particular are supposed to spot potential flashpoints before they erupt. Little wonder, then, that by early February 2011 President Obama was criticising the CIA and other American spy agencies for failing to predict the spreading unrest in the Middle East.
This is nothing new - intelligence officials have long had to endure the wrath of American presidents, who often blame them for misjudging the events of the day. Nevertheless, Obama's comments got the spooks asking how they could help their analysts make better predictions.
Within weeks, researchers began recruiting volunteers for a multi-year, web-based study of people's ability to predict world events. Sponsored by the Intelligence Advanced Research Projects Activity (IARPA), the Forecasting World Events project aims to discover whether some kinds of personality are better than others at making accurate predictions. To that end, it is recruiting a diverse panel of participants to forecast events and trends in international relations, social and cultural change, business and economics, public health, and science and technology.
Previous research is not encouraging. A famous study by the American psychologist Philip Tetlock asked 284 people who made their living "commenting or offering advice on political and economic trends" to estimate the probability of future events, both in their areas of specialisation and in areas about which they claimed no expertise. Over the course of 20 years, Tetlock collected a total of 82,361 forecasts. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf? And so on.
Tetlock put most of the forecasting questions into a "three possible futures" form, in which three alternative outcomes were presented: the persistence of the status quo, more of something (political freedom, economic growth), or less of something (repression, recession).
The results were embarrassing. The experts performed worse than they would have if they had simply assigned an equal probability to all three outcomes. Dart-throwing monkeys would have done better.
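To see what "worse than chance" means here, look at the scoring. Tetlock graded forecasts with a quadratic penalty, the Brier score, under which lower is better and zero is perfect. The sketch below, in Python with invented numbers, shows why the monkeys' flat one-third-on-everything strategy is so hard to beat: a confident expert gains modestly when right but is punished severely when wrong.

```python
# A minimal sketch (numbers invented) of Brier scoring for a
# three-outcome forecast; lower scores are better, 0 is perfect.

def brier(forecast, outcome_index):
    """Sum of squared gaps between the stated probabilities and reality
    (1 for the outcome that happened, 0 for each outcome that didn't)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

# The dart-throwing baseline: one third on each of the three futures
# (status quo, more of something, less of something).
uniform = [1 / 3, 1 / 3, 1 / 3]

# An expert who puts 80 per cent on the status quo persisting.
expert = [0.8, 0.1, 0.1]

print(round(brier(uniform, 0), 3))  # 0.667, the same whatever happens
print(round(brier(expert, 0), 3))   # 0.06, rewarded when right
print(round(brier(expert, 2), 3))   # 1.46, punished heavily when wrong
```

Averaged over many questions, experts whose confident calls came true far less often than they claimed scored above the flat 0.667 of the baseline; that is the sense in which the monkeys won.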
Furthermore, the pundits were not significantly better at forecasting events in their area of expertise than at assessing the likelihood of events outside their field of study. Knowing a little helped a bit, but Tetlock found that knowing a lot can actually make a person less reliable.
"We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly," he observed. "In this age of academic hyperspecialisation, there is no reason for supposing that contributors to top journals - distinguished political scientists, area study specialists, economists, and so on - are any better than journalists or attentive readers of the New York Times in 'reading' emerging situations." And the more famous the forecaster, the lower their risk intelligence seemed to be; "Experts in demand," Tetlock noted, "were more overconfident than their colleagues who eked out existences far from the limelight."
Yet all is not lost. Not all experts are equally bad. Some, in fact, are surprisingly good, and their uncanny accuracy suggests that there may be a special kind of intelligence for thinking about risk and uncertainty which, given the right conditions, can be improved. For example, studies have shown US weather forecasters in particular to have high levels of risk intelligence.
Understanding why they are so good may offer clues as to how risk intelligence can be improved in others. Sarah Lichtenstein, a leading scholar in the field of judgment and decision making, speculates that several factors favour the weather forecasters. For example, they have been expressing their forecasts in terms of numerical probability estimates for many years; since 1965, US National Weather Service forecasters have been required to say not just whether or not it will rain the next day, but how likely they think this is, in actual percentage terms. They have got used to putting numbers on such things, and as a result they have got better at it.
What if the same thing were required of intelligence analysts? When forecasting world events and emerging security threats, analysts could be required to provide numerical probability estimates. Then, as the situation developed, the accuracy of those estimates could be quantified by means of calibration tests and the results fed back to the analysts.
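A calibration test of this kind is straightforward to automate. Here is a minimal sketch in Python, assuming the estimates are logged as whole-number percentages alongside what eventually happened; the track record shown is invented purely for illustration. Estimates are grouped into percentage bands, and each band's stated probability is compared with the observed frequency of the events assigned to it: a well-calibrated analyst's "90 per cent" calls should come true roughly nine times out of ten.

```python
from collections import defaultdict

def calibration_table(forecasts, bin_width=10):
    """forecasts: (stated_percent, event_occurred) pairs, e.g. (70, True).
    Groups the estimates into percentage bands and reports how often
    the events in each band actually occurred."""
    bins = defaultdict(list)
    for pct, occurred in forecasts:
        # Clamp 100% into the top band so every estimate has a home.
        band = min(pct // bin_width, 100 // bin_width - 1)
        bins[band].append(occurred)
    for band in sorted(bins):
        outcomes = bins[band]
        lo, hi = band * bin_width, (band + 1) * bin_width
        hit_rate = 100 * sum(outcomes) / len(outcomes)
        print(f"said {lo}-{hi}%: happened {hit_rate:.0f}% of the time "
              f"(n={len(outcomes)})")

# An invented track record: well calibrated at the bottom,
# overconfident at the top, the pattern Tetlock kept finding.
calibration_table([
    (90, True), (90, True), (90, False), (90, False),
    (60, True), (60, False), (60, False),
    (20, False), (20, False), (20, False), (20, True), (20, False),
])
```

The gap between each stated band and its observed hit rate is precisely the feedback that US weather forecasters have been receiving, in one form or another, since 1965.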
Even if this suggestion were taken up, it wouldn't stop politicians passing the buck and blaming spies for what are often political mistakes. Improving the risk intelligence of intelligence analysts may be a soluble problem. Ensuring that the politicians remain aware of the uncertainties may not.