Intelligence agencies are good at production, but how often do we go back to previously published work to see whether our products have stood the test of time? In my experience, self-examination comes only after publicly exposed intelligence failures. More often than not, once a department hits the dissemination button, the process ends.
But postmortem analysis offers widespread benefits, as researchers David R. Mandel and Alan Barnes point out in their study, “Accuracy of forecasts in strategic intelligence,” published in PNAS. Tracking the accuracy of forecasts bolsters the confidence of intelligence consumers, helps managers assess their unit’s performance, offers analysts an opportunity to refine their technique, and gives citizens faith in their IC.
Postmortem analysis should encompass two areas.
- The accuracy of current findings and judgments | If new information arises, do published findings stand? If not, where were the weaknesses? Did the writer make an inaccurate assumption? Did he or she draw a link without sufficient evidence, or rely on a faulty source?
- The accuracy of forecasts and projections | Was a forecast on the mark? Did it come to pass by the end of the projected timeframe? (A scoring sketch follows this list.)
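One way to make “on the mark” measurable is a proper scoring rule. The Brier score, the standard in Tetlock’s forecasting tournaments, is the mean squared difference between a stated probability and the outcome (1 if the event occurred, 0 if not); lower is better. Below is a minimal sketch in Python; the record format and the sample forecasts are illustrative, not drawn from any actual tracking system.

```python
# Minimal Brier-score sketch for reviewing resolved forecasts.
# The (probability, occurred) record format is illustrative,
# not an actual IC tracking schema.

def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (probability, occurred) pairs, where
    probability is in [0, 1] and occurred is True/False.
    Returns a value in [0, 1]; lower is better. Always
    forecasting a coin flip (0.5) scores 0.25.
    """
    return sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical resolved forecasts pulled for a periodic review.
resolved = [
    (0.80, True),   # "likely" call that came to pass
    (0.80, False),  # "likely" call that did not
    (0.30, False),  # "unlikely" call that did not occur
    (0.95, True),   # near-certain call that occurred
]

print(f"Brier score: {brier_score(resolved):.3f}")  # 0.193 here
```

Scored over time, a unit’s running Brier average gives reviewers a single trend line to watch, rather than a pile of individual hits and misses.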
One practice to consider is assigning a group to conduct periodic reviews of disseminated intelligence, whether quarterly, annually, or ad hoc based on forecast target dates. Reviewers would evaluate an assessment’s key judgments as well as its predictions. Was the writer too cautious, or too bold? Were words of estimative probability appropriate? (The calibration sketch below shows one way to check.) Did new investigative findings warrant a revision to the original piece? Is a forecast still on the horizon, and might that merit an update?
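One way to answer “too cautious, too bold?” is a calibration check: group resolved forecasts by the estimative phrase used, then compare each phrase’s nominal probability with the fraction of events that actually occurred. The sketch below assumes such a grouping; the phrase-to-probability mapping is illustrative (loosely inspired by Sherman Kent’s proposed scale), not an official standard.

```python
from collections import defaultdict

# Illustrative mapping of estimative phrases to nominal probabilities,
# loosely inspired by Sherman Kent's proposed scale; not an official standard.
NOMINAL = {"remote": 0.05, "unlikely": 0.30, "even chance": 0.50,
           "likely": 0.75, "almost certain": 0.95}

def calibration_report(resolved):
    """resolved: list of (phrase, occurred) pairs for forecasts whose
    outcomes are now known. Prints nominal vs. observed frequency."""
    buckets = defaultdict(list)
    for phrase, occurred in resolved:
        buckets[phrase].append(occurred)
    for phrase, outcomes in buckets.items():
        observed = sum(outcomes) / len(outcomes)
        gap = observed - NOMINAL[phrase]
        # Events happening more often than stated means the language
        # understated the odds (too cautious); less often, too bold.
        verdict = ("too cautious" if gap > 0.1
                   else "too bold" if gap < -0.1
                   else "well calibrated")
        print(f"{phrase:15s} nominal {NOMINAL[phrase]:.2f}  "
              f"observed {observed:.2f}  ({verdict}, n={len(outcomes)})")

# Hypothetical review data: phrases used in assessments and what happened.
calibration_report([
    ("likely", True), ("likely", True), ("likely", False),
    ("unlikely", False), ("unlikely", True),
    ("almost certain", True),
])
```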
What lessons came from successes and failures? What fed into accurate and inaccurate judgments? Are successes repeatable, and can they be taught?
The authors of a two-year, tournament-based study, “The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics,” identified dispositional, situational, and behavioral variables that set the best geopolitical forecasters apart. Characteristics such as intelligence and cognitive style predicted success, though these qualities are mostly inherent. The authors also found that a positive environment, regular and timely feedback, and plenty of chances to practice factored into successful predictions. These latter attributes are more readily available and more easily implemented, so they may offer intelligence shops the best opportunities for improvement.
For further discussion, see John Horgan’s Scientific American interview with Philip Tetlock, “Can We Improve Predictions? Q&A with Philip ‘Superforecasting’ Tetlock,” in which Tetlock discusses his book, Superforecasting: The Art and Science of Prediction.