Reports recently surfaced of FBI personnel working with a prominent social media company to control the spread of “misinformation” and squelch “disinformation.” Catching and suppressing speech on social media deemed false or inaccurate is an unattainable goal, and one fraught with problems, limited resources among them. But the larger issue is that intelligence agencies can be as challenged by the task as the rest of us.
Famously, a group of senior national security and intelligence professionals labeled an incident that occurred shortly before the 2020 election a “Russian information operation.” By their own admission they relied on professional judgment, not evidence or analysis, yet they were confident enough in their conclusion to add their names to a published letter. Time and investigation proved their assessment incorrect.
Of course, one lesson here is not to rely on gut instinct, no matter how confident we are in our conclusions. But the episode also illustrates how hard it is to judge whether information is “good” or “bad,” “mis-” or “dis-,” even for intelligence professionals.
One roadblock is that teaching analysts how to assess and verify data is an underemphasized part of most intelligence studies programs. Perhaps program managers assume that an analyst, by definition, knows how to identify biased, inaccurate, or incomplete reporting. But assessing data quality should be incorporated into virtually every aspect of intelligence training, because it is a difficult undertaking and, at the same time, the core element of analysis.
Human thinking is too often influenced by emotion, history, mental inflexibility, overconfidence, pride, mirror-imaging, and ego, to name a few factors. Sometimes we’re lucky and effortlessly, by chance, land on the correct answer. But as has been stated frequently in Intelligence Shop posts, when we’re wrong, we risk our professional reputations and, in some cases, the reputations of our home agencies.
The most effective way to evaluate a data set, or any ambiguous set of circumstances, is to apply a formal thinking technique. This is especially true if the problem we’re examining elicits an emotional response. Emotions introduce bias because they interfere with our ability to reason, and so increase the probability of error. If we have strong feelings about a problem before we even begin our analysis, it’s a strong clue that we need an orderly way to examine it.
Here are a few formal thinking techniques to consider, with a short explanation of each: the Key Assumptions Check; Team A/Team B Analysis; Devil’s Advocacy; Deception Detection; and the Analysis of Competing Hypotheses. All of these help reduce the influence of opinion and assumption as we seek to separate fact from fiction.
An assessment takes longer to produce with formal methods than with gut instinct, but it is slow, controlled, and logical thinking that distinguishes the intelligence analyst.
Key Assumptions Check – Identify and challenge each assumption underlying the analysis. Restate or eliminate assumptions as appropriate. Group exercise; conduct at the beginning of a project.
Team A/Team B Analysis – Good for issues with strongly competing viewpoints. Team exercise: two teams argue before a “jury” over which side has the stronger case.
Devil’s Advocacy – Challenge a widely held viewpoint with an alternative analysis. Good for breaking through mindsets and groupthink. Conduct prior to publishing.
Deception Detection – Checklist of questions to ask and answer about your subject to identify possible deception. Can be done with a small group. Conduct prior to publishing.
Analysis of Competing Hypotheses – Eliminate hypotheses that carry too many inconsistencies with the evidence at hand. Single analyst or small team. Comprehensive; conduct throughout the analysis.
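Because the Analysis of Competing Hypotheses works from a simple matrix of evidence scored against each hypothesis, its mechanics are easy to sketch in a few lines of code. The following is a minimal illustration only; the hypotheses, evidence items, and consistency ratings are invented for the example, not drawn from any real case:

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) matrix.
# All hypotheses, evidence, and ratings below are invented examples.

# Each rating marks one piece of evidence against one hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
matrix = {
    "H1: deliberate information operation": ["C", "I", "I"],
    "H2: organic viral rumor":              ["C", "I", "N"],
    "H3: authentic leaked material":        ["C", "C", "C"],
}

# Labels for the three columns of the matrix above (context only).
evidence = [
    "E1: content spread rapidly on social media",
    "E2: metadata consistent with claimed origin",
    "E3: independent corroboration of key details",
]

def rank_hypotheses(matrix):
    """Rank hypotheses by inconsistency count, fewest first.

    ACH focuses on disproof: the surviving hypothesis is the one with
    the fewest inconsistencies, not the one with the most support.
    """
    counts = {h: ratings.count("I") for h, ratings in matrix.items()}
    return sorted(counts.items(), key=lambda kv: kv[1])

for hypothesis, inconsistencies in rank_hypotheses(matrix):
    print(f"{inconsistencies} inconsistencies: {hypothesis}")
```

The point of the exercise is the discipline it imposes, not the arithmetic: forcing each piece of evidence to be scored against every hypothesis keeps an analyst from counting only the evidence that supports a favored conclusion.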