The Analytical Statement: “What/So What?”

Intelligence analysis is the interpretation of facts. Analysts examine a scenario or data set, put the facts into context, add perspective, and explain to a decision maker why it all matters. The written format of an analytical statement is sometimes described as the “what/so what?” The “what” is the fact; the “so what” is its relevance. In an intelligence assessment, the thesis, which is the overarching finding of the work, and the key judgments that support it are all grounded in analysis and thus follow the what/so what format.

Here are some examples of analytically formatted thesis statements and key judgments from products published by the IC:

1. “Afghanistan’s progress since the end of Taliban rule toward meeting broadly accepted international standards for the conditions of women has been uneven, reflecting cultural norms and conflict.” (source: Afghanistan: Women’s Economic, Political, and Social Status Driven by Cultural Norms, published in the National Intelligence Council Sense of the Community memorandum, 2 April 2021)

What/so what: The “what,” or the factual part of the statement, is that Afghanistan’s progress toward meeting accepted international standards for the conditions of Afghan women since the end of Taliban rule has been uneven. Why? The analysis, or the “so what,” found that cultural norms and conflict explained the uneven progress.

2. “We assess with high confidence that Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election, the consistent goals of which were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.” (source: Assessing Russian Activities and Intentions in Recent US Elections, published 6 January 2017)

What/so what: The “what” is that Russian President Putin ordered an influence campaign in 2016 aimed at the US presidential election. Why? The analysis, or the “so what,” found Putin’s goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency.

3. “UAP sightings also tended to cluster around U.S. training and testing grounds, but we assess that this may result from a collection bias as a result of focused attention, greater numbers of latest-generation sensors operating in those areas, unit expectations, and guidance to report anomalies.” (source: Preliminary Assessment: Unidentified Aerial Phenomena, published 25 June 2021)

What/so what: The “what” is that UAP (unidentified aerial phenomena) sightings tended to cluster around US training and testing grounds. Why? The analysis, or the “so what,” found the likely reason was that these facilities had the equipment to detect such phenomena as well as personnel who were trained to identify and report them.


The above examples are presented solely because they reflect the what/so what format. This is not an endorsement of the underlying analysis or the accuracy of the conclusions.





A Strong Title Delivers Your Bottom Line

The title of an intelligence product is a shortened version of the thesis; it delivers the product’s bottom line. A title should contain as much of the who, what, where, when, why, and how of the thesis statement as possible while still being concise. Titles are written in the form of an incomplete sentence. They use active voice. And they are generally written in the past tense because the analysis has been completed and/or the events the analysis describes have already occurred. If you have a well-constructed title, your audience will grasp the central concept of your product without reading further.

Here are the titles of three publicly released intelligence assessments from the IC, along with their thesis statements, comments, and suggested title revisions.

  1. “Assessing Russian Activities and Intentions in Recent US Elections,” published 6 January 2017

Thesis statement: “Russian efforts to influence the 2016 US presidential election represent the most recent expression of Moscow’s longstanding desire to undermine the US-led liberal democratic order, but these activities demonstrated a significant escalation in directness, level of activity, and scope of effort compared to previous operations.”

“We assess Russian President Vladimir Putin ordered an influence campaign in 2016 aimed at the US presidential election. Russia’s goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency. We further assess Putin and the Russian Government developed a clear preference for President-elect Trump. We have high confidence in these judgments.”

Comment: The title does not convey the assessment’s bottom line. It more closely describes the work of the analysts in preparing the assessment.

Proposed revision: “Significant Escalation in Influence Activities As Russia Sought to Sway 2016 US Presidential Election”


2. “Assessing the Saudi Government’s Role in the Killing of Jamal Khashoggi,” published 2 February 2021

Thesis statement: “We assess that Saudi Arabia’s Crown Prince Muhammad bin Salman approved an operation in Istanbul, Turkey(,) to capture or kill Saudi journalist Jamal Khashoggi.”

Comment: As with the previous example, the title better describes the work of the analysts rather than presenting the analytical core of the assessment.

Proposed revision: “Saudi Crown Prince Approved 2018 Operation in Istanbul, Turkey, Targeting Jamal Khashoggi”


3. “(U) Domestic Violent Extremism Poses Heightened Threat in 2021,” published 1 March 2021

Thesis statement: “The IC assesses that domestic violent extremists (DVEs) who are motivated by a range of ideologies and galvanized by recent political and societal events in the United States pose an elevated threat to the Homeland in 2021.”

Comment: This title is moving in the right direction. It has a who, what, where, and when, but it excludes the analysis. Why do analysts anticipate a heightened threat of domestic violent extremist activity in 2021?

Proposed revision: “Domestic Violent Extremists, Galvanized by Recent Sociopolitical Events, Pose Elevated Threat to Homeland in 2021”

(Note: The title is written in the future tense because the author is making a prediction, which is not recommended.)


Finally, writing titles in intelligence products is different from journalism in that intelligence products tell the whole story up front, rather than enticing readers with partial details or provocative* headlines.

*This appears to be a contemporary practice in journalism. It may be influenced by the increasing amount of online journalism that is more dependent on site visits for revenue.


Analysis, Not Opinions

An intelligence analyst may be a subject matter “expert.” This expertise could be derived from years of service, formal education, or intimate knowledge of a topic, such as growing up in the country of his or her assigned portfolio. Still, when it comes to answering an intelligence question, the role of the analyst is to provide that answer through tradecraft, not opinion, however considered that opinion may be.

Intelligence analysis is a specific discipline that takes decades to master. Analysts are trained to answer questions through formal techniques whose purpose is to eliminate reliance on opinion, along with the negative qualities that accompany it, such as mirror imaging, mindsets, and bias. Analysts take themselves out of the equation rather than put themselves into the middle of it.

An analyst may be asked for an opinion within a given area of expertise. The person making the inquiry, or even the analyst himself or herself, may feel an answer should be provided forthwith. But a spontaneous response will likely be based on opinion. It may come from an educated place, but it is still a mental shortcut.

Analytical tradecraft takes a more rigorous approach: the development of an intelligence question; research; the exploration of alternative hypotheses; and the invalidation of those hypotheses with too many inconsistencies to be probable. The purpose is to reach an objective, defensible, and retraceable conclusion.

Here is a personal example that shows the pitfalls of opinion and the benefits of formal methods. In 2007, a security camera in a local metro station caught a subject spilling a substance, later identified as mercury, onto a subway platform. Some members of law enforcement who viewed the tape called the incident a test of security in anticipation of an act of terrorism. Others said it was a harmless accident. For context, there had been multiple high-profile attacks against transportation systems in Madrid (2004), London (2005), and Mumbai (2006) that collectively killed close to 1,000 people and injured several thousand more.

On the surface, the incident was indeed suspicious. The potential target was a prominent US transportation system; there had been several attacks against transportation systems around the world in the three years leading up to the incident; and the subject’s behavior, as seen on CCTV, was atypical. But this initial assessment was influenced by recency bias and mirror imaging.

Upon the request of local authorities, a formal analysis was launched. Initial brainstorming found at least a dozen scenarios that could equally account for the facts of the case. The analysis also uncovered multiple less obvious but critical circumstances on the tape that had been overlooked. Ultimately, the Analysis of Competing Hypotheses eliminated all but one hypothesis, accidental release, because the evidence did not support the others. Subsequent investigation, including identifying and interviewing the subject, confirmed it was a harmless act.
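To make the elimination step of the Analysis of Competing Hypotheses concrete, here is a minimal sketch. The hypotheses, evidence ratings, and tolerance threshold are invented for illustration, loosely mirroring the mercury case; a real ACH matrix would rate each hypothesis against each piece of evidence individually.

```python
# Toy sketch of the ACH elimination step. Hypotheses and
# consistency ratings are hypothetical, not from the actual case.
#
# For each piece of evidence, each hypothesis is rated:
#   "C" = consistent with the evidence, "I" = inconsistent
matrix = {
    "Security probe":     ["I", "C", "I", "I"],
    "Vandalism":          ["I", "I", "C", "I"],
    "Accidental release": ["C", "C", "C", "C"],
}

def surviving_hypotheses(matrix, max_inconsistencies=0):
    """Keep only hypotheses whose inconsistency count is within tolerance.

    ACH works by eliminating hypotheses the evidence contradicts,
    rather than 'proving' a favorite; the survivors are the most
    defensible explanations.
    """
    return [
        h for h, ratings in matrix.items()
        if ratings.count("I") <= max_inconsistencies
    ]

print(surviving_hypotheses(matrix))  # -> ['Accidental release']
```

The key design point is that the analyst argues from disconfirmation: a hypothesis survives only because the evidence failed to eliminate it, which keeps the conclusion retraceable.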

Adhering to analytical tradecraft, declining any incentive to reach a preferred conclusion, and similarly, not yielding to pressure to satisfy a particular audience, are key to building long-term credibility as an analyst.

The Fallibility Of Data

The phrase “science and data” is widely used to imply indisputable truth. Science has the weight of repeatable experiments with consistent results behind it, but data are simply individual facts of limited value unless they can be extrapolated to a population and are correctly interpreted and contextualized. If the data are wrong, or partially or completely missing, it is easy to reach a misleading or false conclusion.

Data that are accessible, complete, and accurate are the starting point of a defensible analysis. But getting hold of data that meet these criteria is harder than it appears. As an example, using US attorneys’ office figures seems like a good way to examine domestic terrorism trends. But a study of Transactional Records Access Clearinghouse (TRAC) records conducted by Syracuse University found:

“U.S. Attorneys’ offices vary greatly in their numbers of domestic terrorism prosecutions. The largest during 2020, a total of 78 prosecutions, were brought in Oregon federal courts…

“At the other extreme, many U.S. Attorneys’ offices across the country brought no domestic terrorism suits, or just a single suit in all of FY 2020. This includes the U.S. Attorney in the Western District of Washington (Seattle) who was recorded as bringing only a single domestic terrorism suit, although protests there[,] similar to those in nearby Portland, Oregon, had figured prominently in the news.”

This raises a key point: Which data are “correct”? You’ll reach completely different conclusions about the severity of the domestic terrorism problem if you use the Oregon data set than if you use the Western District of Washington data set. The difference in prosecutions illustrates the error-prone nature of data that rely on human judgment and decision making, as well as the influence of politics.

Here are a few tips to help strengthen your data collection.

  1. Be aware of outliers. In the example of the US attorneys’ offices in Oregon and the Western District of Washington, did one track with national trends while the other did not? You may want to incorporate the former into a broader data set. At the same time, don’t dismiss outliers; they can be interesting in themselves. They may broaden the perspective of your findings or launch new inquiries.
  2. Most non-profits and NGOs focus on specific causes. Some engage in data collection. Keep in mind that if they do collect data, they likely use it to demonstrate the gravity of their cause. Use it judiciously, and be sure it fits your data collection methodology.
  3. Unsubstantiated tips to law enforcement agencies can be interesting and may be of collateral value. However, your data set will be stronger if you focus on cases that have gone through the rigor of the investigative process.
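Tip 1 can be made mechanical. As a sketch (the per-district prosecution counts below are invented, echoing the Oregon example), a simple interquartile-range check flags jurisdictions whose figures diverge sharply from the rest before you fold them into a broader data set:

```python
# Hypothetical per-district domestic terrorism prosecution counts.
counts = {
    "District A": 2,
    "District B": 1,
    "District C": 0,
    "District D": 3,
    "District E": 78,  # extreme value, like the Oregon figure
}

def iqr_outliers(data, k=1.5):
    """Flag values more than k * IQR outside the middle 50% of the data.

    Uses a crude index-based quartile approximation, which is adequate
    for a quick screening pass on a small data set.
    """
    values = sorted(data.values())
    n = len(values)
    q1 = values[n // 4]
    q3 = values[(3 * n) // 4]
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [name for name, v in data.items() if v < low or v > high]

print(iqr_outliers(counts))  # -> ['District E']
```

A flagged district isn’t discarded; it is set aside for separate examination, consistent with the advice above not to dismiss outliers.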

Compiling A Defensible Data Set

Compiling a data set to support an intelligence project seems like a straightforward process: define attributes, select entities, and add them to a database for later sorting and analysis. In reality, the process can be challenging to get right.

One of the first obstacles is finding data. There are three choices: assemble your own data set, work from an established database, or combine the two. Unfortunately, the IC does not maintain a searchable, comprehensive, and current database of all reported crimes. Some private organizations maintain subject-specific databases, although they may cover limited date ranges. Here are some examples:

  1. The Global Terrorism Database is an excellent resource for incidents of domestic and international terrorism, but its data stop at 31 December 2019.
  2. Mother Jones has a good, accessible database of mass shootings in the United States from 1982 to 2021.
  3. The Washington Post offers a database of police-involved shootings. Its data range from 2015 to the present.
  4. The Transactional Records Access Clearinghouse maintained by Syracuse University has quite a bit of good quantitative data on federal law enforcement and immigration matters.

If you choose to compile your own data set, you first need to choose parameters. Let’s say you want to address an issue related to domestic violent extremism in your area of responsibility (AOR). What do you include in your data set? Only federally-charged crimes? Only state-charged crimes? Subjects whose actions caused harm to persons (killed, maimed, is there a threshold for the number of victims)? Property damage (dollar amount)? Disrupted plots (no harm to persons, nor damage to property)? A subject or subjects who may have initially been charged with a domestic violent extremist-related charge, but who pled down to a lesser crime? Financial crimes perpetrated in furtherance of domestic violent extremist activity? Stings? Threats?

You’ll also likely have to create categories, or use existing ones, to sort your data in order to discuss and describe the results of your analysis. The FBI sorts domestic violent extremists into these categories: racially or ethnically motivated; anti-government or anti-authority; animal rights/environmental; abortion-related; and all other domestic terrorism threats. You can use these classifications or define your own. If a subject fits into two categories, you’ll probably have to choose one to maintain the integrity of the numbers, and you should be prepared to defend your reasoning.
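The parameter and category decisions discussed above can be captured in code so that they are applied consistently across every entry. The attribute names, inclusion rule, and cutoff year below are illustrative assumptions, with categories loosely following the FBI classifications mentioned:

```python
from dataclasses import dataclass

# Illustrative category labels, loosely based on the FBI's DVE
# classifications. Each incident gets exactly one primary category
# so that the counts stay internally consistent.
CATEGORIES = {
    "racially_ethnically_motivated",
    "anti_government_anti_authority",
    "animal_rights_environmental",
    "abortion_related",
    "other",
}

@dataclass
class Incident:
    subject: str
    year: int
    primary_category: str      # exactly one, even if two could apply
    federally_charged: bool
    persons_harmed: int
    property_damage_usd: float

    def __post_init__(self):
        # Reject entries that don't fit the defined categories,
        # enforcing the "define your attributes up front" rule.
        if self.primary_category not in CATEGORIES:
            raise ValueError(f"Unknown category: {self.primary_category}")

def include(incident, min_year=2016):
    """Illustrative inclusion rule: federal charges within the time frame."""
    return incident.federally_charged and incident.year >= min_year

case = Incident("Subject 1", 2020, "anti_government_anti_authority",
                federally_charged=True, persons_harmed=0,
                property_damage_usd=25000.0)
print(include(case))  # -> True
```

Encoding the rules this way means every future addition to the data set is screened by the same criteria, which is what makes the resulting numbers defensible.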

There is also the decision about geographic range. You may start with data from your AOR, but you can add greater qualitative and quantitative context if you compare your results to a broader data set.

And there is the time frame to consider. The more data and the longer the time span, the better position you are in to see trends. You may have noted an uptick in the past year, but if you widen the time span, is it still significant?
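To make the time-span check concrete (the yearly counts here are invented), a quick comparison of the latest year against the historical baseline shows whether an apparent uptick survives a wider window:

```python
# Hypothetical yearly incident counts in an AOR.
counts_by_year = {2016: 4, 2017: 5, 2018: 3, 2019: 4, 2020: 5, 2021: 8}

def uptick_vs_history(counts, latest_year):
    """Compare the latest year's count to the mean of all prior years."""
    prior = [v for y, v in counts.items() if y < latest_year]
    baseline = sum(prior) / len(prior)
    return counts[latest_year], baseline

latest, baseline = uptick_vs_history(counts_by_year, 2021)
print(latest, baseline)  # -> 8 4.2
```

Here the 2021 figure is well above the multi-year baseline, so the uptick still looks meaningful; had the earlier years also fluctuated between 4 and 8, the same one-year jump would be unremarkable.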

If you choose to use an existing database, review the organizer’s methodology for gathering data and the parameters for entry to be sure you agree with the method. If you add to it, use the same methodology. If you compile your own unique data set, define your attributes up front and stick with them. A strong data set is consistent and comprehensive.
