Proving Analysts Wrong – Part I

Do you have trouble admitting when you’re wrong?

Most of us do – and one reason for this reluctance is that we are slow to acknowledge even the possibility that we have made a mistake. Let’s face it: the only thing worse than being wrong is taking too long to realize it! Mistakes are inevitable, but we have the tools to help you avoid looking like a fool who can’t admit a blunder. Over the next few months, the Analytic Insider will explore a small group of Structured Analytic Techniques that can spur an analyst to admit that his or her analysis is wrong.

Technique #1: Indicators

When an analyst generates a scenario or predicts that a certain event is likely to play out, a best practice is to develop a list of indicators – “things one would see” – that suggest the scenario is actually emerging. A good analyst will also develop a set of observables indicating that the scenario or event is not going to happen. If most of the indicators pointing to the scenario start to appear, then the analyst will be vindicated in his or her prediction and congratulated for outstanding insight. If, on the other hand, few or none of the indicators come into play and some of the negative indicators start to “light up,” then the analyst has little choice but to admit that the analysis was flawed or undermined by unanticipated intervening events.

If the analyst has not developed a set of indicators to provide an objective baseline for the analysis, he or she most likely will be inclined to stick to the analysis as long as possible, focusing on the data points that confirm the proffered view and ignoring the mounting pile of evidence that contradicts the prediction.

The existence of a pre-determined set of baseline indicators, however, quickly illuminates this mistake. If “we all agreed we would be wrong if these things happen” and they start to happen, only a fool would refuse to admit that the initial analysis is wrong.

In today’s uncertain and increasingly confusing world, analysts (as well as political pundits) should develop lists of indicators to accompany their projections and predictions. At the Dahrendorf Symposium in Berlin last June, we developed five sets of scenarios and associated indicators for how EU relations would evolve with the United States, China, Ukraine, Turkey, and the Middle East (link to the report in Recommended Resources above). The next step will be to track the indicators to see which events are actually occurring, indicating which scenarios are the most apt forecasts for the EU’s future.

Indicators are a pre-established list of observable events that analysts periodically review to track events, spot emerging trends, and warn of unanticipated change.

Indicators can be grouped into two categories:

  • Validating or backward-looking: Past or current actions or activities that would help confirm that a target’s activities or behavior are consistent with an established pattern or historical norm. Characteristics that would help an analyst determine whether something meets a set of criteria.
  • Anticipating or forward-looking: Future activities that one would expect to observe if a given hypothesis is correct or a predicted scenario is emerging. Developments that allow analysts to track how a situation is playing out over time.

The method for developing validating indicators is fairly straightforward. The first two steps usually are done working alone. The third and fourth steps can be an individual or a group process.

  • Precisely define the phenomenon being considered.
  • Locate an existing list of indicators that best describe this phenomenon (or develop your own based on historical research).
  • Assess how many of the historically derived indicators are present in the case.
  • Assess whether the correlation is strong enough (enough indicators are present) to justify applying that label to the current case.
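The four steps above amount to counting how many historically derived indicators appear in the current case and judging whether the overlap justifies the label. A minimal sketch of that logic follows; the indicator list, observations, and 60-percent threshold are hypothetical illustrations chosen for this example, not taken from the article or any official methodology.

```python
def indicator_correlation(observed, indicator_list):
    """Return the fraction of historically derived indicators present in the case."""
    present = [ind for ind in indicator_list if ind in observed]
    return len(present) / len(indicator_list)

# Hypothetical validating indicators for labeling a group an "organized movement".
indicators = [
    "central coordination body",
    "shared symbols and slogans",
    "dedicated funding network",
    "recurring scheduled events",
]

# What the analyst has actually observed in the current case.
observed = {
    "shared symbols and slogans",
    "dedicated funding network",
    "recurring scheduled events",
}

score = indicator_correlation(observed, indicators)  # 3 of 4 indicators present
label_applies = score >= 0.6  # analyst-chosen threshold for applying the label
```

The threshold makes the analyst's judgment explicit and repeatable: anyone reviewing the call can see which indicators were present and where the bar was set.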

The method for developing anticipating indicators of change is somewhat more involved. The process can be done working alone or in a group:

  • Identify a set of alternative scenarios or competing hypotheses.
  • Generate a list of activities, statements, or events that one would expect to observe if the scenario is beginning to emerge or the hypothesis is coming true.
  • Examine the list to ensure that all the indicators are:
    • Observable and collectible. Can it be observed and reported by a reliable source and reliably collected over time?
    • Valid. Is it clearly relevant to the end state the analyst is trying to predict or assess? Does it accurately measure the concept or phenomenon at issue?
    • Reliable. Is data collection consistent when comparable methods are used? Those observing and collecting data must observe the same things. Reliability requires precise definition of the indicators.
    • Stable. Is it useful and consistent over time to allow comparisons and track events?
    • Unique. Does it measure only one thing and, in combination with other indicators, point only to the phenomenon being studied?
  • Periodically check the indicators list to track which scenario appears to be emerging or which hypothesis appears to be most correct.
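The anticipating-indicators process above – alternative scenarios, an indicator list per scenario, periodic review of which indicators have “lit up” – can be sketched as a simple scoring loop. The scenario names and indicators below are invented placeholders for illustration only.

```python
# Hypothetical alternative scenarios, each with its expected observables.
scenarios = {
    "rapprochement": {
        "joint summit announced",
        "tariffs reduced",
        "visa rules eased",
    },
    "escalation": {
        "troops mobilized",
        "ambassador recalled",
        "sanctions expanded",
    },
}

def score_scenarios(observed, scenarios):
    """For each scenario, return the fraction of its indicators observed so far."""
    return {
        name: len(indicators & observed) / len(indicators)
        for name, indicators in scenarios.items()
    }

# Events recorded during a periodic review of the indicator lists.
observed = {"ambassador recalled", "sanctions expanded"}

scores = score_scenarios(observed, scenarios)
leading = max(scores, key=scores.get)  # the scenario with the most indicators lit up
```

Re-running the review as new events are recorded shows, over time, which scenario is emerging – and, just as important, when the indicators for the analyst's preferred scenario are conspicuously failing to appear.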