The Importance of Recognizing Biases in Protective Intelligence Analysis

We all have biases. We demonstrate this whenever we show an implicit, often unspoken tendency for or against someone or something. For better and for worse, biases are baked into our social, psychological, and cognitive evolution as human beings. Our pattern-seeking brains are hardwired to make predictions, and biases make life quicker and easier to predict.

Biases can be excellent things – for example, when we prefer foods that are good for us to foods that are bad for us, or when we teach our children not to get into cars with strangers. But biases can be nasty things, too – for example, when we succumb to preconceived notions about race, religion, or culture and then make assumptions and generalizations according to those prejudices.

Even protective intelligence analysts are prone to biases; it can’t be helped. But we can help ourselves – and those who consume the intelligence that we produce – by recognizing our biases and doing our best to mitigate them.

Four types of biases – and what they mean for intelligence analysts

Analysts working in protective intelligence must overcome at least four different types of biases. Let’s take a look at each of these.

1. Cultural biases:

Everyone is born into a culture, and we naturally judge the world – and people of other cultures – according to our own cultural standards. For example, it’s not that the Japanese tendency to put the group before the individual is right, or that the American propensity to put the spotlight on the individual is wrong – they’re just expressions of different cultures. Where things go wrong for intelligence analysts is when our culturally biased evaluations lead to intelligence failure.

For example, consider a junior analyst who was tasked with researching a foreign location with which he had no experience. Instead of becoming familiar with the country’s culture and customs, asking someone who was, or simply admitting his lack of experience with the foreign culture, the analyst wrote a report that leaned heavily on his American background.

While this should have been obvious to the reviewer of the report, it wasn’t. Since the analyst’s previous reports had always been accurate and effective, the reviewer indulged in another bias and simply checked the report for grammatical errors and basic information before forwarding it to the marketing team headed abroad. The report, culturally biased as it was, proved to be misleading and did the marketers more harm than good. It was a long time before the marketing team again trusted their intelligence colleagues.

2. Organizational biases:

Most of the work that intelligence analysts do occurs in large, complex organizations. Decision-making in large, complex organizations is based on authority and power as well as rational criteria. Organizational politics are as real as organizational charts. It should be no surprise that organizational biases can contribute to poor intelligence analysis.

Corporate and world histories are rife with intelligence failures due to organizational bias. Politics can enter into every stage of the intelligence cycle, from selectively framing what’s important in the planning phase, to meeting top-down expectations in the collection and analysis phases, and avoiding unpopular or “politically incorrect” conclusions in the analysis and dissemination phases. As much as we hate to admit it, this can happen in protective intelligence analysis, too.

When the highest-paid opinion in the room has spoken, it can be difficult for a lowly analyst to do anything but fall in line.

3. Self-interest biases:

We’re all motivated to “look out for number one” both as individuals and as groups. But when this motivation goes too far and we put our own interests above those of others or of the organization as a whole, things can quickly go south.

This is also true for intelligence analysis. It’s not just a case of analysts shaping intelligence for their own monetary or career gains, although this can happen, too. It also occurs because people interpret reality in their own way based on individual perspective, expertise, and personal hobby horses.

The self-interest bias can be mitigated by making decision-making criteria explicit and clear from the beginning and revisiting them frequently throughout the intelligence cycle. It also helps to recognize that different individuals and groups have different interests and that these must be faced openly even if this rocks the boat.

4. Cognitive biases:

If the three bias types mentioned above are hard to recognize and mitigate, cognitive biases rooted in the way our brains work can be even more difficult to combat. Building on the work of Amos Tversky and Daniel Kahneman, whose groundbreaking research into cognitive biases stretches across five decades, the CIA’s Richards J. Heuer, Jr. defines cognitive biases as “mental errors caused by our simplified information processing strategies.” In other words, these biases are baked into how we as humans think; they are not something we learn through experience or education.

Here, we turn our attention to how cognitive biases can influence protective intelligence analysis. In Psychology of Intelligence Analysis, Heuer writes that analysts must be aware of five cognitive biases in particular:

Vividness beats boring but true: Heuer describes it like this: “Information that is vivid, concrete, and personal has a greater impact on our thinking than pallid, abstract information that may actually have substantially greater value as evidence.” In other words, what stands out sticks – even if it’s not true.

Things we hear or experience ourselves rank higher than second-hand information. Stories, anecdotes, and cases influence us more than statistics. Again, it’s just how we’re wired. Analysts – and the consumers of our reports – can easily push aside piles of correct but abstract and boring data and instead give more importance to a memorable tale they heard first-hand.

No evidence? No problem:  Analysts are often tasked with coming up with reports on situations and issues for which significant information is lacking. If all the key evidence were there and easily accessible, then why would our stakeholders ask us to come up with it in the first place?

Unfortunately, you don’t know what you don’t know, and “out of sight, out of mind” can easily end up “out of report.” To mitigate this bias, analysts need to recognize what they don’t know, determine its relevance, and factor it into both their analyses and the level of confidence they assign to their findings.

Fooled by consistency: Consistency is a must for analysts, and inconsistencies in analysis and reporting are sure signs of trouble. But consistency is only great until it isn’t.

Information might be perceived as consistent – and valid – because it’s frequently correlated in the same way or repeated again and again. But apparent correlation plus repetition don’t add up to truth. Information might seem consistent only because it all comes from the same source or a limited group of sources. A small amount of consistent evidence, no matter how often it is repeated, should make analysts think hard about its validity and about their level of confidence in evaluations based on it.

Basing too much on best guesses: We base judgments on different bits of information, and it’s hard to know when information is accurate and just how accurate it is. We might have misunderstood something, we might not have seen some connections, our sources might want to mislead us… the list goes on. But life goes on, too. At some point, we have to decide on the validity of our information.

Heuer writes that because we have such a hard time dealing with complicated probabilistic relationships, we tend to default to a “best guess” strategy, short-circuit the difficulty, and simplify matters into a yes-no decision: either this bit of information is accurate, or it isn’t.

Instead of betting too much on best guesses, Heuer recommends breaking the issues down so separate bits of information can be assigned separate levels of probability. Then, the analyst needs to figure out how to integrate all of these different probability levels into an overall evaluation. Again, easier said than done – but much better to try than not. (A small numerical sketch of this idea follows the last of the five biases below.)

Sticking to bad evidence despite the facts: Finally, Heuer asks us to consider another cognitive bias that baffles psychologists: “Impressions tend to persist even after the evidence that created those impressions has been fully discredited.” Once we believe something, we tend to keep believing it even though the reasons for our beliefs are proven wrong.

This is not just stubbornness. In numerous psychological experiments, subjects continued to hold false beliefs they had been inculcated with as part of the test, even after the researchers went out of their way to show that those beliefs were based on false evidence.

Analysts can run into trouble with this bias, too. If they have made judgments based on sources they deemed accurate, and the reliability of those sources later comes into question, it can be difficult to rethink the entire judgment – even if this might be called for.
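To make Heuer’s advice about best guesses concrete, here is a minimal sketch in Python. The evidence items, the probabilities, and the simplifying assumption that the items are independent are all invented for illustration; Heuer prescribes no formula, only the discipline of keeping separate probabilities separate.

```python
# A minimal sketch of the alternative to "best guess" thinking: give each piece
# of evidence its own probability of being accurate instead of collapsing it
# into a yes/no call, then combine those probabilities. The evidence items,
# the numbers, and the independence assumption are all invented for illustration.

evidence = {
    "source A's report":        0.80,  # analyst's estimate that the item is accurate
    "open-source news article": 0.60,
    "second-hand anecdote":     0.40,
}

# If a judgment depends on ALL of these items being accurate, and we treat them
# (simplistically) as independent, the chance that everything holds is the product.
p_all_accurate = 1.0
for p in evidence.values():
    p_all_accurate *= p

print(f"P(every item is accurate)  = {p_all_accurate:.2f}")      # 0.19
print(f"P(at least one item wrong) = {1 - p_all_accurate:.2f}")  # 0.81
```

The arithmetic makes one point: three items that each look “probably accurate” can still leave the overall judgment far less certain than a chain of yes-no best guesses would suggest.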

Know your biases – and use structure and tools to mitigate them in protective intelligence

Interview questions about a candidate’s awareness of their own biases are a useful tool for gauging analyst maturity and identifying opportunities for mentoring. Most junior analysts are either unaware of their own biases or think they have none; they concentrate on the process, the research, and the finished product. If you follow up with, “How would you start to learn what your biases are?”, and the candidate sticks to the notion that they have no biases, then you might want to ask even more questions.

We all have biases, so the only way to mitigate them is through structure and tools that force us to address them in the products we provide.

Systematic approaches to intelligence analysis, like the Structured Analytic Techniques (SATs) often used at the CIA, exist for exactly this purpose. These techniques walk analysts through structured steps to ensure that they rely on accurate information, challenge their initial hypotheses, and review all the possibilities. SATs have been criticized as somewhat outdated and as not doing enough to counter the impact of biases, but they are a good starting point for young analysts – and for the rest of us.
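As a concrete example, probably the best-known SAT is Heuer’s own Analysis of Competing Hypotheses (ACH), in which the analyst scores every piece of evidence against every hypothesis and favors the hypothesis with the least disconfirming evidence rather than the most confirming. The sketch below is a toy illustration in Python; the hypotheses, evidence items, and scores are invented and grossly simplified.

```python
# A toy sketch of one Structured Analytic Technique: Heuer's Analysis of
# Competing Hypotheses (ACH). The hypotheses, evidence items, and scores below
# are invented; real ACH is a structured judgment exercise, not a formula.

# Score each piece of evidence against each hypothesis:
#   -1 = inconsistent with the hypothesis, 0 = neutral, +1 = consistent
hypotheses = ["H1: hostile surveillance", "H2: coincidental presence"]
evidence = {
    "same vehicle seen at two protectee locations": [+1, -1],
    "vehicle registered to a local delivery firm":  [-1, +1],
    "driver photographed the residence gate":       [+1, -1],
}

# ACH weighs disconfirming evidence most heavily: the hypothesis with the LEAST
# inconsistent evidence survives, not the one with the most support.
inconsistency_counts = [
    sum(1 for scores in evidence.values() if scores[i] < 0)
    for i in range(len(hypotheses))
]

for hypothesis, count in zip(hypotheses, inconsistency_counts):
    print(f"{hypothesis}: {count} piece(s) of inconsistent evidence")
```

The value of the exercise is not the counting but the structure: it forces the analyst to look for evidence that would disprove the preferred hypothesis, which is exactly what the biases above make us reluctant to do.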