Black Swans in Risk: Myth, Reality and Bad Metaphors
The term “Black Swan event” has been part of the risk management lexicon since Nassim Taleb coined it in his 2007 book The Black Swan: The Impact of the Highly Improbable. Taleb uses the black swan metaphor to describe extreme outlier events that come as a surprise to the observer and that, in hindsight, the observer rationalizes as something they should have predicted.
The metaphor draws on the old European assumption that all swans were white, a belief that held until black swans were discovered in Australia in 1697.
Russell Thomas recently spoke at SIRACon 2018 on this very subject in his presentation, “Think You Know Black Swans — Think Again.” In the talk, and in an associated blog post, Thomas deconstructs the metaphor and Taleb’s argument and examines the use and misuse of the term in modern risk management. One of the most illuminating observations in Thomas’ work is that the term “Black Swan” is used in two conflicting ways: to dismiss probabilistic reasoning altogether, and to label certain events within risk management that seem to need extra explanation. In other words, Taleb’s definition of a Black Swan is a condemnation of probabilistic reasoning, i.e., forecasting future events with some degree of certainty. The more pervasive definition describes certain types of events within risk management, such as loss events commonly found in risk registers and heat maps in boardrooms across the globe. If that seems contradictory and confusing, it is.
From a practitioner’s point of view, it’s worth examining why the term Black Swan is used so often in risk management. It’s not because we’re trying to engage in a philosophical discussion about the unpredictability of tail risks; rather, risk managers feel the need to separately call out extreme impact events, regardless of probability, because they pose an existential threat to a firm. With this goal in mind, risk managers can focus on a) understanding why the term is so pervasive, and b) finding a way to communicate the same intent without logical fallacies.
Black Swan Definition and Misuse
The most common definition of a Black Swan is an event in which the probability of occurrence is low but the impact is high. Contemporary examples include a 1,000-year flood or 9/11. In these and similar events, the impact is so extreme that risk managers have felt the need to classify them separately: call them out with an asterisk (*) to tell decision makers not to be lulled into a false sense of security just because the annualized risk is low. This is where the office-talk use of “Black Swan” was born. It is an attempt to assign a special classification to these types of tail risks.
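To see why a low annualized figure can lull decision makers, consider a hypothetical 1-in-1,000-year flood. The numbers below are purely illustrative, but they show how a small annual probability understates the chance of the event occurring at some point during the life of the firm.

```python
# Hypothetical "1-in-1,000-year" flood: annual probability of roughly 0.1%.
annual_probability = 1 / 1_000

# Chance of at least one occurrence over a 30-year planning horizon.
horizon_years = 30
chance_within_horizon = 1 - (1 - annual_probability) ** horizon_years

print(f"Annual probability:      {annual_probability:.2%}")      # 0.10%
print(f"Chance within 30 years:  {chance_within_horizon:.2%}")   # ~2.96%
```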
This isn’t an entirely accurate portrayal of Black Swan events, however, according to both Taleb and Thomas.
According to Taleb, a Black Swan event has these three attributes:
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
After examining the complex scenarios in which these conditions hold, it’s clear that the concept Taleb is describing goes well beyond anything that would be found in a risk register and is, in fact, a critique of modern risk management techniques. It is self-contradictory to include the term in a risk program, or even to use it to describe risks: a risk that can be named and cataloged is, by Taleb’s definition, not a Black Swan.
Despite these points, the term has entered the everyday lexicon and, along with Kleenex and Cyber, it’s here to stay. It’s become a generally accepted word to describe low probability, high impact events. Is there something better?
Factor Analysis of Information Risk (FAIR), the risk analysis model developed by Jack Jones, doesn’t engage with Black Swan events on a philosophical level, but it does give risk managers a few extra tools for describing the circumstances around low probability, high impact events. These are called risk conditions.
“Risk Conditions”: The FAIR Way to Treat a Black Swan
Risk is what matters. When scenarios are presented to management, it adds little to the story that one risk has a higher or lower probability than another, or that one impact is greater than another. FAIR provides a taxonomy to assess, analyze and report risk based on a number of factors (e.g. threat capability, control strength, loss event frequency). Most risk managers have only minutes with senior executives, so they avoid an in-depth discussion of individual factors and focus instead on risk. Why, then, do some risk managers single out Black Swan events?
Risk managers use the term because they need to communicate something extra: a way to draw attention to those few extreme tail risks that could outright end a company. There may be something that can be done to reduce the impact (e.g. diversification of company resources in preparation for an earthquake), or perhaps nothing can be done (e.g. market or economic conditions that cause a company or sector to fail). Either way, risk managers would be remiss not to point these risks out.
Risk conditions go beyond simply calling out low probability, high impact events. They specifically address low probability, high impact events that have weak mitigating controls or none at all. Categorizing them this way makes sense when communicating risk: extreme tail risks with no mitigating controls can get lost in annualized risk aggregation.
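A small, hypothetical risk register illustrates the problem. The scenarios and figures below are invented for illustration; the point is only that a simple expected annual loss calculation can make an existential event look comparable to, or smaller than, routine operational losses.

```python
# Hypothetical register entries: (scenario, loss events per year, loss per event)
register = [
    ("Cloud outage",        0.5,     1_000_000),    # survivable, happens regularly
    ("Phishing fraud",      1.5,        80_000),
    ("Regional earthquake", 1 / 500, 200_000_000),  # existential, no mitigating controls
]

for scenario, frequency, magnitude in register:
    annualized = frequency * magnitude  # simple expected annual loss
    print(f"{scenario:20} annualized loss exposure ~ ${annualized:,.0f}")

# The earthquake annualizes to $400,000, below the routine cloud outage,
# even though a single occurrence would end the company.
```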
FAIR describes two risk conditions: unstable risk and fragile risk.
An unstable risk condition describes a situation in which the probability of a loss event is low and there are no mitigating controls in place. It’s up to each organization to define what “low probability” means, but most firms treat events expected to occur no more than once every 100 years as low probability. Examples of an unstable risk condition would be a DBA with unfettered, unmonitored access to personally identifiable information, or a stack of confidential documents sitting in an unlocked room. The annualized loss exposure would probably be relatively low, but no controls are in place to lower the loss event frequency.
A fragile risk condition is very similar to an unstable risk condition; the distinction is that there is a single control in place to reduce the threat event frequency, but no backup control(s). An example would be a critical SQL database that is backed up nightly but has no other controls protecting against an availability event (e.g. disk mirroring, database mirroring).
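As a rough illustration of how these two conditions might be flagged in practice, the sketch below tags register entries based on probability, impact and the number of mitigating controls. The function name, thresholds and control-counting logic are assumptions made for illustration, not part of the FAIR standard.

```python
def classify_risk_condition(annual_probability, impact, controls,
                            low_probability=0.01, high_impact=10_000_000):
    """Illustrative tagging of FAIR-style risk conditions.

    The thresholds are placeholders; each organization would set its own.
    """
    if annual_probability > low_probability or impact < high_impact:
        return None  # ordinary risk: handled by normal annualized analysis
    if len(controls) == 0:
        return "unstable risk condition"  # no controls reducing event frequency
    if len(controls) == 1:
        return "fragile risk condition"   # a single control, no backup
    return None

# The unmonitored-DBA example: low probability, high impact, no controls.
print(classify_risk_condition(0.005, 50_000_000, controls=[]))
# The nightly-backup-only database: a single control with no backup.
print(classify_risk_condition(0.005, 50_000_000, controls=["nightly backup"]))
```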
Conclusion
Don’t fight the Black Swan battle — leave that to philosophers and risk thinkers — but try to understand why someone is calling something a Black Swan. Then offer tools, such as those in the FAIR taxonomy, to help business leaders and colleagues conceptualize actual risk. Risk conditions describe these types of events, and the unique dangers they pose, with greater clarity and without outdated, often misused metaphors.
Originally published at www.fairinstitute.org.