Optimizing Risk Response, Unfiltered
I mentioned in a previous blog post that I just wrapped up two fairly large projects for ISACA: a whitepaper titled “Optimizing Risk Response” and a companion webinar titled “Rethinking Risk Response.”
The whitepaper was peer reviewed and written in an academic tone. After reviewing my notes one last time, I decided to write up a post capturing some of my thoughts on the topic and the process: unfiltered, of course, and a little saltier than a whitepaper.
Behind the Scenes
I’m a member of ISACA’s Risk Advisory Group - a group that advises on ISACA webinars, blogs, whitepapers, journal articles, projects, and other products on the broad topic of risk. When the opportunity came up to write a whitepaper on risk response, I jumped at the chance. It seemed like a boring old topic, one that’s been around since the first formal risk management frameworks, so I knew I needed to find a unique angle and spin to make it engaging and give risk managers something new to consider.
First came the literature review. I read the risk response sections of all major risk frameworks from technology, cybersecurity, operational risk, enterprise risk management, and even a few from financial risk. I also read blogs, articles, and project docs that included risk response topics. I came out of the literature review with a book full of notes that I summarized into the following four ideas:
The topic of risk response is not settled, especially in technology/IT risk. By “settled,” I mean that standards bodies and practitioners generally agree on what risk response is and how to use it.
Risk response is erroneously treated as synonymous with risk mitigation. Risk frameworks don’t make this mistake, but organizational implementations and practitioners do.
Most risk response frameworks assume the adoption of qualitative risk techniques, which makes it challenging, sometimes impossible, to weigh the pros and cons of each option. This is probably why most practitioners default to mitigate. Qualitative methods do not allow for the discrete analysis of different response options strategically applied to risk.
Employing risk response can be fraught with unintended consequences, such as moral hazard, secondary risk, and cyber insurance policy gaps.
Ah, so the angle became crystal clear to me. The central themes of the whitepaper are:
Focusing on risk mitigation as the sole response option is inefficient.
Evaluation of each risk response option is an integral part of the risk management process.
Risk response doesn’t exist in a vacuum. It’s all part of helping the organization achieve its strategic objectives, bounded by risk tolerance.
Risk quantification is the tool you need to achieve efficient and optimized risk response, including identifying and reacting to unintended consequences.
The themes above gave the whitepaper a fresh take on an old topic. I’m also hoping that the practical examples of using risk quantification to gain efficiencies help practitioners see it as a strategic tool and nudge them closer to adopting it.
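To make that concrete, here is a minimal sketch of what comparing response options quantitatively can look like. The numbers and options are made up for illustration; they are not from the whitepaper, and a real analysis would use ranges and distributions rather than single point estimates.

```python
# Minimal sketch: comparing risk response options using simple annualized
# expected loss (probability of loss x loss magnitude).
# All figures are illustrative, not from the whitepaper.

probability_of_loss = 0.10        # chance of the loss event in a given year
loss_magnitude = 2_000_000        # estimated loss if the event occurs

baseline_expected_loss = probability_of_loss * loss_magnitude  # $200,000

options = {
    # option: (annual cost of the option, residual expected loss afterward)
    "accept":   (0,       baseline_expected_loss),
    "mitigate": (150_000, 0.04 * loss_magnitude),   # control reduces probability
    "transfer": (60_000,  0.10 * 500_000),          # insurance caps the retained loss
}

for name, (cost, residual) in options.items():
    total = cost + residual
    print(f"{name:>8}: cost=${cost:>9,.0f}  residual=${residual:>9,.0f}  total=${total:>9,.0f}")
```

In this made-up example, transferring the risk beats both accepting it and the reflexive mitigation project. That is exactly the kind of comparison a ranked, color-coded list cannot give you.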
Why Risk Response ≠ Risk Mitigation
Reacting and responding to risk is an embedded and innate part of the human psyche. All animals have a “fight or flight” response, which can be thought of as risk mitigation or risk avoidance, respectively. The concept of risk transference started forming in the 1700s BCE with the invention of bottomry, an early form of maritime loan that doubled as shipping insurance: the lender absorbed the loss if the ship was lost.
Abraham de Moivre, a French mathematician, changed the world in 1718 with a seemingly simple equation. He created the first definition of risk that paired the chances of something happening with potential losses.
“The Risk of losing any sum is the reverse of Expectation; and the true measure of it is, the product of the Sum adventured multiplied by the Probability of the Loss.” - Abraham de Moivre, The Doctrine of Chances (1718)
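In modern terms, that is expected loss: risk = probability of loss × sum at risk. To use made-up numbers, a 5 percent chance of losing a $200,000 cargo carries a risk of 0.05 × $200,000 = $10,000.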
This evolved definition of risk changed the world and the way humans respond to risk. Gut checks, “fight or flight,” and rudimentary forms of risk transference like bottomry were given the beginnings of an analytical framework, leading to better-quality decisions. New industries were born. First, modern insurance and actuarial science (the first risk managers) sprang up at Lloyd’s of London. Many others followed. Modern risk management and analysis provided the ability to analyze response options and employ the best option, or combination of options, to further strategic objectives.
All risk management at this time was quantitative, except it wasn’t called “quantitative risk.” It was just called “risk.” Abraham de Moivre used numbers in his risk calculation, not colors. Quantitative methods evolved throughout the centuries, adding Monte Carlo methods as one example, but de Moivre’s definition of risk is unchanged - even today. If you are interested in the history of risk and risk quantification, read the short essay by Peter L. Bernstein, “The New Religion of Risk Management.”
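Monte Carlo methods are worth a quick illustration, since they are the bridge from de Moivre’s single point estimate to the loss distributions modern quantitative risk analysis works with. A minimal sketch, with made-up parameters that are not from the whitepaper:

```python
# Minimal Monte Carlo sketch: instead of a single probability x magnitude
# point estimate, simulate many possible years and look at the distribution.
# All parameters are illustrative.
import random

SIMULATIONS = 100_000
probability_of_event = 0.10

losses = []
for _ in range(SIMULATIONS):
    if random.random() < probability_of_event:
        # Loss magnitude is uncertain, so draw it from a skewed distribution
        # rather than using a single number (lognormal keeps losses positive).
        losses.append(random.lognormvariate(13.0, 1.0))  # median around $440k
    else:
        losses.append(0.0)

expected_loss = sum(losses) / SIMULATIONS
p95 = sorted(losses)[int(0.95 * SIMULATIONS)]
print(f"expected annual loss: ${expected_loss:,.0f}")
print(f"95th percentile year: ${p95:,.0f}")
```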
Something changed in the late 1980s and 1990s. Business management diverged from all other risk fields, seeking easier and quicker methods. Qualitative analysis (colors, adjectives, ordinal scales) via the risk matrix was introduced. The new generation of risk managers using these techniques lost the ability to analytically weigh all of the response options available to them. The matrix allows a risk manager to rank risks on a list, but not much more (see my blog post, The Elephant in the Risk Governance Room). The resulting list is best suited to mitigation; if you have a list of 20 ranked risks, you mitigate risk #1, then #2, and so on. This is the exact opposite of an efficient and optimized response to risk.
In other words, when all you have is a hammer, everything looks like a nail.
It’s worth noting that other risk fields did not diverge in the 1980s and 1990s and still use quantitative risk analysis. (It’s just called “risk analysis.”)
Two examples of an over-emphasis on mitigation
First, the Wikipedia article on IT Risk Management (as of August 16, 2021) erroneously conflates risk mitigation with risk response. According to the article, the way an organization responds to risk is risk mitigation.
Second, the OWASP Risk Rating methodology makes the same logical error. According to OWASP, after risk is assessed, an organization will “decide what to fix” and in what order.
To be fair, neither Wikipedia nor OWASP is a risk management framework, but both are trusted and used by security professionals starting a risk program.
There are many more examples, but the point stands. In practice, the default way to react to IT/cyber risk is to mitigate. It’s what we security professionals are programmed to do, but if we do it blindly, we’re potentially wasting resources. It’s certainly not a data-driven, analytical decision.
Where we’re heading
We’re living in a time when cybersecurity budgets are largely approved without thoughtful analysis, primarily out of fear. I believe the day will come when we lose that final bit of trust the C-suite has in us, and we’ll have to really perform forecasts, you know, with numbers, like operations, product, and finance folks already do. Decision-makers will insist on knowing how much risk a $10m project reduces, in numbers. I believe the catalyst will be an increase in cyberattacks like data breaches and ransomware, with a private sector largely unable to do anything about it. Lawsuits will follow, alleging that companies using poor risk management techniques are not practicing due care to safeguard private information, critical infrastructure, and the like.
I hope the whitepaper gives organizations new ideas on how to revive this old topic in risk management programs, and this unfiltered post explains why I think the subject is ripe for disruption. As usual, let me know in the comments below if you have feedback or questions.
“Optimizing Risk Response” | whitepaper
“Rethinking Risk Response” | webinar