Risk is an abstract concept and humans are notoriously bad at predicting it.¹

A 1% chance of an event occurring does not mean that it won’t happen. Even subject matter experts are notoriously poor at calculating risks.²

Unfortunately, two of the common methods (the WAGNER and BOGSAT methods, explained in the graphic below) are the least reliable.

More comprehensive methods of risk identification and estimation require additional information, which usually comes at a greater cost. There are, however, many relatively low-cost ways to improve risk estimation. There is compelling research indicating that risk matrices can often produce an inferior result to more quantitative methods.³

Below is some general guidance to help improve the accuracy of estimates:

Try to understand from the start what you do and do not know, and to what level of certainty/uncertainty.

Always seek and use the best available data, no matter how limited it may be. Where possible use annotated data, peer-reviewed research, quantitative data and statistical analysis to assess risk. Where this is not possible, document the nature and source of available information, the level of uncertainty, and any additional information that would be helpful (and the cost of attaining it).

Where comprehensive data is not available, it will still be useful to state whatever is reasonably known. For example: “We have 90% confidence that a server outage due to system breach will take at least 30 minutes to resolve but not more than 24 hours. Experience indicates that it is unlikely (5%) that we can bring it back online in under 30 minutes, and equally unlikely (5%) that it will take longer than 24 hours.”
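A stated interval like this can be put to work directly. One low-cost step, in the spirit of Hubbard’s approach, is to fit a lognormal distribution to the stated 90% interval. A minimal sketch in Python, using the outage figures above (the choice of lognormal is an assumption; durations and costs are often modelled this way because they are positive and right-skewed):

```python
import math

Z_90 = 1.645  # z-score bounding the central 90% of a normal distribution


def lognormal_params(lower, upper):
    """Return (mu, sigma) of a lognormal whose 90% CI is [lower, upper].

    The 5th and 95th percentiles sit 1.645 standard deviations either
    side of the mean in log-space.
    """
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * Z_90)
    return mu, sigma


# Outage duration in minutes: 90% CI of 30 minutes to 24 hours
mu, sigma = lognormal_params(30, 24 * 60)
median_minutes = math.exp(mu)  # the distribution's median (geometric midpoint)
```

The fitted distribution can then feed a simulation or be used to answer questions the raw interval cannot, such as the probability an outage exceeds a given duration.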

Use structured, defensible arguments such as causality diagrams, Analysis of Competing Hypotheses, Root Cause Analysis, 8 Step Process, Expected Monetary Value, and some of the frameworks from this blog.

Consider what additional data is required, and factor in the time/cost of acquiring it.

Break your estimates down to the right level of granularity: the more specific or granular the estimates, the more likely they are to be accurate.

Use Subject Matter Experts (SMEs) who are ‘calibrated’. Calibrated probability assessments are subjective probabilities assigned by individuals who have been trained to assess probabilities in a way that historically represents their uncertainty. By practicing with a series of trivia questions, it is possible for subjects to fine-tune their ability to assess probabilities.
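Calibration can be tracked quantitatively. One common scoring rule (not named in the text, but standard practice for this kind of training) is the Brier score; a minimal sketch, with made-up forecasts and outcomes for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened.

    0.0 is perfect; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)


# Probabilities an SME assigned to five events, and whether each occurred (1/0).
# These are illustrative figures, not data from the text.
forecasts = [0.9, 0.7, 0.8, 0.3, 0.6]
outcomes = [1, 1, 0, 0, 1]
score = brier_score(forecasts, outcomes)
```

Re-scoring an expert after each round of trivia questions makes improvement (or overconfidence) visible over time.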

Avoid making a point estimate. Instead, develop a *range* of possible outcomes. E.g.: a 90% confidence that Attack ‘A’, if successful, will cost between $1 million and $8 million. This at least clarifies the range of uncertainty. Consider using Monte Carlo simulation for more complex analysis.
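A minimal Monte Carlo sketch of the attack-cost example: the cost-if-successful is modelled as a lognormal fitted to the stated $1M–$8M 90% interval, and the 20% success probability is an assumed figure added for illustration:

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

Z_90 = 1.645
low, high = 1_000_000, 8_000_000  # stated 90% CI for cost if the attack succeeds
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * Z_90)

P_SUCCESS = 0.20  # assumed probability the attack succeeds (illustrative)
TRIALS = 100_000

# Each trial: does the attack succeed, and if so, what does it cost?
losses = [
    random.lognormvariate(mu, sigma) if random.random() < P_SUCCESS else 0.0
    for _ in range(TRIALS)
]
mean_loss = sum(losses) / TRIALS
```

Beyond the mean, the simulated `losses` list supports richer questions, such as the probability that losses exceed a budget threshold, which a single point estimate cannot answer.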

Use causation diagrams such as Root Cause Analysis or Ishikawa Diagrams.

Where possible, calculate an Expected Monetary Value.

¹ Tetlock, Philip E. *Expert Political Judgment: How Good Is It? How Can We Know?* New Ed edition. Princeton, N.J.: Princeton University Press, 2006.

² Hubbard, Douglas W., and Richard Seiersen. *How to Measure Anything in Cybersecurity Risk.* Hoboken: Wiley, 2016.

³ Thomas, Philip, Reidar Bratvold, and J. Eric Bickel. ‘The Risk of Using Risk Matrices’. New Orleans, 2013.
