Updated: Dec 19, 2019
There are lots of ways to conduct a security risk assessment but this is one model that we developed over the years. It's a highly visual model which has taken some time and thought to put together. Hopefully you'll find it useful or at least thought-provoking.
The graphic is designed to be expressive and act as a guide rather than the definitive approach in all situations. When I use it, I tend to think of it as a framework or checklist, a sort of mental model.
It's only really practical for large organizations with complex assessment requirements and in that environment, it can serve a lot of functions. For example:
Have I considered all the elements (whether explicitly or incorporated in some other element) and are they at some level considered in the assessment?
Does the structure of the investigation (and the final report) flow in a similar logical fashion?
A framework for the assessment team and me to make sure we're all on the same page in terms of definitions and relationships (e.g. Threat = Intent x Capability, but what elements go into establishing 'Intent'?).
Explaining to clients or bosses how the overall framework hangs together and its alignment with ISO 31000.
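The Intent x Capability relationship above can be sketched in code. This is purely illustrative: the 1-5 scoring scale and the function name are my own assumptions, not part of the SRMBOK framework.

```python
def threat_rating(intent: int, capability: int) -> int:
    """Combine intent and capability (each scored 1-5) into a threat rating.

    Mirrors the 'Threat = Intent x Capability' relationship: an actor with
    high intent but little capability (or vice versa) rates low overall.
    """
    for score in (intent, capability):
        if not 1 <= score <= 5:
            raise ValueError("scores must be on a 1-5 scale")
    return intent * capability

# A highly motivated but barely capable actor rates lower than a
# moderately motivated, moderately capable one.
print(threat_rating(5, 1))  # 5
print(threat_rating(3, 3))  # 9
```

The multiplication (rather than addition) is the point: if either factor is near zero, the threat is near zero, no matter how high the other factor is.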
When Miles and I and the broader SRMBOK team put it together, we looked at dozens of security and risk related assessment frameworks. They all approached things from a different perspective but we wanted a way to reflect how they all fitted together. For example, you'll see all the elements of the CARVER vulnerability model integrated into the framework.
I (and probably most people) tend to cherry pick and use what is more relevant in a particular situation. It's mostly a question of scale. For a small organization or small project, I might just list the criticalities and threats without getting too granular. For a large enterprise, the devil is in the detail so more detailed white boarding and analysis is worth it. But if I cover everything (at least mentally) it stimulates the thinking processes and makes for a more credible analysis.
Some people have pointed out that the Hierarchy of Controls (HoC) in the Risk Treatment phase (ESIEAP) is not designed to address a motivated and responsive attacker. I like using a HoC model when selecting treatments as it brings some life to the Swiss-Cheese model. HFACS, DDDRR, and other models are also good ways of looking at things, but ESIEAP seemed like the most robust one for developing and prioritising treatments. In the end, no risk model is fully evolved to deal with adaptive attackers. It's a dynamic and iterative process.
HoC also provides a useful framework for explaining to clients and bosses why one treatment is recommended as a higher priority than others, or the implications of taking the lower-cost treatment over a high-cost or less convenient solution. For example, selecting low-cost file-level security versus more expensive end-to-end encryption; or training people not to click on unknown links versus a comprehensive whitelisting/blacklisting solution.
Doing both might be best but many budget holders would prefer to pay for a sign saying 'keep away from the edge' than to erect guard rails at the cliff edge. They are both 'treatments' but when you put them into HoC it's clear which is better.
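The cliff-edge comparison above can be made mechanical: rank each candidate treatment by where it sits in the hierarchy. A minimal sketch, assuming ESIEAP expands to Elimination, Substitution, Isolation, Engineering, Administration, PPE (most to least preferred); the example treatments are mine, not from the source.

```python
# Ordered most-preferred to least-preferred control type (assumed expansion).
ESIEAP = ["Elimination", "Substitution", "Isolation",
          "Engineering", "Administration", "PPE"]

# Candidate treatments tagged with their control type.
treatments = [
    ("Sign: 'keep away from the edge'", "Administration"),
    ("Guard rail at the cliff edge", "Engineering"),
    ("Close the cliff path entirely", "Elimination"),
]

# Sort so higher-order (more preferred) controls come first.
ranked = sorted(treatments, key=lambda t: ESIEAP.index(t[1]))
for name, level in ranked:
    print(f"{level}: {name}")
```

Running this puts elimination first, the guard rail second, and the sign last, which is exactly the budget conversation: the sign is cheapest, but the hierarchy makes clear it is also the weakest class of control.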
And yes, the model is designed to be equally useful in all forms of security treatment and analysis: cyber, physical, personnel, information, etc. The scope and focus of the security risk assessment (SRA) might place more emphasis on different aspects, but the framework can fit any analysis.
Take, for example, a software pentesting exercise. Pentesting is useful on its own but is only one aspect of a security review. Now, if the scope is to identify software/network issues then a penetration test is a huge part of the review. Maybe that's all that is required (of that particular scope). It can also generate a treatment plan.
In terms of a full SRA however, it's only one input. And would inform mostly the vulnerability area. Adding in human factors, risk criteria, threat actors, criticality, staff competence, likelihood, and more would all be factors to consider.
Equally, it's a different kettle of fish (but same overall model) if the scope is purely technology related. Different again if the assessment addresses both technology and information.
But the model is just a starting point. As George Box said, "All models are wrong. Some are useful."
There is another section in the book which I'll post here soon, referring to the last element in the HoC: Personal Protective Equipment. I should probably tweak that model a bit because it sounds very much like it's referring to workplace health and safety. Indeed, that is where it comes from, but in my mind at least, its usage here refers to a last line of defense.
In other examples I've just used "Protective Equipment" or "Protective Markings & Equipment" which can be interpreted as:
Ballistic vests/helmets for hostile environments
Secure briefcases for transporting valuables
Markings on folders, such as printing "Confidential" in large red letters on a folder
File level encryption
Tamper seals, Torx screws, or secure screws on server cabinets
I've done a few examples over the years and will put some up at this website soon. :)