Security Risk Assessment in a nutshell

Updated: Jan 14

There are many ways to conduct a security risk assessment, but this is one model that we developed over the years. It's a highly visual model that has taken some time and thought to put together. Hopefully, you'll find it useful or at least thought-provoking.

The graphic is designed to be expressive and act as a guide. It's not the definitive approach in all situations. When I use it, I tend to think of it as a framework or checklist, a sort of mental model.

The full model is really only worthwhile for large organizations with complex assessment requirements, and in that environment it can serve several functions. For example:

  • Have I considered all the elements (whether explicitly or incorporated in some other factor), and are they at some level included in the assessment?

  • Does the structure of the investigation (and the final report) flow in a similar logical fashion?

  • A framework for the assessment team to make sure we're all on the same page in terms of definitions and relationships (e.g., Threat = Intent x Capability; but what elements go into establishing 'Intent'?).

  • Explaining to clients or bosses how the overall framework hangs together and its alignment with ISO 31000.
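To make the Threat = Intent x Capability relationship concrete, here is a minimal sketch in Python. The 0-to-1 scales, the sub-factors for 'Intent' (desire and expectance), and the way they are combined are all illustrative assumptions for demonstration, not part of the framework itself:

```python
# Illustrative sketch only: scales, sub-factors, and the averaging rule
# are assumptions for demonstration, not prescribed by the framework.

def intent_score(desire: float, expectance: float) -> float:
    """Hypothetical: combine two sub-factors of 'Intent' (each 0-1) by averaging."""
    return (desire + expectance) / 2

def threat_score(intent: float, capability: float) -> float:
    """Threat = Intent x Capability, each rated on a 0-1 scale."""
    return intent * capability

# A threat actor who strongly wants to attack (0.9) and half-expects to
# succeed (0.6), with moderate capability (0.5):
intent = intent_score(desire=0.9, expectance=0.6)   # 0.75
print(threat_score(intent, capability=0.5))          # 0.375
```

The point of writing it out, even informally, is that it forces the team to agree on what goes into each factor before anyone starts scoring.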

When Miles and I and the broader SRMBOK team put it together, we looked at dozens of security and risk-related assessment frameworks. They all involved different perspectives, but we wanted a way to reflect how they all fitted together. For example, you'll see all the elements of the CARVER vulnerability model integrated into the framework.

I (and probably most people) tend to cherry-pick and use what is more relevant in a particular situation. It's mostly a question of scale. For a small organization or small project, I might list the criticalities and threats without getting too granular. For a large enterprise, the devil is in the detail, so more detailed whiteboarding and analysis are worth it. But if I cover everything (at least mentally), it stimulates the thinking processes and makes for a more credible analysis.

Some people have pointed out that the Hierarchy of Controls (HoC) in the Risk Treatment phase (ESIEAP) is not designed to address a motivated and responsive attacker. I like using an HoC model when selecting treatments as it brings some life to the Swiss Cheese model. HFACS, DDDRR, and other models are also good ways of looking at things, but ESIEAP seemed like the most robust one for developing and prioritising treatments. In the end, no risk model is fully evolved to deal with adaptive attackers. It's a dynamic and iterative process.

HoC also provides a useful framework for explaining to clients and bosses why one treatment is recommended as a higher priority than others, or the implications of taking the lower-cost treatment over a higher-cost or less convenient solution. For example, selecting low-cost file-level security versus more expensive end-to-end encryption, or training people not to click on unknown links versus a comprehensive whitelisting/blacklisting solution.

Doing both might be best, but many budget holders would rather pay for a sign saying 'keep away from the edge' than erect guard rails at the cliff edge. Both are 'treatments,' but when you put them into the HoC, it's clear which is better.
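The sign-versus-guard-rail comparison can be sketched as a simple ranking exercise. Assuming ESIEAP expands to Eliminate, Substitute, Isolate, Engineering, Administrative, and PPE (hedged: check your own reference for the exact terms), and that the treatment names and tier assignments below are purely illustrative:

```python
# Hypothetical sketch: rank candidate treatments by ESIEAP tier, on the
# assumption that tiers earlier in the hierarchy are generally more reliable.
# The tier names and example treatments are illustrative, not definitive.

ESIEAP = ["eliminate", "substitute", "isolate",
          "engineering", "administrative", "ppe"]

treatments = [
    {"name": "Warning sign: keep away from the edge", "tier": "administrative"},
    {"name": "Guard rail at the cliff edge", "tier": "engineering"},
]

# Sort so the more reliable class of control comes first.
ranked = sorted(treatments, key=lambda t: ESIEAP.index(t["tier"]))
for t in ranked:
    print(f"{t['tier']:>14}: {t['name']}")
```

Laid out this way, the guard rail (an engineering control) outranks the sign (an administrative control) before cost even enters the conversation, which is exactly the argument you want to be able to show a budget holder.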

And yes, the model is designed to be equally applicable in all forms of security treatment and analysis - cyber, physical, personnel, information, etc. The scope and focus of the security risk assessment (SRA) might emphasize different aspects but the framework can fit any analysis.

Take, for example, a software pentesting exercise. Pentesting is useful on its own but is only one aspect of a security review. If the scope is to identify software/network issues, then a penetration test is a huge part of the review. Maybe that's all that is required (of that particular scope). It can also generate a treatment plan.

In terms of a full SRA, however, it's only one input, and one that would mostly inform the vulnerability area. Human factors, risk criteria, threat actors, criticality, staff competence, likelihood, and more would all be factors to consider.

Equally, it's entirely different (but the same overall model) if the scope is purely technology related. Different again if the assessment addresses both technology and information.

But the model is just a starting point. As George Box said, "All models are wrong, but some are useful."

There is another section in the book, which I'll post here soon, that refers to the last element in the HoC: Personal Protective Equipment. It sounds very much like a workplace health and safety term, and that is where it comes from, but its usage here refers to a last line of defense.

In other examples, I've used "Protective Equipment" or "Protective Markings & Equipment," which can be interpreted as:

  • Ballistic vests/helmets for hostile environments

  • Secure briefcases for transporting valuables

  • Protective markings, such as printing "Confidential" in large red letters on a folder

  • File-level encryption

  • Tamper seals, Torx screws, or secure screws on server cabinets

I've done a few examples over the years and will post more on this website soon.



©2019 by Julian Talbot