Cyber security assurance built on bowtie risk analysis

Peter Lacey | Risk Management News

The field of cyber security risk is complex because the threat landscape is ever-changing while the threats themselves are constant.  Cyber security assurance becomes vital because cyber controls are like a dam: the water pressure is always present, flowing and seeking out weak points.

Building a security risk management system using the ISO 27000 standard is a good start to providing some structure to the risk controls.  Bowtie risk analysis is a fantastic platform for exploring cyber security risks and identifying the critical controls across a range of different threat trajectories.

This article looks at the practicalities of building a cyber security assurance system on the foundation of cyber bowtie risk analysis.  Assuring constant protection (and continuous improvement) relies on:

  • Critical Controls framework: setting the right critical controls
  • Performance Standard framework: determining the right performance to audit
  • Cyber Security Assurance framework: developing a schedule and sequence of audit activities to provide assurance
  • Performance Improvement: following up on audit findings, corrective actions, and opportunities for improvement.

These steps form part of the critical control management approach.

This article is the second part in a series on cyber security risk analysis and assurance.  To learn more about using bowtie risk analysis using the ISO 27000 framework, read the first article here.  For an overview of critical control management in practice, read our article on live critical control management here.

Critical Controls framework

Here we are picking up on the risk treatment step of the ISO 27005 standard (sections 9.1 & 9.2).  We assume that we have already developed the bowtie analysis model for our cyber security risk scenarios.  This gives us a profile of the controls that we are using to protect against particular types of threat.  An example of what this looks like is shown below.

[Figure: cyber security bowtie risk analysis example]

Setting critical controls allows the business to prioritise resources.  Rather than audit every single cyber security control that has been identified, we focus on the controls that make the most difference.  Our criteria for a critical control might include considerations such as:

  1. The control is the only one that is effective on a particular cause or consequence pathway.
  2. The control is effective against many different types of cause, consequence or threat.
  3. The control has a significant risk reduction impact compared to other controls on the same pathway.

Let’s look at a specific example of how this might apply to a bowtie:

[Figure: critical control selection on a cyber security bowtie]

We have three controls on this causal line.  Applying the critical control criteria from above:

  • Monitoring of server access is not the only control on the line (not #1).  If it were used in many other bowties it might be a critical control, because it would be effective against many different types of cause (#2), but it is not.  It does not have a significant risk reduction impact: the effectiveness is only “partially effective”, and because it is a deterrence-based control it relies on human compliance to be effective (not #3). NON-CRITICAL
  • Physical access controls on server room is not the only control on the line (not #1), and a review of our other risk assessments found that the control is not used elsewhere (not #2).  It does have a significant risk reduction impact: the effectiveness is “mostly effective”, which is the highest rating on the line, and it is a prevention-based control, which is preferable to controls that only deter or detect threats (#3). CRITICAL
  • Locked-down interfaces (e.g. USB ports) is not the only control on the line (not #1), and a review of our other risk assessments found that it is not used elsewhere (not #2).  The effectiveness of the control is not particularly high, but the fact that it is an elimination-based control means that it is more reliable than other types of control on the pathway. MAYBE CRITICAL

All of these criticality decisions are subjective and based on the best judgments of the individuals involved in the risk assessment process.
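
For teams that keep their bowtie output in a structured register, the three criteria above can also be turned into a repeatable first-pass check.  The sketch below is a minimal illustration in Python: the Control fields, the classify_criticality helper, the effectiveness rankings and the thresholds are all assumptions made for this example (they are not part of any bowtie tool or standard), and the output is only a prompt for discussion, not a substitute for the team's judgment.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """Hypothetical record for one control on a bowtie pathway."""
    name: str
    only_control_on_pathway: bool     # criterion 1
    appears_in_other_bowties: bool    # criterion 2 (effective against many causes/threats)
    effectiveness: str                # e.g. "partially effective", "mostly effective"
    control_type: str                 # e.g. "deterrence", "prevention", "elimination"

# Assumed ranking of effectiveness ratings and "stronger" control types, for illustration only.
EFFECTIVENESS_RANK = {"not effective": 0, "partially effective": 1,
                      "mostly effective": 2, "fully effective": 3}
STRONG_TYPES = {"elimination", "prevention"}

def classify_criticality(control: Control, best_rank_on_pathway: int) -> str:
    """Apply the three criticality criteria and return a suggested verdict."""
    if control.only_control_on_pathway:
        return "CRITICAL"           # criterion 1: the only line of defence
    if control.appears_in_other_bowties:
        return "CRITICAL"           # criterion 2: protects many pathways
    rank = EFFECTIVENESS_RANK.get(control.effectiveness, 0)
    if rank >= best_rank_on_pathway and control.control_type in STRONG_TYPES:
        return "CRITICAL"           # criterion 3: biggest risk reduction on the line
    if control.control_type in STRONG_TYPES:
        return "MAYBE CRITICAL"     # reliable type, modest effectiveness: flag for discussion
    return "NON-CRITICAL"

# Worked example mirroring the three controls discussed above.
controls = [
    Control("Monitoring of server access", False, False, "partially effective", "deterrence"),
    Control("Physical access controls on server room", False, False, "mostly effective", "prevention"),
    Control("Locked-down interfaces (e.g. USB ports)", False, False, "partially effective", "elimination"),
]
best = max(EFFECTIVENESS_RANK[c.effectiveness] for c in controls)
for c in controls:
    print(f"{c.name}: {classify_criticality(c, best)}")
```

Running the sketch reproduces the verdicts reached above, but in practice the team would review each suggested verdict before confirming it.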

A key consideration during this process should be how essential it is that the control be regularly audited.  The purpose of denoting critical controls is to bring them into the cyber security assurance program, and therefore we can use this as a final litmus test to help determine whether we want a particular control to be critical.  For example:

  • Monitoring of server access: we can audit tangible aspects of the control (e.g. the type of CCTV or access control system monitoring we use), but the actual deterrent effect of the control is going to be very difficult to audit. It could also be argued that deterrence is much less important than physical prevention measures, and therefore our resources are better spent on good prevention than deterrence.
  • Physical access controls on server room: this control is probably the best immediate protection against physical attack, and also needs to be maintained. If we didn’t audit this control for a year, we would probably find that our physical security measures were not in serviceable condition (e.g. worn-out doorframes) or were no longer being observed (e.g. staff not bothering to keep the door locked at all times).  Regular auditing is therefore a good use of resources to protect against physical attack.
  • Locked-down interfaces (e.g. USB ports): this control provides good protection against one particular mode of physical attack, but doesn’t need to be maintained quite as regularly as other controls. If server equipment is purchased and installed with no useable USB ports, we don’t necessarily need to audit that once a month to know that the USB ports are still missing.

Setting a control as “non-critical” also does not mean that we abandon all interest in that control.  Some controls are already managed and monitored by other processes, meaning we can rely on them even if we do not include them in our cyber security assurance program.  Sometimes the process of assessing criticality will identify that we missed controls, which can then be added and also assessed.  For example: our locked-down interfaces control might not be critical, but the purchasing policy for security-critical equipment which governs the installation of new equipment might be the real critical control.

Critical controls can also be grouped to make the assurance process more manageable.  If several controls are similar in function, and are critical for the same reasons, there could be a good case to group them together for the purposes of critical control verification.

For example: server room physical access controls and office physical security systems are similar in function.  Both are designed to exclude unauthorised persons through physical barriers.  From an auditing perspective, we could probably audit both controls in the same day using the same audit tool.  Grouping them together under a single Base Control (common critical control) would therefore reduce the number of entities we target for verification without compromising on coverage.  For more on setting up Base Controls and other types of critical control group/class, see our articles here and here.
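
Where the register of critical controls is held as structured data, the grouping step can be as simple as tagging each control with the Base Control it rolls up to and collecting the tags together.  The short Python sketch below is illustrative only; the control names and the base_control field are assumptions made for the example rather than terms from any particular tool.

```python
from collections import defaultdict

# Hypothetical critical controls tagged with the Base Control they roll up to.
critical_controls = [
    {"name": "Server room physical access controls", "base_control": "Physical security barriers"},
    {"name": "Office physical security systems", "base_control": "Physical security barriers"},
    {"name": "Purchasing policy for security-critical equipment", "base_control": "Procurement governance"},
]

# Each Base Control becomes a single target for verification.
groups = defaultdict(list)
for control in critical_controls:
    groups[control["base_control"]].append(control["name"])

for base_control, members in groups.items():
    print(f"{base_control}: one audit covering {len(members)} control(s) -> {members}")
```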

While our team is thinking about criticality, they can also start to think about the key aspects of performance that we would expect from those critical controls.  This is the first step to developing performance standards as part of the cyber security risk assurance program.

Performance Standards framework

Building performance standards is about determining the tools and criteria that can be used to assess critical control performance.  The performance standard becomes the audit tool that the auditor uses to check the real-world performance of a critical control.

Again, we are concerned with prioritising and getting the most from our limited resources for cyber security assurance.  We have narrowed the list down to the most critical controls; now we need to focus on the most important questions that need to be answered to provide assurance.

This is a two-step process:

  1. Determine the areas of performance (Performance Elements) which are the most important for the critical control.
  2. Determine the specific performance attributes (Performance Criteria) which should be audited for each Performance Element.

Using the example of physical access controls, step 1 might determine the Performance Elements shown below:

  • Design: The original design, equipment specifications and any modifications must meet the standard.
  • Installation: The installation of the equipment and any modifications to the area must meet the standard.
  • Function: Each piece of equipment must be fit-for-purpose and operating properly.
  • Maintenance: Each piece of equipment must be regularly inspected, serviced and maintained.

As part of this process the team needs to ask: “if we audited only the performance elements that we’ve identified, would we be confident that the control is working effectively?”

The same logic is applied to determining the Performance Criteria.  Read our article here to get some insights on how to build smarter Performance Criteria into performance standards.  Building on the example above, we might only develop two or three criteria for each element:

Design
  • Have the physical access controls been designed in accordance with relevant standards (e.g. PSPF/SCEC) for the type of threat?
  • Is each piece of physical security equipment compliant with an approved design for that type of equipment?
Installation
  • Has each piece of physical security equipment been installed by a licenced installer in accordance with the design/OEM specifications?
  • Have any changes (or additions) to the physical security equipment been subject to change management with reference to the original design?
Function
  • Have security doors been fitted to all operable entry points to the secure area?
  • Have all security doors been fitted with mechanical or logical access control systems to allow only authorised access?
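
Where performance standards are maintained as structured data (for example, so that audit checklists can be generated automatically), the element-and-criteria hierarchy can be captured along the lines of the sketch below.  The class names, fields and the checklist helper are assumptions made for illustration, and the example content is abbreviated from the physical access control standard above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceElement:
    """One area of performance for a critical control (e.g. Design, Installation)."""
    name: str
    intent: str
    criteria: List[str] = field(default_factory=list)  # the specific questions an auditor answers

@dataclass
class PerformanceStandard:
    """The audit tool for one critical control (or group of critical controls)."""
    critical_control: str
    elements: List[PerformanceElement] = field(default_factory=list)

    def checklist(self) -> List[str]:
        """Flatten the standard into the question list an auditor works through."""
        return [f"[{e.name}] {q}" for e in self.elements for q in e.criteria]

# Abbreviated example standard for the physical access controls discussed above.
standard = PerformanceStandard(
    critical_control="Physical access controls on server room",
    elements=[
        PerformanceElement("Design", "Design and modifications meet the standard",
                           ["Designed in accordance with relevant standards (e.g. PSPF/SCEC)?",
                            "Each piece of equipment compliant with an approved design?"]),
        PerformanceElement("Function", "Equipment is fit-for-purpose and operating properly",
                           ["Security doors fitted to all operable entry points?",
                            "All security doors fitted with access control systems?"]),
    ],
)

for question in standard.checklist():
    print(question)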

The team needs to develop a performance standard for every critical control (or every group of critical controls) that needs to be audited.  Consideration needs to be given to the intended auditors that will be involved: if we expect the office managers to carry out the audits, our performance standards will need to be significantly less technical than if we had a cyber security expert to conduct all the audits at each office.

Consideration also needs to be given to the criteria used and the likely timeframe of the audit: for example, if we are going to audit our physical access controls every day or every week, it is less useful to audit the original design every time.  Our efforts are better focused on the performance issues that can arise in the space of days or weeks (such as the degree of compliance with keeping doors locked at all times).  This leads into the next activity, which involves determining the schedule for cyber security risk assurance.

Cyber Security Assurance framework

To recap on our progress so far, we now have all the key inputs for our assurance program:

  • Identified critical controls (or groups of critical controls)
  • Defined performance standards to audit the critical controls

Now the next step is to determine how we will combine these into a comprehensive cyber security assurance program.  We need to consider:

  • How often critical controls should be audited (frequency)
  • When they should be audited (scheduling, workload, opportunities)
  • Who should carry out the audits (responsibility)
  • How the audits should be carried out (method)

Ideally our critical controls should be audited on a comparable frequency, so that we can use dashboard reporting to review control performance side-by-side.  It is difficult to compare performance when some audit results are several months old and others are never more than a week old.

Having determined the frequency, thought can be given to when controls should be audited.  To some extent this is purely a scheduling activity which takes account of the workloads of available auditors.  Consideration should also be given to the timing of audits with respect to opportunity.  For example, if the business has a large project coming up which will involve many temporary contractors being on site, that presents a great opportunity to audit critical controls under strenuous conditions.  Compliance with access control policies can be audited much more robustly when there are many records to check (e.g. many visitors and temps) rather than when there are very few to check (e.g. a small pool of the same workers each time).

The decision of who should carry out the audits is therefore partly about who is best qualified to do the work, but also who is actually available and has capacity in their workload to carry out the audits.  This is a much simpler job when the audit team is made up of centralised cyber security experts.  Delegating audits to line managers or frontline personnel can be advantageous because it captures real frontline insight, but also presents challenges in terms of training, preparing, and finding time for those workers to get involved.

The method for carrying out audits reflects the frequency, scheduling and responsibility for critical control verification audits.  For example, an audit that is completed once a month by a frontline worker in a day needs to be streamlined and accessible.  An audit that is completed once per year by an internal auditor can be more complex, because there is more time to prepare and plan the audit.  Consideration should also be given to the format used for the audit, even down to the type of device the auditor uses (e.g. tablets for frontline audits).
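
Bringing frequency, scheduling, responsibility and method together, the verification framework can be expressed as a simple schedule of planned audits.  The Python sketch below is one way to represent that schedule; the field names, dates and example entries are illustrative assumptions rather than a prescribed format.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class VerificationActivity:
    """One planned audit of a critical control (or Base Control group)."""
    critical_control: str
    frequency_days: int   # how often the audit recurs
    auditor_role: str     # who carries it out
    method: str           # how it is carried out
    last_audited: date

    def next_due(self) -> date:
        return self.last_audited + timedelta(days=self.frequency_days)

# Illustrative assurance schedule for the controls discussed above (dates are made up).
schedule: List[VerificationActivity] = [
    VerificationActivity("Physical access controls on server room", 30,
                         "Office manager", "Streamlined checklist on tablet", date(2024, 5, 1)),
    VerificationActivity("Purchasing policy for security-critical equipment", 365,
                         "Internal auditor", "Full document and records review", date(2024, 1, 15)),
]

today = date(2024, 6, 1)
for activity in sorted(schedule, key=lambda a: a.next_due()):
    status = "OVERDUE" if activity.next_due() < today else "scheduled"
    print(f"{activity.next_due()} {status}: {activity.critical_control} "
          f"({activity.auditor_role}, {activity.method})")
```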

The art of building the verification framework comes from striking a balance between all these elements of the assurance program.  The business needs comprehensive coverage using limited resources.  It also needs quality data to drive good decision-making on cyber security risk, and it needs to carry on day-to-day operations with minimal disruption from cyber security assurance.  Achieving the balance of all these things may take time.

Sometimes expert partners and consultants can provide wisdom.  Other times, colleagues and other organisations can be a great source of ideas and lessons learned.  Ultimately, though, the journey towards achieving cyber security assurance starts with taking the first leap to get things moving.

The first leap is to build the critical controls, performance standards and verification framework in its first incarnation.  The next leap is to build on the successes and learn from the failures of the first iteration.  The team needs to practice continuous improvement by:

  • Improving the performance of critical controls which are not meeting the standard; and
  • Improving the performance of the cyber security risk assurance program itself in terms of efficiency, effectiveness and fitness-for-purpose.

This step of continuous improvement will be covered in our next article in the series on cyber security assurance.
