How to use critical control performance standards for control verification

Critical control verification relies on good performance standards in order to be effective.  In part two of our series on performance standards, we look at how they can be structured and automated to reduce the burden on the frontline risk team.

Here we’ll be focusing specifically on good structure and design of critical control performance standards, following on from our previous post on principles for good performance standards (here).

Critical Control Verification and Performance standards

Critical control verification is the practice of assuring that critical controls are in place and effective in reducing risk.  The goal is to verify the actual status of the control “in the wild”, and provide actionable intelligence to decision-makers where the effectiveness is more or less than was expected.

Critical control performance standards are the standards by which performance is judged.  For a critical control, these are the precise specifications the control must meet to provide the expected risk reduction benefit.

To take a simplistic example, we might be looking at the critical control of a fall arrest system.  The critical control performance standard could cover aspects such as:

  • The fall arrest system is used where a fully-contained work area at height cannot be achieved.
  • The system must use a full body harness rated to withstand the maximum load from a falling person.
  • The system must have lanyards with a secondary locking mechanism.
  • The system must have a shock absorber in place.
  • The system must be used with at least two workers present (i.e. one more than the worker at height).

Each of these criteria has been determined to be essential for the control to be in place and effective.  For example, the control cannot be said to be in place if it is not used wherever a fully-contained work area at height is unavailable.  The system cannot be said to be effective if it does not incorporate a full body harness rated to the appropriate load.

Designing Critical Control Performance Standards

Building effective performance standards for critical controls requires careful analysis of the essential characteristics of performance.  We need to cover broad elements of performance and specific criteria that can be applied at the frontline.

In RiskView, we separate performance standards into two elements in a hierarchy:

  • Performance elements, which are broad elements of performance.
  • Performance criteria, which are the specific aspects of the performance element that are used to judge performance.

If we judge each control against these performance elements and criteria, we should be able to form a comprehensive view of the effectiveness of the control in practice.

Let’s use a simple example of a critical control performance standard for vehicle safety devices.

PE 1: Vehicles used on site must be fitted with safety-critical equipment.
PC 1.1: Light vehicles must be fitted with driver-side and passenger-side airbags.
PC 1.2: Light vehicles must be fitted with a roll cage.
PC 1.3: Light vehicles must be fitted with only front-facing seating.  Each seat must be fitted with a 2- or 3-point inertia seat belt.
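The two-level hierarchy above can be sketched as simple nested data.  This is a minimal illustration only, assuming a hypothetical in-memory representation rather than RiskView's actual data model:

```python
# Hypothetical representation of a critical control performance standard:
# performance elements (PE), each containing performance criteria (PC).
performance_standard = {
    "PE 1": {
        "description": "Vehicles used on site must be fitted with safety-critical equipment.",
        "criteria": {
            "PC 1.1": "Light vehicles must be fitted with driver-side and passenger-side airbags.",
            "PC 1.2": "Light vehicles must be fitted with a roll cage.",
            "PC 1.3": "Light vehicles must be fitted with only front-facing seating, "
                      "with a 2- or 3-point inertia seat belt per seat.",
        },
    },
}

# An audit walks every criterion under every element.
for pe, element in performance_standard.items():
    for pc, requirement in element["criteria"].items():
        print(f"{pe} / {pc}: {requirement}")
```

Judging each criterion individually, then rolling results up by element, is what lets us form a comprehensive view of control effectiveness.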

The other essential component of the critical control performance standard is a defined metric for measuring performance.  In the above example, the metrics are fairly straightforward.  Either all light vehicles inspected had roll cages, or they did not.

Other types of performance criteria might have a range of different responses.  Worker competency might be rated as high/medium/low.  The condition of a barrier might be rated as 25%, 50%, 75% or 100%.

In RiskView, all of these aspects of the critical control performance standards are taken into the template.  It’s important to make sure that the performance elements and performance criteria are directly applicable to the effectiveness of the critical control.

Each performance standard in RiskView can be mapped to a critical control.  When we generate a critical control verification activity against that critical control, we can select the performance standard as the basis of the audit.  RiskView will identify where the critical control appears (i.e. which sites, which areas of work/risk), and create an audit with the critical control and performance standard both populated.

It can be a useful exercise to add guidance notes for each performance criterion.  This text will appear in the audit to clarify precisely what the auditor should be looking for.  We could also use this to specify quantities to be inspected (e.g. 10% of training records for frontline personnel).

RiskView also allows us to allocate a weighting to individual performance criteria.  The overall score for the activity will ordinarily be calculated with each performance criterion carrying equal weight (e.g. if one criterion out of 10 does not meet the standard, the overall score will be 90%).  We can weight the most critical performance criteria so that the overall score reflects the varying importance of different criteria.
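The calculation described above amounts to a weighted average.  Here is a short sketch under the assumption that each criterion receives a result between 0.0 and 1.0 and an optional weight; RiskView's internal calculation may differ in its details:

```python
def overall_score(results, weights=None):
    """Weighted average of criterion results, each in the range 0.0-1.0.

    With no weights supplied, every criterion contributes equally,
    so one failure out of ten criteria gives 9/10 = 90%.
    """
    if weights is None:
        weights = [1.0] * len(results)
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(results, weights)) / total_weight

# Ten criteria, one failure, equal weighting -> 90%
print(overall_score([1.0] * 9 + [0.0]))  # 0.9
```

Passing explicit weights lets a single high-importance criterion pull the overall score down further than its peers.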

Planning ahead for valuable control effectiveness reporting

One of the challenges of building critical control performance standards is planning ahead.  The performance standards used will strongly shape our critical control performance reporting.  This will affect our overall compliance performance, and the overall risk exposure that gets reported to the executive team.  The findings of critical control verification will ultimately guide decision-making on risk, so we need to get the best data possible from assurance.

Therefore we need to be careful to design good performance standards.

In RiskView, for example, the software offers some powerful data aggregation capabilities.  From the dashboard we can access reporting on risk exposure and control effectiveness across the business.

Verification findings are summarised as a numerical score, which is based on the responses to each performance criterion (and the weighting, if any).  To make sure that the data seen by everyone on the dashboard accurately reflects reality, we need to make sure that each performance criterion is relevant, correctly measured, and appropriately weighted (if necessary).

This might call for some validation of the performance standards using real-world data.  This should quickly identify issues with the criteria responses or the overall weighting.  This can be done using desktop audits or by scheduling an in-field validation alongside other planned verification activity.

RiskView provides an aggregated view of the data with drill-down to examine responses to individual criteria.  It is important that the number of performance elements and performance criteria reflects the significance of each aspect of performance.  If this cannot be achieved by re-designing the questions to be asked, we can assign a weighting to particular criteria.

This works well if we have a performance element which is simple to verify but vitally important to our confidence in the effectiveness of the control.  To build on the earlier example of vehicle safety devices:

PE 1: Vehicles used on site must be fitted with safety-critical equipment.
PC 1.1: Light vehicles must be fitted with driver-side and passenger-side airbags.
PC 1.2: Light vehicles must be fitted with a roll cage.
PC 1.3: Light vehicles must be fitted with only front-facing seating.  Each seat must be fitted with a 2- or 3-point inertia seat belt.
PE 2: Vehicles must be fit for purpose given their likely operating conditions.
PC 2.1: Light vehicles must be of mine specification with wheel base and suspension suited to both on-road and off-road driving.

Performance element 1 has 3 criteria, each of which is a separate question relating to different aspects of design.  Performance element 2 is clearly equally important, because the specific safety features (such as a roll cage) need to be complemented by a design selection which controls the manner in which a rollover may take place.  This would be a good example of a situation where the second performance element might be weighted so as to equal the audit score contribution of the first performance element.
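To illustrate the arithmetic, weighting PC 2.1 at three times the default makes performance element 2 contribute as much to the overall score as performance element 1's three criteria combined.  This is a hypothetical worked example, not RiskView's exact formula:

```python
# Criterion results (1.0 = meets standard, 0.0 = does not).
results = {"PC 1.1": 1.0, "PC 1.2": 1.0, "PC 1.3": 1.0, "PC 2.1": 0.0}

# PC 2.1 weighted 3x so PE 2 matches PE 1's total contribution.
weights = {"PC 1.1": 1.0, "PC 1.2": 1.0, "PC 1.3": 1.0, "PC 2.1": 3.0}

score = sum(results[c] * weights[c] for c in results) / sum(weights.values())
print(f"{score:.0%}")  # 50% - a failed PC 2.1 halves the score
```

With equal weighting the same failure would only drop the score to 75%, understating how much the vehicle design selection matters.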

This is important given that our dashboard reporting will aggregate the data to present us with a digestible report.  When we see a green result against the critical control, we need to be confident that it reflects a strong result across the full range of expected performance.

Following this guidance should provide some direction on how to get started on your critical control performance standards.  The ICMM also provides some guidance in its critical control management good practice guide (found here).  If you’re using a platform like RiskView, the system will actually provide you with the template structures to help guide the process.

The next step to take after drafting the performance standard is to validate the design.  Your first audit will validate that the standard works and supports good auditing practice.  So take it out in the field and get started!
