How to use performance standards – 3 helpful insights

Critical control verification relies on good performance standards to be effective.  Without criteria by which to judge the placement and effectiveness of controls, we cannot provide assurance of the risk reduction those controls deliver.  How do we build good performance standards, and how can we plan ahead to get the most value from our assurance activities?

This is the first in a short series of posts on building effective performance standards.  Here we’ll be focusing on three important insights that can be used to build smarter performance standards:

  • Interplay between performance standards and major accident events.
  • Least effort for best results.
  • Different and complementary types of evidence.

Critical Control Verification and Performance standards

Before looking at these three insights, we need to lay some groundwork.  Critical control verification is the practice of assuring that critical controls are in place and effective in reducing risk.  The goal is to verify the actual status of the control “in the wild”, and provide actionable intelligence to decision-makers where effectiveness is greater or less than expected.

It’s helpful to start by taking a step back and applying the five whys approach (the “so what?” technique also works well) to the problem.

  • We need to build performance standards for our critical control verifications.
  • Why? Because we need to be able to measure the performance of our critical controls in the field at a point in time.
  • Why? Because we need to be able to track and review the performance of our critical controls over time.
  • Why? Because we need to be able to detect and correct poor performance of our critical controls.
  • Why? Because our critical controls make the biggest difference to preventing or mitigating the impact of major accident events (materially unwanted events).
  • Why? Because those major accident events are a serious threat to the safety of our workers and the viability of our business.

So we’re fundamentally concerned with critical control performance because of the way it impacts the business and its people.  This is helpful to bear in mind, because it is all too easy to get fixated on the minutiae of the control instead of focusing on the essential parts.

Interplay between performance standards and major accident events

Our first stop in building smarter performance standards is to revisit the circumstances in which our controls are critical.  To take a simplistic example, a fall arrest system as a control has a certain set of properties.  The system needs to include a full body harness rated to the right level of force, it needs to be used by competent workers with at least two workers present, and so on.

But those aspects of effectiveness need to be qualified in the context of the risk that we are concerned with.  In this instance, we are probably concerned with the risk of a fatal fall from height.  In that context, which aspects of performance are most important to preventing the initiation or escalation of the event?

Also, what is the scope of that control in that scenario? We might automatically assume that using anchor points rated to the right level of force is an essential performance criterion for the fall arrest system.  But in the scope of the major accident event bowtie, there might be another control relating to design and installation of the anchor points.  By focusing on the control rather than the context in which it is used, we may lose sight of what is most important.

This flows logically into our next insight.

Least effort for best results

This factor is relevant from two perspectives: firstly from the perspective of narrowing the focus of our verification to the most important elements, and secondly from the perspective of maximising the value derived from limited resources.

Here we are looking to define the most important elements of critical control performance without compromising on quality.  In other words: if we can draw the same confident conclusion about the effectiveness of our control in 5 questions instead of 10, that is a potentially valuable outcome.

Let’s take the example of traffic management as a critical control.  Note that this is just an example taken in a particular context.

We could easily identify half a dozen elements of performance around traffic management, including:

  • The extent to which the traffic management plan takes account of all relevant traffic movements and hazards.
  • The extent to which traffic movements are isolated from one another across space or time.
  • The design of traffic interactions using procedural systems or infrastructure.
  • The definition of speed limits for different traffic corridors.
  • The installation of signage to mark speed or traffic flow.
  • The consultation and communication of traffic management protocols to the workforce.

At first glance, all of these elements of performance appear equally essential to the effectiveness of the control, and all should be included in the performance standard.  But what if we qualify things by defining our major accident event?

Suppose the event we are concerned with is a vehicle collision at a shared facility served by visiting drivers.  This changes the complexion of our critical control, because we are now dealing with a group of drivers that may change from day to day.  We have less ability to mandate work methods or control the competency level of the workforce, because we have a transient workforce in a shared facility.  If we have limited resources to dedicate to control verification, which elements of performance tell us the most about whether the control is effective?  We would probably focus more on the plan, the signage and communication.
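To make the trade-off concrete, here is a minimal sketch in Python.  The elements, scores and budget are all hypothetical illustrations, not values from any real standard: each verification element gets a rough information value and an effort cost, and we select the checks that tell us the most per unit of effort within a limited budget.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    information_value: int  # how much the check tells us (1-5, hypothetical)
    effort: int             # cost to verify (1-5, hypothetical)

elements = [
    Element("Plan covers all traffic movements", 5, 2),
    Element("Movements isolated in space/time", 4, 4),
    Element("Speed limits defined per corridor", 2, 1),
    Element("Signage installed and legible", 4, 1),
    Element("Protocols communicated to workforce", 5, 2),
]

# Rank by information gained per unit of verification effort,
# then take the checks that fit within the available budget.
ranked = sorted(elements, key=lambda e: e.information_value / e.effort,
                reverse=True)
budget = 5
selected = []
for e in ranked:
    if e.effort <= budget:
        selected.append(e.name)
        budget -= e.effort

print(selected)
```

With these invented scores, the selection lands on the plan, the signage and the communication protocols, which mirrors the reasoning above: same confidence, fewer checks.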

Focusing on the least effort for best results gives us best use of resources, without compromising on the confidence of the conclusions that we can draw from the data we gather.  Another way we can strengthen those conclusions is to diversify the evidence base that we are relying on.

Different and complementary types of evidence

Ask anyone who has spent time in law enforcement or regulatory roles: you need good evidence to support your conclusions.  The same applies to drawing conclusions about the performance of critical controls.

Evidence is like an alloy: a combination of different types of evidence makes for stronger conclusions, in the same way that the right combination of metals makes for a stronger material.  Many health and safety professionals do this unconsciously.  To design good performance standards, we need to apply the principle consciously.

We should be aiming to collect evidence that provides different angles against the same elements of performance, so that we can be more confident in the conclusions we draw about performance.

To take a simple example, we could quiz one worker on their knowledge of the correct procedure, and observe another carrying out that procedure, in order to draw a stronger conclusion about whether it is being applied properly.  We could also review the paperwork completed by a third worker performing the same task to see whether they followed the same protocol.
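The effect of combining independent evidence types can be sketched numerically.  The miss rates below are invented for illustration: each represents the chance that an evidence type would look fine even though the control has actually failed.  Assuming the sources are independent, the chance that all of them mislead us at once shrinks multiplicatively.

```python
# Hypothetical chance that each evidence type would (wrongly) look fine
# even though the control is actually ineffective.
miss_rates = {
    "worker interview": 0.30,
    "field observation": 0.20,
    "paperwork review": 0.40,
}

def residual_doubt(sources):
    """Chance that every selected source passes despite a failed control,
    assuming the evidence sources are independent of one another."""
    p = 1.0
    for source in sources:
        p *= miss_rates[source]
    return p

# One source alone leaves substantial doubt; three together leave very little.
print(round(residual_doubt(["worker interview"]), 3))
print(round(residual_doubt(["worker interview", "field observation",
                            "paperwork review"]), 3))
```

With a single interview we could be misled roughly 30% of the time; with all three complementary sources, that drops to about 2%.  This is the alloy effect in miniature.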

Ideally we want to diversify our evidence base without increasing the resource burden on the audit team (and the auditees).  This is where good planning can identify areas for consolidation.  For example, there might be 5 questions that need to be asked of frontline workers, and we could design the performance standard to group those questions together.
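That consolidation step can be expressed as a simple grouping.  The questions and audiences below are hypothetical: each question from the performance standard is tagged with who can answer it, and grouping by audience means each person is interviewed once rather than once per element.

```python
from collections import defaultdict

# Hypothetical verification questions drawn from several elements of the
# performance standard, each tagged with the audience who can answer it.
questions = [
    ("frontline worker", "Describe the traffic rules for your work area."),
    ("supervisor",       "How are deviations from the plan escalated?"),
    ("frontline worker", "Where are the pedestrian exclusion zones?"),
    ("frontline worker", "What is the speed limit in the loading bay?"),
    ("supervisor",       "When was the traffic plan last reviewed?"),
]

# Group by audience so each person is interviewed once, not once per element.
by_audience = defaultdict(list)
for audience, question in questions:
    by_audience[audience].append(question)

for audience, qs in by_audience.items():
    print(f"{audience}: {len(qs)} questions in one session")
```

The same idea scales to observations and document reviews: plan the standard so that everything needed from one source is gathered in a single visit.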

Applying this third principle creates a winning triangle for performance standard design: focusing on context (interplay with the MAE), least effort for best return, and complementary forms of evidence.  The overall result should be a performance standard that consistently empowers the risk team to confidently draw conclusions about critical control performance.

Our next post will home in on the detail of constructing performance standards.  For more on the reporting side of performance standards, check out our article on live critical control management reporting (here).
