Setting up Base Controls – the comprehensive guide to risk control setup – Part 2

Part 1 of this series of articles looked at the concepts and general approach to smart base control setup.  If you’ve made it here, you’re ready to take those ideas and start applying them to your own real-world risk controls.

We start by using the context and background information recorded against each control to understand how the controls actually work.

Read the context and usage of the control carefully

Sometimes controls are wearing a cunning disguise.  Part 1 of this series used the example of a pre-start check risk control, which can sometimes be defined in terms of the procedure that relates to it.  The procedure itself (with all its other provisions) isn’t really the fundamental control.  It’s the specific function of the pre-start check that does the work.  The only way to get past that disguise is to understand the context and usage of the control fully.  Sometimes the naming of controls can be misleading in a way that the original risk assessment team didn’t intend.

The pre-start example given above is a good one because some users like to link back to the relevant procedure for the naming of the control.  The problem here is that procedures are often written to be as inclusive as possible, meaning that there are sections duplicated across different procedures.  For example, there will often be different procedures for starting up equipment (return to service / recommissioning), using cranes for lifting tasks, and working in confined spaces.  But it is likely that each procedure will include a section about assessing the risks prior to starting work.  If the risk assessment process used is fundamentally the same, we have a common control relating to identification of hazards and management of risk by frontline workers.  But other facilitators may well have named each of these controls using the specific procedure for that job.

If this sounds familiar, then there will be some detective work involved in setting up your base controls.  You need to push past the procedural references and determine what factors are really in play.  Sometimes you can infer this from the wording of control names, comments, or assessments within RiskView.  Other times you may need to consult with people who know the risk well to distil the controls down.

The potential benefit here is extracting discrete areas of performance that would otherwise be lumped together in an overall procedure.  If the job risk assessment is audited only as a small part of a large audit of the overall procedure, poor performance of job risk assessments across multiple tasks can go unseen when the other areas of the procedure are performing well.  Or if training is audited only against specific (limited) areas of competency, we could miss the bigger trend showing that our whole training system is failing.  So we might need to pull controls out of an existing control that is too big, or we might need to combine existing controls that are too small.

To do this we need to consider the criteria we are going to use to identify points of commonality.  Control intent, or the way that the control is expected to reduce risk, is very useful for this purpose.  Extract the key points that define how the control is supposed to work, and use those to align risk controls to a base control.  The control effectiveness and hierarchy of controls selection can then be used to validate that alignment.

Breaking down the intended control performance then allows us to rebuild the intended performance for our new base control.
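
To make that alignment step a little more concrete, here is a minimal sketch (in Python, purely illustrative) of how existing risk controls might be screened against candidate base controls using intent keywords, with the hierarchy of controls selection as a cross-check.  The field names, keyword lists and hierarchy labels below are assumptions made for the example; they are not RiskView fields or functionality.

    # Illustrative only: screen an existing risk control against candidate Base Controls
    # using intent keywords, and flag any mismatch against the expected hierarchy of
    # controls selection. Names, keywords and fields are assumptions for this example.

    CANDIDATE_BASE_CONTROLS = {
        "Job Risk Assessment": {
            "intent_keywords": ["pre-start", "take 5", "risk assessment", "identify hazards"],
            "expected_hierarchy": "Administrative",
        },
        "Fall Arrest & Restraint Systems": {
            "intent_keywords": ["harness", "anchor point", "fall arrest", "restraint"],
            "expected_hierarchy": "Engineering",
        },
    }

    def suggest_base_control(risk_control: dict) -> list[tuple[str, str]]:
        """Return candidate Base Controls whose intent keywords appear in the
        risk control's name or comments, noting any hierarchy mismatch."""
        text = f"{risk_control['name']} {risk_control.get('comments', '')}".lower()
        suggestions = []
        for base_name, base in CANDIDATE_BASE_CONTROLS.items():
            if any(keyword in text for keyword in base["intent_keywords"]):
                note = ("hierarchy matches"
                        if risk_control.get("hierarchy") == base["expected_hierarchy"]
                        else "check hierarchy - possible mismatch")
                suggestions.append((base_name, note))
        return suggestions

    # Example: a control named after a procedure, but whose intent is really a
    # pre-start risk assessment.
    print(suggest_base_control({
        "name": "Confined Space Entry Procedure - pre-start risk assessment",
        "hierarchy": "Administrative",
    }))

A screen like this only suggests candidates; the final alignment still relies on people who know the risk well.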

The effort you put in here is a win-win.  If your detective work finds that each procedure is in fact a distinct control that deserves to be a base control, at least you now have some evidence on record to explain that decision in the future.  If you find opportunities to recognise a root critical control that is common to several risks, you’ve identified an opportunity for improvement.  Either result is a win.

Keep things consistent

It is crucial not to lose sight of the originally intended level of granularity once you have identified a few potential candidate base controls.

The benefit of a consistent level of granularity is that it makes assurance data more directly comparable (as well as giving auditors a consistent level of detail to operate at).

For example, the same control could be represented at several different levels of granularity: anywhere from a broad “Height Safety Systems” or “Fire Detection & Automatic Suppression Systems” down to something as specific as “VESDA fire sensors” or individual fall arrest anchor points.

Any one of these levels could be about right for a Base Control.  For example, in a large organisation operating across many risk domains, “Height Safety Systems” might be as granular as the business wants to go.  In a small gas plant, we might need the finer detail of “VESDA fire sensors” rather than the broader “Fire Detection & Automatic Suppression Systems”.

Problems can arise in the final setup if some base controls are more granular than others.  In the above example, it isn’t ideal to have a “Height Safety Systems” base control working alongside a “VESDA fire sensors” base control.  For our audit team, it would mean that some audits require only a mobile device, while others require a tape measure and a step-ladder.  Our dashboard reporting would then be displaying two different types of entity: our VESDA fire sensor is likely an engineering control because it works autonomously, without human input, while our fall arrest anchor points are captured as administrative controls because the “Height Safety Systems” base control relates to a broad human system rather than just the engineered fall prevention devices.

Achieving this consistency is a balancing act that requires bringing together multiple data points to make good decisions.  This can include:

  • The context and usage of individual risk controls, which will give us an indication of whether our new Base Controls will fit with our real-world risk.
  • The intent or intended performance of the control, which defines the fundamental attributes of how it reduces risk.
  • The criteria we would use to assess the performance of the control, which helps us understand what the real-world performance of the control looks like.
  • The hierarchy of controls selection, which can be used to validate the alignment suggested by the other criteria.

These criteria help us look at the data from different angles, and then validate or challenge the reasoning behind our proposed Base Controls.

If this is all starting to sound a little too complicated, let’s get back to a real-world example that shows the approach in action.

The overall process: one specific example

We were working with a client in the mining space who had thousands of individual risk controls.  Their plan was to transition rapidly to critical control verification, which meant it was important to have around 20-40 critical Base Controls to work with.  Their bowtie analysis had produced a lot of variation in the granularity of controls, which made it harder to see where there were opportunities to build good Base Controls.

Here’s the approach we took.

  • Use the advanced analysis pane to extract a list of existing risk controls, existing base controls, and contextual data such as risk event, comments, hierarchy of controls, etc.
  • Sort, filter and work through the data. Identify initial points of commonality by tagging or adding cells of data to help with filtering.  For example, items relating to controls for working at height initially got tagged “height safety” until we could narrow down the level of granularity we were shooting for.
  • Refine some of the initial tagging and start building the list of potential new Base Controls. We used Pivot Tables to look at the data from different angles and challenge our initial decisions (a scripted sketch of the tagging and pivot steps follows this list).  Some initial decisions, like having “Edge Protection” as a Base Control, did not make the cut.
  • Document the new Base Controls with details of the intended performance (including criteria for assessing performance), plus relevant data such as hierarchy of controls selection.
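
For those who prefer to script the analysis rather than work entirely in spreadsheets, the tagging and pivot-table steps above can be sketched in a few lines of Python.  This is purely illustrative: the export file name, column names and keyword rules are assumptions made for the example, and in practice the tagging was an iterative, manual exercise informed by the contextual data.

    # Illustrative only: summarise a risk control export after a first-pass tagging,
    # to challenge the proposed Base Controls from different angles.
    # Column names and the file name are assumptions, not a documented export format.
    import pandas as pd

    controls = pd.read_csv("risk_control_export.csv")   # hypothetical export file

    # Tagging step: a rough keyword-based first pass, refined by hand afterwards.
    def first_pass_tag(name: str) -> str:
        name = name.lower()
        if "harness" in name or "anchor" in name or "edge protection" in name:
            return "height safety"
        if "vesda" in name or "sprinkler" in name or "fire" in name:
            return "fire detection & suppression"
        return "unallocated"

    controls["proposed_base_control"] = controls["control_name"].apply(first_pass_tag)

    # Pivot-table step: how many risk controls would each proposed Base Control absorb,
    # and what mix of hierarchy of controls selections does it contain?
    summary = pd.pivot_table(
        controls,
        index="proposed_base_control",
        columns="hierarchy_of_controls",
        values="control_name",
        aggfunc="count",
        fill_value=0,
    )
    print(summary)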

Control effectiveness was a bit trickier to standardise because we found that some incarnations of the Base Control were stronger than others.  We validated with the client that this was accurate, and then we used the “Applicability Factor” on risk controls to reflect the fact that real-world uses of the control did not always achieve the maximum effectiveness possible for that control.  A good example is barricading: when the barricading is a permanent, impact-resistant bollard, the effectiveness is quite strong.  When it is a movable traffic barrier with signage, it relies too heavily on human compliance to be as strong.
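
As a rough illustration of the idea (and not RiskView’s actual calculation), the applicability factor can be thought of as scaling back the Base Control’s maximum effectiveness for a particular real-world use.  The numbers below are invented for the example.

    # Illustrative only: how an applicability factor might scale a Base Control's
    # maximum effectiveness for a specific use. The values and the simple
    # multiplication are assumptions for the example, not RiskView's calculation.
    BARRICADING_MAX_EFFECTIVENESS = 0.9   # assumed maximum for the Base Control

    applicability = {
        "permanent impact-resistant bollard": 1.0,   # achieves close to full effectiveness
        "movable traffic barrier with signage": 0.5, # heavily reliant on human compliance
    }

    for use, factor in applicability.items():
        print(f"{use}: effective strength ~ {BARRICADING_MAX_EFFECTIVENESS * factor:.2f}")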

Some of the Base Controls we ended up with included:

  • Barricading
  • Emergency Response
  • Fall Arrest & Restraint Systems
  • Fire Detection & Suppression Systems
  • Health Surveillance
  • Training & Competency System
  • Vehicle Safety Standards

Although the naming has some variety (to reflect the type of terminology they were used to), the granularity is set at around the same level.  The focus ended up being on processes and systems, with the occasional engineering control based around the mass implementation of engineered equipment.

If the client had been focusing solely on health and safety risks, we probably would have had more ground-level controls (e.g. Rollover Protection Systems).

Setting up Base Controls is a task that is unique to each business.  Some businesses will have a wealth of knowledge stored in their RiskView site, which makes the setup more of a data analysis task.  Other businesses will be starting from scratch and this will require greater subject matter expertise to get the base controls set up at the right level.

If you need more specific guidance on setting up Base Controls for your business, get in touch with our implementations team (implementation@meercat.com.au) or your regular point of contact at Meercat.
