The (In)Completeness of Modelling
I’ve maintained a strong interest in climate change since the debate started. The logic connecting the burning of concentrated biomass from roughly 1.3 billion years of springs, summers, autumns and winters to the warming of the earth seemed, to me, an overworked composter of garden clippings, refreshingly simple. The distance between the voices in this debate has, in contrast, been confusing.
Then along comes something like this fascinating article, introducing a potentially major and overlooked factor in global climate modelling. Reflecting on the few hundred thousand risk bowties stored on our Amazon servers that contain our clients’ best view of their specific process safety, environmental, financial, asset integrity, reputational and many other risks, and upon which they base their critical control verification and audit activities, I wondered if they too might suffer from a major and overlooked factor in their modelling.
Being a whiteboarder, too, I wanted to come up with something that reflected my understanding of how my mental modelling worked and how it might come to the fore when building a bowtie, for example, and came up with this:
It is trying to show the layering of interpretation that exists between the reality of a situation and how that reality gets implemented, in this case, in a bowtie.
– Reality needs first to be processed by our individual mental model, so the effects of bias will inevitably drive some clipping here, some assumptions there.
– Then comes the Context model. I think this is the representation of our mental model, with all its divergences and methodological constraints, as they apply to a specific context. The reason I think this is necessary is that when you apply an idea in different contexts, it inevitably adapts to each of them, and things never turn out quite how we expect. There may be many contexts too, making the model even less precise.
– Then comes the Group model. This is the context-appropriate model that is then popped into the forge with the models of our peers, resulting in something further abstracted, but one that is, hopefully, validated by others.
– Then comes the Implementable model. This is how the group’s model is made manifest so that it can act to implement some change, largely directed at influencing reality, though it now adds biases related to budget, technology, resources, timing and what have you.
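The layering above can be thought of as a pipeline in which each stage re-interprets the previous one and adds its own distortions. Here is a minimal sketch of that idea; the class name, function name and the example biases are my own illustrative choices, not part of any formal method:

```python
from dataclasses import dataclass, field


@dataclass
class Model:
    """One layer of interpretation between reality and implementation."""
    description: str
    biases: list[str] = field(default_factory=list)


def refine(source: Model, layer: str, added_biases: list[str]) -> Model:
    """Each layer reinterprets the previous model and accumulates its biases."""
    return Model(
        description=f"{layer} view of ({source.description})",
        biases=source.biases + added_biases,
    )


# Reality passes through four successive layers of interpretation.
reality = Model("reality")
mental = refine(reality, "mental", ["clipping", "assumptions"])
context = refine(mental, "context", ["methodological constraints"])
group = refine(context, "group", ["peer consensus"])
implementable = refine(group, "implementable", ["budget", "technology", "timing"])

# The implementable model carries every bias picked up along the way.
print(implementable.biases)
```

The point the sketch makes is simply that biases are additive: nothing removes them between layers, which is why the final, implementable model sits so far from the reality it started with.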
I am sure there are better and more succinct models available (feedback loops will be added in any version 2), but the exercise has helped reinforce the value of:
– setting the context comprehensively and aligning expectations within it
– spending time getting everyone on the same page rather than expecting them to just get it (change management can play a critical role here)
– reminding oneself that glorious ideas only evolve into glorious reality with the regular application of knowledge and shared reflection
To finish, I am reminded of Moore’s Law, often somewhat pompously paraphrased as CPU power doubling every two years (Moore’s actual observation was that the number of transistors on a chip doubles roughly every two years). What was really being said was: “Every two years we are going to solve some more of the problems we identified in the last chip so that the next one can go faster.” This is what we should be doing too.