Certainty through failure analysis – a revolutionary open-source regulatory approach

A big opportunity is arising from the internet of everything: a better way of running the world, via open-source and transparent big-data analysis approaches to regulating developments that affect resources and the environment.

Setting up fair and effective regulatory systems is actually a very difficult business: they must properly assess a development and then, if its benefits to society and the environment outweigh its impacts and the project is fairly approved, police it efficiently.

The good news is that a host of new capabilities are here to assist, using data from large monitoring programs and analysing them with emerging big-data and open-source tools.  These approaches enable much more accurate and efficient understanding to be reached rapidly, informing approval decisions and post-approval policing for resource projects.  The opportunity now arises to develop new regulatory approaches that harness these powerful new capabilities.

The bad news is that many of the complexities of developing regulatory systems simply can’t be bypassed.  Limited resources can’t be fairly shared unless the resource limits are known, the potential impacts are understood, and all stakeholders have had their say about what impacts are acceptable and understand the views of others.  Technology can greatly assist, for example by making community consultation more efficient, but it cannot bypass the societal and administrative elements of regulatory systems.  Understanding the nature of the potential impacts and how stakeholders would view them, and understanding the social, economic and environmental values the community holds, are two examples of difficulties that must be worked through in developing clever new rules to share scarce resources fairly.

The move from old-fashioned, parliament-driven and administratively complex regulatory systems to flexible, data-centric modern approaches is ultimately inevitable because it is more effective, fairer and more cost-efficient.  If this imminent regulatory revolution can be led by the open-source community, it will also serve the public good by engaging appropriately with those who will be affected by the development (stakeholders) and by being fully transparent, so that decisions are made on evidence rather than political expediency.

There are many, many people working on different parts of this regulatory proto-elephant.  Let me draw your attention to two of them, as they are working near what I think might turn out to be the trunk, or at least the front(ish)…

Many environmental specialists will know of Dr John Doherty or, if not him, then his brainchild: the PEST model, which is now used almost universally to calibrate complex environmental models.  Dr Doherty is a hydrogeologist with an incredible mind for numerical modelling, but he’s not happy about where it’s going. His view, as a master of the field, has driven him to call out the big paradox of modern numerical models: they are wonderful tools that have been hijacked by specialist practitioners to blind and hide, rather than reveal, likely impacts.  These models may well be needed because they can help answer questions about how a complex environment might behave if a certain action is taken or development is imposed.  But rather than being used to inform decision makers and the communities they represent, the models are being presented as if they were the end-products themselves, and they are not answering the right questions.  This has in most cases arisen from limited competence rather than deliberate intent, but the result needs fixing regardless.

So Dr Doherty and his colleague Dr Catherine Moore are leading a quiet revolution. They’ve been peeling back the tentacles of model complexity to distil a strategy for setting up models that assist, rather than hinder, proponents and regulators in making the decisions they need to make.  The theoretical and empirical basis of their new approach is available for scrutiny at https://www.gns.cri.nz/gns/content/download/12756/67966/file/Simple%20is%20beautifulv3.pdf.

There are lots of big ideas and much detail in their many papers (see refs in the linked document above), so let me try to pick out two key points from Doherty and Moore’s evolving grand plan to analyse and regulate environmental and development impacts:

  • With some effort, environmental models can be used to implement the Scientific Method. This is done by posing a “bad thing” hypothesis and asking the model to reject it on the basis of its predicted incompatibility with information about the system that is encapsulated in the model.  In other words, the first step in making an assessment about whether a development’s impacts will be acceptable is to define a threshold of unacceptability, the bad thing that stakeholders don’t want to happen if the development proceeds.  For example, if a proposed mine will cause flows in a particular stream to drop below a point where a particular ecosystem will be impaired, an unacceptability flow threshold can be agreed with stakeholders and set as the definition of development failure.
  • The next step is to set up the predictive analysis, using the simplest adequate model or other analytical technique, to answer the question of whether the bad thing (failure) will occur and, if not, with what level of certainty that prediction is made.  This leads to an outcome where the proponent is required to clearly set out the likelihood that the bad thing will happen, and the certainty with which the prediction is made.  This gives the decision-makers and the community behind them the ideal tool to understand what is most likely to happen if the development is approved, and what the risk is that the bad thing might still happen.  If the analyses are done using the approach advocated by Drs Doherty and Moore, i.e. set up purely to inform environmental decision-making, they become the best available tool.  (A minimal sketch of this failure-probability calculation follows this list.)
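
To make these two steps concrete, here is a minimal, hypothetical Python sketch, not code from Doherty and Moore’s papers: it takes an ensemble of calibration-constrained model predictions of minimum stream flow, compares them with an agreed failure threshold, and reports both the estimated probability of the bad thing and how tightly the prediction is constrained.  The flow values, threshold and ensemble size are all invented placeholders.

```python
# Hypothetical sketch: estimate the probability that predicted stream flow
# falls below a negotiated failure threshold, given an ensemble of model runs.
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder ensemble: each value is the minimum stream flow (ML/day) predicted
# by one calibration-constrained parameter realisation of the groundwater model.
predicted_min_flows = rng.normal(loc=12.0, scale=3.0, size=500)

# Failure threshold negotiated with stakeholders (the agreed "bad thing").
FAILURE_THRESHOLD_ML_DAY = 8.0

# Probability of failure = fraction of realisations breaching the threshold.
p_failure = (predicted_min_flows < FAILURE_THRESHOLD_ML_DAY).mean()

# A simple measure of how well-constrained the prediction is:
# a 90% predictive interval taken from the ensemble percentiles.
low, high = np.percentile(predicted_min_flows, [5, 95])

print(f"Probability that flow drops below {FAILURE_THRESHOLD_ML_DAY} ML/day: {p_failure:.1%}")
print(f"90% predictive interval for minimum flow: {low:.1f} to {high:.1f} ML/day")
```

In a real assessment the ensemble would come from a formal predictive uncertainty analysis of the calibrated model (the kind of analysis PEST-style tools support), not from random numbers; the point is that what gets handed to the regulator is a probability of failure and a predictive interval, rather than a single "best" prediction.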

I believe that the Doherty and Moore approach can and will revolutionise environmental modelling analysis, making it fair, transparent and based on the most suitable tools for the job.  If all such assessments start with a negotiated definition of failure, or in most real-world situations a set of thresholds defining failure and levels of concern beneath it, we suddenly have a solid new basis for setting specifications for any model that is built to support environmental decision-making.
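
To illustrate what such a tiered specification might look like, here is another hypothetical sketch; the threshold names and numbers are invented, and the ensemble is again just placeholder random numbers standing in for a real predictive analysis.

```python
# Hypothetical sketch of a tiered failure specification: the negotiated failure
# threshold plus lower "levels of concern", each checked against a predictive
# ensemble of minimum stream flows (ML/day).
import numpy as np

# Placeholder ensemble standing in for calibration-constrained model predictions.
predicted_min_flows = np.random.default_rng(seed=1).normal(loc=12.0, scale=3.0, size=500)

THRESHOLDS_ML_DAY = {
    "failure": 8.0,        # the agreed unacceptability threshold
    "high concern": 10.0,  # e.g. trigger for intensified monitoring
    "watch": 12.0,         # e.g. early-warning level
}

for label, threshold in THRESHOLDS_ML_DAY.items():
    p = (predicted_min_flows < threshold).mean()
    print(f"P(minimum flow < {threshold:.1f} ML/day) [{label}]: {p:.1%}")
```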

I know it doesn’t sound very revolutionary or sexy, but compared to the current paradigm, in which regulators can’t tell whether their concerns have been addressed and modellers respond with obstructive or non-definitive analyses of those concerns, it’s absolutely profound.


