
The turn Pete took…

Now recruiting environmental genii for the H2onestly genie bottle

Hello to my legion of blog readers! I hope you’re both well!

But this time it’s not just bad-but-grimly-honest japes – I’ve also got some news. Since my last post, I’ve quit my job as a government regulator, begun a nerdy PhD with UTS in Sydney, and kicked off a brand new, equally nerdy consultancy, H2onestly.

All of that happened last week, so of course in the second week I’m blogging about it.

What I’m proposing to do for the PhD is to trial collaborative modelling within the planning approval process for a new mine.  If it works, it will give communities a much better understanding of, and a real say in, how and whether major new developments proceed, while at the same time greatly improving the way the proposals are investigated, assessed and designed.  I’ll post more on the PhD moonshot when blog energies permit.

In the madly missing meantime, I will finally be launching the new boutique consultancy I’ve dreamed of for so long, H2onestly, to fund my wilful ways.  The offering is very specific: we want to help enviro-regulatory agencies do their jobs better with big data, better analysis and transparent decision-support platforms.   There are a lot of low-hanging improvements available and needed, and I am hoping that my extensive connections in the Australian environmental regulatory world can help make it happen.

I’m writing this blog because H2onestly will first of all be hunting for paying regulatory improvement projects, and once I’ve found them, I will need help completing them.

In some cases I will be asking for advice on how to approach issues that you might be more familiar with; in others it might be an invitation to complete a project or a task on a freelance basis.  I’m essentially trying to build an ecosystem of clever, ethical environmental analysts who are keen to work in teams to help regulators do their jobs better and make better decisions about how our remaining resources are allocated.  I’ll then broker and package these expert services to the regulatory agencies, most of whom need us but don’t know it yet – success won’t be linear, but it should be fun.

So I’m reaching out now to get an idea of who is at least hypothetically interested in this kind of work and approach. I’m actively looking for an online tribe of freelancers who want to work with me to do clever analysis for the (subjectively) good guys, and get paid for it.  If this sounds like you, please comment below with some details (I think you may have to sign in when you do – I understand technology too little to be sure, but please persist) and I’ll get back to you.

I’m keen to meet super physical and social scientists, legal stalwarts, policy wonks – really, any skilled analysts with expertise in the environmental, community relations and regulatory sectors.

The key qualification that I’m looking for is a demonstrated high standard of ethics, leaning on the side of the planet.

I would also greatly appreciate word of mouth to any like-minded souls, friends or associates who fit the bill – those who alight in your consciousness when you ponder the possibilities. We are looking for kindred professional souls, so please do share this link with discernment.

If there’s enough interest voiced I’ll set up a private chat room where we can develop the ideas some more and hopefully start finding real projects in the coming months.



The H2onestly raison d’être


I am a passionate tech-head trying to solve assessment and regulatory puzzles from a privileged and influential position inside the system.  There are many challenges and regulation of the environment is not currently being done well.  This blog will be all about finding better ways to regulate development, with good science, big data, integrity, transparency and factual honesty.  I welcome your feedback and thoughts.


A pathway from open stakeholder engagement to fair decision-making

I look around the world right now and, with my regulatorily-minded geek-shaded superglasses on, I see a great need for a new, effective and resource-efficient way to “do” stakeholder engagement, on which sound decisions can be based.  If we can make it more focussed, fairer and more efficient, we could reasonably insist on stakeholder engagement being fully incorporated into the planning process.

Communities recognise that new developments and policies will bring change, and that the change is likely to have both positive (jobs, royalties) and negative (often social and environmental) consequences.  My collaborators and I believe proponents can and should work together with regulators and communities to make decisions on whether proposals should be approved, reaching consensus decisions with both legal and social licence to operate.

The advantage of what we are tentatively calling the “Social Licence Assessment Pathway” to developers is that proposals which are approved through this route will be accurately understood and likely to be much better accepted by regulators and the community.  Further, the proponent will gain full clarity over stakeholder concerns, and any approvals granted will include conditions which clearly spell out the regulator’s expectations with agreed performance measures.

It is appropriate that government regulators and all other stakeholders should understand the likely outcomes of a proposed change and be able to weigh them up according to the prevailing laws and policies to which the proposal is subjected.  Emerging technologies enable the planning process to achieve social as well as regulatory licence through making the process transparent and by employing the best collaborative analytical tools and methods.

Collaborative decision support tools have been used with ever-improving success in a variety of resource-planning (https://www.ncbi.nlm.nih.gov/pubmed/26429362, https://www.udall.gov/OurPrograms/Institute/Institute.aspx) situations.  I believe that the time has come for us to incorporate these approaches into the planning process for large, complex and particularly for contentious, resource-sharing or environment-changing projects.

I don’t have all the answers, but some remarkable colleagues and I do have a vision of a truly collaborative and holistic planning process which harnesses the power of modern analytical techniques, collaborative software and the transparency of web-hosted information platforms.  The following is an outline of how our suggested new paradigm for stakeholder engagement might work, including some exciting new innovations (ad alert, techy promotions ahead) which can help to facilitate it.

Our big idea has a number of novel elements.  The first of these is an independent information platform where important proposal data is held and analysis methodologies can be examined. By providing calibrated access to open-source data and open-script analysis tools, stakeholders can quickly confirm that the conceptual understandings make sense, identify the key risks from the proposal, the range of plausible outcomes, and what might drive the possible outcomes towards best and worst-case scenarios.

The second important innovation which would facilitate the Social Licence Pathway is the introduction of collaborative decision-support tools for deep and efficient stakeholder engagement.  For this purpose, we are advocating the use of “management flight simulators” as developed by my colleague Juan Castilla Rho [link to Groundwater Modelling with Stakeholders: Finding the Complexity that Matters, Vol. 55, No. 5–Groundwater–September-October 2017].  These “simulators” incorporate agent-based modelling (ABM) to enable workshop attendees, even remote ones, to see what happens to modelled prediction outcomes if various parameters or model elements are varied.  They have been found to be an extremely powerful means for engaging and informing stakeholders about what parameters are important to understand and measure in relation to their particular concerns [weblink to Juan’s Chilean projects].
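To make the “flight simulator” idea a little more concrete, here is a deliberately tiny Python sketch of what such a tool does under the hood. It is my own toy illustration, not Juan’s software: a handful of farmer “agents” each own a bore of a given depth, a crude water balance responds to a pumping rate that a workshop participant could move with a slider, and the output shows how many bores are predicted to run dry. Every coefficient and number below is invented for illustration only.

```python
import random

random.seed(1)  # reproducible toy run

class FarmerAgent:
    """One stakeholder agent: a farmer whose bore has a fixed depth."""
    def __init__(self, bore_depth_m):
        self.bore_depth_m = bore_depth_m

    def bore_runs_dry(self, water_table_depth_m):
        # the bore fails if the water table falls below the bottom of the bore
        return water_table_depth_m > self.bore_depth_m

def simulate(pumping_ml_per_day, recharge_ml_per_day, farmers, years=20):
    """Very crude water balance driving the agent outcomes (toy coefficients)."""
    water_table_depth_m = 10.0  # starting depth to the water table (m)
    for _ in range(years):
        # pumping deepens the water table, recharge raises it, plus a little noise
        net_change = 0.002 * (pumping_ml_per_day - recharge_ml_per_day)
        water_table_depth_m = max(0.0, water_table_depth_m + net_change + random.gauss(0, 0.2))
    dry_bores = sum(f.bore_runs_dry(water_table_depth_m) for f in farmers)
    return water_table_depth_m, dry_bores

farmers = [FarmerAgent(bore_depth_m=random.uniform(15, 40)) for _ in range(50)]
for pumping in (800, 1200, 2000):  # the "slider" a workshop participant might move
    depth, dry = simulate(pumping, recharge_ml_per_day=1000, farmers=farmers)
    print(f"pumping {pumping} ML/d -> water table {depth:.1f} m deep, {dry}/50 bores dry")
```

The real simulators are of course far richer, but the point is the immediacy: change a parameter, see the consequence, and the room quickly learns which assumptions actually matter.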

We propose to couple these technologies with an agreed team of independent expert intermediaries to design and implement a transparent impact modelling process using the best available technologies.  This is the third and probably the most radical element of the Social Licence Assessment process.  We envisage that this analytical group would be engaged by the regulator following competitive tendering and agreement for funding from the proponent. This independent analysis will facilitate clear and consistent decision-making by the regulators which can be understood and accepted by the proponent and the community.

We showcase below how these exciting technologies might feed into a suitable development assessment process, but in reality the process will not be fully known until it has been lived.  It will be a learning journey, and will need to be modified as each trial project assessment traverses the Social Licence Assessment Pathway.  We are confident that the paradigm is feasible, but we do not underestimate the many complexities and difficulties which will be met as the trial implementation commences.

Phase 1

The government receives a proposal and decides that it is sufficiently important, controversial and suited to assessment through the Social Licence Pathway.  An initial cost estimate for the process is made and funds are sought from the proponent and/or any other suitable source.

Phase 2

The community, represented by interested NGOs and appropriate representatives, sit down with the proponent, regulatory and advisory agencies and the nominated modelling team to decide on:

  1. Terms of reference, scope of analysis (e.g. are cumulative impacts, global warming, socio-economics in or out of scope?)
  2. What baseline data are critical and what are important to be collected and provided on an agreed data platform, accessible to all participants?
  3. What are the key concerns that need to be addressed by the analysis?
  4. What mitigations might reasonably be expected to be effective in containing environmental and/or social consequences?

The purpose of this initial exercise is primarily to consider which parameters will be most important to the analysis and to identify quantitative thresholds of “oh-oh” and “development failure”.  Quantification of thresholds is difficult, but it is very powerful in this context because it allows modelling objectives to be tied to statistical certainty estimations, as advocated by modelling legends John Doherty and Catherine Moore (https://www.gns.cri.nz/Home/Our-Science/Environment-and-Materials/Groundwater/Research-Programmes/Smart-Aquifer-Models-for-Aquifer-Management-SAM/SAM-discussion-paper), and agreed with stakeholders.

We suggest that this could be done efficiently through the “management flight simulators” advocated by Juan Castilla Rho [weblink].  This technology has the capacity to make stakeholder engagement more effective and to identify more rapidly, and numerically, what matters to stakeholders.

Phase 3

An initial set of baseline data is fed by the proponent into the transparent data platform developed by the modelling team. One possible prototype for this is the open-source GeoNode platform, which is apparently a thing now. Examples of similar platforms in production include those developed by my other collaborator David Kennewell at Hydrata to manage groundwater in Nicaragua and Haiti.

At the same time, the modelling team considers the various inputs from Phase 2 and develops a Further Investigation and Analysis/Modelling Plan.  This Plan includes “data worth” analyses to identify critical data points that will be needed to ensure that the analysis meets agreed certainty benchmarks.  The plan is published on the platform and feedback is sought from proponent and stakeholders.  This feedback is considered by the agencies and modelling team and the Plan is amended as appropriate.
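For readers who haven’t met “data worth” analysis before, the flavour of it can be shown in a few lines of Python. This is a hedged, first-order sketch of the general linear-Bayes idea (my own illustration, not the project’s actual method, and every number below is made up): for each candidate monitoring bore, ask how much adding that observation would shrink the uncertainty of the one prediction the decision hangs on.

```python
import numpy as np

def posterior_cov(C_prior, J, obs_var):
    """Linear-Bayes update of parameter covariance for observations with
    sensitivity matrix J (n_obs x n_par) and independent noise variance obs_var."""
    R = obs_var * np.eye(J.shape[0])
    S = J @ C_prior @ J.T + R
    return C_prior - C_prior @ J.T @ np.linalg.solve(S, J @ C_prior)

def predictive_variance(C, y):
    """Variance of the decision-critical prediction with sensitivity vector y."""
    return float(y @ C @ y)

rng = np.random.default_rng(0)
n_par = 6
C_prior = np.diag(rng.uniform(0.5, 2.0, n_par))   # prior parameter uncertainty (toy)
y = rng.normal(size=n_par)                        # sensitivity of the prediction (toy)
candidate_bores = {f"bore_{i}": rng.normal(size=(1, n_par)) for i in range(4)}

base_var = predictive_variance(C_prior, y)
for name, J in candidate_bores.items():
    new_var = predictive_variance(posterior_cov(C_prior, J, obs_var=0.1), y)
    print(f"{name}: predictive variance {base_var:.2f} -> {new_var:.2f} "
          f"({100 * (1 - new_var / base_var):.0f}% reduction)")
```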

Phase 4

One or more sets of agreed additional investigations are completed by the proponent, and the new datasets are fed into the Platform.

Phase 5

The modellers perform the prediction analyses in accordance with the Plan, periodically reporting on progress, consulting with the proponent and stakeholders in a set forum, and amending or extending analyses if required by the forum.

Phase 6

Modelling results are published, including clear statements about the levels of certainty and assumptions made during the modelling.  A final phase of consultation is undertaken to confirm stakeholders’ interpretations of the modelling results.

Phase 7

Responsible agencies make their decision about the proposal, explaining how their decision was arrived at in light of the transparent modelling results.

Phase 8

If the authorities approve the proposed development, the agreed consequence thresholds used to guide the predictive modelling will be translated into approval conditions, and measured and reported in a way that can be progressively ingested and displayed on the information platform created during the assessment.
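As a rough indication of what “progressively ingested and displayed” could look like, here is a minimal Python sketch of agreed thresholds stored as machine-readable approval conditions and checked automatically as monitoring records arrive. The site names, parameters and threshold values are all invented for illustration.

```python
# agreed consequence thresholds, stored as machine-readable approval conditions
APPROVAL_CONDITIONS = {
    "MW01": {"parameter": "depth_to_water_m", "trigger": 12.0, "failure": 18.0},
    "STR3": {"parameter": "stream_flow_ML_d", "trigger": 4.0, "failure": 2.0,
             "lower_is_worse": True},
}

def assess(site_id, value):
    """Classify one monitoring record against the approval conditions."""
    cond = APPROVAL_CONDITIONS[site_id]
    worse = (lambda v, t: v < t) if cond.get("lower_is_worse") else (lambda v, t: v > t)
    if worse(value, cond["failure"]):
        return "FAILURE threshold exceeded - agreed response (up to stop-work) applies"
    if worse(value, cond["trigger"]):
        return "trigger exceeded - investigate and report"
    return "within agreed limits"

# monitoring records as they might arrive on the platform
for site, value in [("MW01", 11.2), ("MW01", 13.5), ("STR3", 1.8)]:
    print(site, value, "->", assess(site, value))
```

The point is that the conditions, the incoming data and the pass/fail logic all live on the same transparent platform that was used for the assessment.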

Looking under a model’s hood

One of the biggest difficulties for a regulator in assessing the accuracy of groundwater model predictions is that the models themselves are generally not made available and in any case cannot be readily examined by non-expert modellers.  Instead, regulators are presented with a report about the model and its outcomes, but these reports often do not accurately convey all of the important information about how the model has been constructed, how key assumptions have been made and what uncertainties remain.  It is too easy and too tempting to hide a model’s weak points or a modeller’s uncertainty about how to simulate a groundwater regime by omission in a report if you are confident that the model will not be examined.

This situation is somewhat improved if the model has been independently peer-reviewed as recommended by the Australian Groundwater Modelling Guidelines (Barnett et al., 2012) and their equivalents in other countries, but as a regulator I have been frequently disappointed by the tick-box quality of these peer reviews.  I’m aware of numerous instances where professional peer reviews have been dutifully undertaken on models that later turned out to be very poor for their purpose.

One solution to this impasse is for the model report to be accompanied by a set of exported model layers and other information in a standardised electronic format that would enable the regulator (or any other well-informed stakeholder) to visualise how the model has been constructed and whether key outputs appear sensible.  The model layers and information would sometimes need to be tailored depending on the decision(s) that the model is seeking to inform, but it is still possible to nominate a “routine” set of model layers to export as a starting point.

As far as I’m aware this hasn’t been attempted, and one reason may be how much format matters.  The format for these export files needs to be sensibly restricted to enable the regulator/stakeholder to import them into GIS-compatible visualisation software, such as xxx.  I have had some experts provide guidance on formatting requirements and will make that available to anyone interested by email, or maybe in a future post if I can get the advice debadged.
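To show the sort of thing I mean, here is a minimal Python sketch of an export that any GIS can open: a metadata .json file plus one model layer written out as GeoJSON cell polygons. The field names and the tiny 3x3 “recharge” layer are placeholders of my own; the real formatting requirements are the separately stipulated ones referred to above.

```python
import json

def layer_to_geojson(values, x0, y0, dx, dy):
    """values: list of rows of cell values; (x0, y0): grid origin; dx, dy: cell sizes.
    Returns a GeoJSON FeatureCollection of square cell polygons carrying the value."""
    features = []
    for r, row in enumerate(values):
        for c, v in enumerate(row):
            x, y = x0 + c * dx, y0 + r * dy
            ring = [[x, y], [x + dx, y], [x + dx, y + dy], [x, y + dy], [x, y]]
            features.append({
                "type": "Feature",
                "geometry": {"type": "Polygon", "coordinates": [ring]},
                "properties": {"row": r, "col": c, "value": v},
            })
    return {"type": "FeatureCollection", "features": features}

# placeholder metadata and a toy 3x3 recharge layer (mm/yr)
metadata = {"model_name": "example_run", "author": "H2onestly",
            "created": "2017-10-01", "layer": "recharge_mm_yr"}
recharge = [[55, 60, 40], [50, 65, 35], [45, 70, 30]]

with open("model_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
with open("recharge_layer.geojson", "w") as f:
    json.dump(layer_to_geojson(recharge, x0=300000, y0=6200000, dx=500, dy=500), f)
```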

An informed observer can quickly learn a lot from the layers and information set out below.  By importing the required layers into visualisation software, one can see for example how aquifer recharge has been spatially interpreted and whether this makes sense in view of the known topography and geology.

The following is a list of my nominations for the “Minimum Model Export” list which I hope one day to get into government regulations – they can of course be used by anyone and can be adjusted for local/regulatory/stakeholder needs.

The list is broken into two tables below; Table 1 contains the metadata and information that can be better communicated outside a graphical environment, whilst Table 2 lists those attributes that are better understood in a spatial context, and are able to be imported into a groundwater visualisation package.


Table 1. Standard model information to be provided in electronic text or spreadsheet files

Property/Parameter – Description
Model Objectives – A clear statement of the purpose of the model, as discussed in the Australian Modelling Guidelines (Barnett et al., 2012).
Model Metadata – Single text file that stores the high-level model metadata as a .json object file in accordance with formatting requirements (separately stipulated).  Any number of attributes may be specified in this file, e.g. model/run name, created date, author etc.
Grid Definition – Multiple text files exported as .json files in accordance with formatting instructions, describing the 3D grid used by the model and its position in space.
Groundwater Pumping/Extraction Rates Applied – Excel table summarising predicted groundwater pumping or other discharge rates applied in the model.
Scenario Properties – Excel table explaining the variables used in each scenario presented in the report.
Receptors – Excel table summarising the locations and types of key receptors considered in the modelling.
Water Budget – Excel table summarising key volumetric flows (water budget components) across the model space at key time slices*, both within the model domain and through external boundaries.


Table 2 presents the layers and parameters which could be routinely exported from the model.

Table 2. Minimum model layers and parameters to be exported from model

Property – Parameter Description
Stratigraphy – 3D representation of key stratigraphic layers (with labels based on Geoscience Australia stratigraphic names).
Hydraulic Properties – Key hydraulic properties generated following calibration, specifically vertical and horizontal hydraulic conductivity (Kv & Kh) and storativity (S), at key time slices*.
Rainfall & Evapotranspiration – A map summarising annual rainfall and evapotranspiration rates, if areally varied.
Groundwater Recharge – Map layer showing inferred recharge rates and spatio-temporal variability (if any applied) for the shallowest aquifer for each main time step*.
Borehole Data – Stratigraphic or lithological logs and temporal water level measurements of key monitoring wells (see Tab C for format requirements).
Structural Geology – Major faults, shear zones or joint sets incorporated into the model (if any applied).
Topography – Surface topography (presented as a digital elevation model).
Boundary Conditions – Graphical representation of the boundary conditions applied to each edge of the model.
Surface Water Features – Stream/lake bottom elevations and heads, streambed conductance.
Surface Water Interaction – Fluxes at the base of stream/lake/drain features at key time slices*.
Receptors – Map layer showing the location of main receptors, with a legend denoting receptor types.
Predicted Heads – Separate contoured layers showing groundwater (and surface water if relevant) heads at pre-mining and current conditions, and as predicted at key time slices*.
Certainty – Level of model certainty/confidence associated with spatial data.  As described in Section 8.5.7 and Figure 8-6 of the Australian Groundwater Modelling Guidelines (Barnett et al., 2012), one option is to present certainty as a colour density or by varying the transparency of a layer to indicate the level of uncertainty.

* Note 

Where parameters are requested at key time slices, the default set of events for which those parameters should be provided is as follows:

  1. Predicted conditions at commencement of the activity
  2. Predicted conditions at “peak impact” for that parameter
  3. Predicted conditions at completion of the activity
  4. Predicted conditions once steady-state water conditions have returned following project closure

The Ferré DIRECTion

So, in the last blog I was talking up Ty Ferré and his cohort of bright young groundwater programmers, who have completely taken over the September/October 2017 issue of the National Groundwater Association’s journal “Groundwater”.

In this post I’d like to start highlighting some of the clever, radical and creative ideas laid out in this remarkable journal edition – remarkable in how these papers are all passionately devoted to making the world better through what their writers know, i.e. how to better analyse environmental information and better understand stakeholder priorities when assessing complex environmental situations.  Unless you have a subscription to Groundwater, however, you can’t freely access them, and that’s a crying shame as they’re so good.  Fear not, good reader, I’ll paraphrase some for you!

We’ll start with the orchestrator, the remarkable Professor Ty Ferré.  Ty, if I may call him that as we’ve barely met, sets out to draw new maps between data, models and decision-making.  Quite a lot of what he presents in this paper builds on the DIRECT paper he co-authored in 2015 (https://darcylecture2016.files.wordpress.com/2015/08/100-kikuchi-et-al-2015.pdf), but this new paper is pitched at a much higher level, meaning it can be understood by most scientists and motivated others.

Prof Ferré starts by stating the important role that science, and scientific modelling, can and should play in development or resource decision-making – this is what we technocrats can offer, but we must recognise our responsibilities to society and the environment, not just to our clients.

To facilitate the stakeholder engagement, Ty makes the interesting suggestion that each stakeholder should have the opportunity to have their position heard and converted to an admittedly biased “advocacy model” – a model that seeks to codify each key stakeholder’s concerns and fears with respect to what might happen if this particular development or action were to be approved.  For example, a farmer concerned that their well might run dry would have the opportunity to have the modelling team focus on all the ways the development might affect water levels in their bore in worst-case scenarios.

At the same time, the developer would ask the modelers to codify his/her biggest concerns, e.g. the proposed dewatering program might be too ineffective/expensive/slow or whatever.  These biased “advocacy models” could be tested and compared against an ensemble of other, perhaps more neutral but realistically possible, models – what Ty refers to as a team of rival models, which help the modeler to creatively explore uncertainties that include their being wrong about one or more conceptual aspects.

Let me quote a seminal paragraph and hope not to run foul of copyright nonsense:

“Advocacy models represent stakeholders’ initial interests and concerns within the model ensemble. Therefore, it is critical that scientists avoid the temptation to build models that discount stakeholders’ concerns. Rather, we should act as honest brokers (Pielke, 2007) of these competing narratives by formulating the most scientifically defensible representations of stakeholders’ concerns. In fact, the act of constructing advocacy models encourages scientists to sample the most consequential regions of model space. In other words, seeking advocacy models nudges hydrologists to abandon the false premise that a single scientific model, no matter how well it matches the existing data, will override stakeholders’ concerns.”  Ferre, 2017

I’ll be returning in future to that theme of modelers playing the role of honest brokers.  I encourage all regulatory systems in the world to tilt their levers towards encouraging modelers and other advisers towards that ethic, for example by levying assessment fees and having the regulator engage specialist peer review directly. This is exactly the model being used by the NSW State Government in its assessment of Santos’ CSG project at Narrabri and the Hume Coal project in the Southern Highlands.

Ty canvasses a lot of other big ideas in his paper, and notes how the other papers in this journal explore a range of synergistic themes.  Hopefully I’ll find time soon to write about those other papers, many of which have also blown my tiny regulatory mind.

Has anyone out there been similarly pleased to see this particular Groundwater issue and to know there are these kinds of people engaged for the earth’s protection?  Please, I’m keen to hear.

The VERY BIG METAPHOR modelling news

OMG!

Okay, I know that not everyone’s going to be as excited about this as I am, but a VERY BIG METAPHOR (insert tsunami, earthquake, magmatic explosion or news that Taylor Swift is dating Kim Jong Un) just happened in the world of using models to better inform resource decision-making.

Sorry, I need to go back a bit to try to build up some sense of occasion.  Ty Ferré is a Professor at the mathematically prestigious University of Arizona.  Last year I had the privilege of attending one of the 123 lectures he presented on a world tour (the fabulous Darcy Lectures, TED for hydrogeologists – https://darcylecture2016.wordpress.com/2015/08/21/references/) about a new modelling approach he and his students have developed, called DIRECT: Discrimination-Inference to Reduce Expected Costs Technique.  I know, sexy, right?

The first thing that wowed me when I went to Ty’s Darcy Lecture was the following haiku:

Our data are sparse;
our models are incomplete;
but, we must decide.

Amazeballs.  That is everything this blog is about, but better said in 12 words.

Ty is a humble and lovely giant in this field, and what he and his co-authors presented in their 2015 paper (https://darcylecture2016.files.wordpress.com/2015/08/100-kikuchi-et-al-2015.pdf) is absolutely massive.  The downside is that it’s a pretty complicated approach with a bunch of massively difficult sub-components, but for those who can even half-follow it and potentially have the resources to attempt it, it’s ground-breaking.  It involves using potentially large numbers of “rival models” to enable the uncertainties arising from the modelling to be investigated and quantified using statistical and analytical tools.  Another key aspect is that the ensemble of models can be used to identify which parameters matter (the ones that discriminate are the ones to watch), and which investigations would deliver the most bang for the buck in supporting the decision-making.
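To give a feel for the rival-models idea without the heavy machinery, here is a toy Python sketch of my own (not the authors’ code, and the little analytical “model” and all its numbers are invented): build a small ensemble of rival models spanning plausible parameter choices, look at the spread in the prediction that actually matters, and rank candidate monitoring bores by how strongly the rivals disagree there – a crude stand-in for the discrimination step.

```python
import numpy as np

rng = np.random.default_rng(42)

def rival_model(transmissivity, recharge):
    """One 'rival': a made-up analytical stand-in that predicts drawdown (m) at four
    candidate monitoring bores and at the decision-critical receptor."""
    bore_distances = np.array([200.0, 500.0, 1500.0, 4000.0])   # metres from the mine
    drawdown_at_bores = 5.0 / transmissivity * np.exp(-bore_distances / (recharge * 1e4))
    drawdown_at_receptor = 5.0 / transmissivity * np.exp(-2500.0 / (recharge * 1e4))
    return drawdown_at_bores, drawdown_at_receptor

# a small ensemble of rivals spanning plausible (and advocacy) parameter choices
ensemble = []
for _ in range(30):
    t = rng.uniform(0.5, 3.0)     # transmissivity-like parameter
    r = rng.uniform(0.05, 0.3)    # recharge-like parameter
    ensemble.append(rival_model(t, r))

bore_preds = np.array([bores for bores, _ in ensemble])     # (n_models, n_bores)
receptor_preds = np.array([rec for _, rec in ensemble])

print(f"receptor drawdown across rivals: {receptor_preds.min():.2f} "
      f"to {receptor_preds.max():.2f} m")
# crude discrimination score: where do the rival models disagree the most?
spreads = bore_preds.std(axis=0)
for i, spread in enumerate(spreads):
    print(f"candidate bore {i}: ensemble spread {spread:.2f} m")
print(f"most discriminating candidate bore: {int(spreads.argmax())}")
```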

The DIRECT approach really should be an option for the assessment of all large underground developments, so that we would truly have a good understanding of what will really happen if they are approved.  Unfortunately, it’s a long way from the current reality, where some developers are still griping that they even have to pay for one model, let alone a hundred, and with stakeholder consultation on it all to boot.  We’ve got some hard yards to travel before we reach such a fabulous benchmark – modelling that gives regulators the information they need to make good decisions – but it gives us a north point to head towards.

So, that was last year.  This year Ty has wowed me again.  Sometime over the last 18 months, no doubt whilst on and after his Darcy Lectures, he’s been gathering smart young hydrogeologists to his multi-model DIRECT flag.  So here at last is the Big Reveal!

The September/October 2017 issue of the National Groundwater Association’s journal Groundwater has been hijacked by Ty and his modelling zealots – Ty has written the unifying front paper and there are another 17 papers which explore aspects of Ty’s themes.  The key talking point throughout is: how do we make modelling give the information that regulators and other stakeholders need to make good decisions?  My favorite subject in the world at the moment, hence all the breathless gush.

Not convinced that the earth has shattered with the news?   Not even the biggest reveal since this morning’s Coco Pops box?  Well OK, maybe there aren’t as many regulatorily-analytically nerdy types out there as I’d like to think, but believe me, if you’ve bothered to read this far, it’s a big step forward in making better informed decisions about the resources we have left.

If you are as excited as I am, and that’s the sad and lonely hope of most bloggers I guess, then Ty’s paper is at

http://onlinelibrary.wiley.com/doi/10.1111/gwat.2017.55.issue-5/issuetoc;jsessionid=FF342A544D3F45407F762426009F0D0C.f03t03.

Unfortunately, you need to subscribe to the NGWA to access them, as I did when I found out about this special edition.  For those less thrilled, committed or arsed, my next post (The Ferré DIRECTion) will aim to summarise a bunch of the ideas which ooze from their pages…

How technology can help us assess a complex world

I am in an interesting place at the moment.  I make a living trying to solve assessment and regulatory puzzles from a privileged and influential position inside the system, hence my anonymity.  I work for an agency that’s trying to preserve some beautiful catchments from underground mining which is spreading beneath them, and my main job is to work out how much harm this is causing.  I am constantly trying to work out how a currently ineffective monitoring, assessment and regulatory system can be improved so that the true extent of impacts can be understood and predicted, and I’m hoping that might interest others and start conversations.  Hence the blog, and my great curiosity about how others think it can be done better too.

Just to be clear, I believe that coal mining or any other development which benefits mankind is OK (acknowledging that my logical ethos breaks down where greenhouse effects make coal irredeemably unsustainable), provided the pros and cons have been properly understood and weighed in accordance with appropriate laws of the land.  And that impacts on the environment are honestly portrayed and understood – we’re the only species that gets to vote so we have to be fair to the others.  I think it’s inevitable and right (or not too wrong) that we do what we do to help ourselves prosper – sustainably, healthily and ever more comfortably.

So I’m truly not anti-development or a revolutionary,  but I devoutly believe that development should proceed only when all the relevant information and best available knowledge  and science are employed to know what will happen if that development does go ahead.

One thing I’ve discovered in this headspace is that technology is massively improving our ability to understand and predict development impacts, but the regulators and regulatory systems we use are not yet agile enough to harness the new know-how.  We regulators need to catch up to do our jobs as well as our communities and environment deserve.

Let me open up the toolbox and give you some examples of the ways that new open-source, big-data, numerical-modelling and a raft of other tools are enabling us to understand the complexity of nature and how it will be affected by particular actions and interventions:

  • Open Source Environmental Data and Assessment – There are so many parts to this, and so many of them have huge potential to radically improve the way resources are shared. If the stars align and we the liberal government technocrats get to set this up on behalf of the communities we serve, all environmental monitoring data gathered by industry (proponents and monitored operations),  government (all levels) and citizen scientists with smartphones will ultimately be made available and transparent.  It will all go up on government servers and will be accessible to all.

    This means for example that when a proposal is made, the full set of supporting information and data will be available for examination by the regulators and, if they are so inclined, the community.  Modelled predictions made by the proponents can then be independently checked and, if sound, used to nominate specific criteria at specific monitoring locations which should not, and others which must not, be exceeded if the impacts are to stay within the agreed limits.
    Many new regulations in the financial sector (a field termed RegTech), addressing taxation avoidance and money-laundering for example, are using this approach to enable regulators to identify and monitor key transactions in real time, and through this transparency are keeping banks and players honest(ish).
    The opportunity to expand an open-source approach to environmental regulation now awaits.  True, there have been partially successful attempts to use planning conditions and regulatory decision-support systems in this way by requiring development impacts to be limited to predictions; the difference here will be in the quality of the analysis and the transparency of the policing systems if the development is approved.  If specific criteria are exceeded, specific responses – up to and including stopping and rehabilitating the development – will be enforced.

  • Agent-based modelling to try out the most effective regulatory strategies – set up a simulated world with agreed knowledge, assumptions and rules and then watch as the most efficient way to preserve resources whilst maximising benefits is revealed by the simulated agents. You can even turn these models into interactive games to encourage community participation and feedback.
  • Open-source analysis tools – Use the GitHub, MatLab and other techy ecosystem approaches to finding the best analytical code and algorithms by developing and sharing them on the web. These big data, open-source technologies are literally advancing at the speed of thought, and advances in analysis of environmental data are constantly progressing and evolving.
  • Simple-as-possible modelling – Although it may seem counter-intuitive, the best regulations are built on the simplest appropriate analysis. Technical advances are always driving us towards ever greater complexity, yet the best regulatory systems are simple enough that the proponent, regulators and communities can clearly understand the analysis and science underpinning them.  In many cases, for example, it may be better to apply simple statistical analysis to discern patterns and trends than to use “black-box” models which give you an easy answer of uncheckable accuracy (see the sketch after this list).
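Here is the kind of simple, checkable analysis I mean – a hedged sketch with invented monitoring data, not a real bore: an ordinary least-squares trend on depth-to-water readings, with a rough uncertainty on the slope that anyone can reproduce on the back of an envelope.

```python
import numpy as np

years = np.arange(2008, 2018)
depth_to_water = np.array([12.1, 12.3, 12.2, 12.6, 12.9,
                           12.8, 13.3, 13.5, 13.4, 13.9])  # metres, invented readings

# ordinary least squares: depth = slope * (year - mean year) + intercept
A = np.vstack([years - years.mean(), np.ones_like(years)]).T
coefs, residuals, _, _ = np.linalg.lstsq(A, depth_to_water, rcond=None)
slope = coefs[0]

# rough standard error on the slope from the residual variance
resid_var = residuals[0] / (len(years) - 2)
slope_se = np.sqrt(resid_var / np.sum((years - years.mean()) ** 2))

print(f"water table deepening by {slope:.2f} +/- {2 * slope_se:.2f} m/yr (~95% range)")
```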

The power of these kinds of data-enabled smart regulatory system tools is immense due to their agility, simplicity and transparency.  The real challenge now is for the regulators to catch up to the new technical capabilities and work out how to apply them in the most effective way, so much easier preached than done.

The trouble with groundwater models

In the modern world where a development, let’s say a coal mine, extends down below the water-table, specialists almost universally use modeling software to try to guess what will happen when it does.

Hydrogeologists nod their heads and make canapé jokes about how much these models can be manipulated to say whatever their makers want them to say, yet they are virtually all that is used to predict groundwater impacts.  The only exceptions are smaller developments, where a hydrogeologist, engineer or plumber (depending on the budget and how much the regulators care) uses expert knowledge, simpler analytical models and tea runes to make their guesses of impact.

The much more sophisticated, modern numerical models are all based on a brilliant program originally developed by the United States Geological Survey, called MODFLOW.  These models represent the ground and groundwater conditions as little cells which transmit simulated water volumes depending on their programmed nature and on what water their neighboring cells are transmitting to them. The cells are subdivided into groupings representing strata, landscapes, surface water and subsurface elements, in a way that simplistically represents what the modeler understands of what happens in the earth.  Then they run various scenarios through the model to simulate the impact of the mine or whatever it is, and from this predictions are made.  There’s another important aspect about how models are “calibrated”, but I’ll get back to that.
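For the curious, the cell idea can be boiled down to a few dozen lines of Python. This is my own minimal illustration of the principle, not MODFLOW itself: each cell’s head settles towards the average of its neighbours, adjusted for any pumping in that cell, on a small grid with a fixed-head “river” along one edge. All values are invented and the physics is stripped to the bone.

```python
import numpy as np

nrows, ncols = 20, 20
T = 50.0                       # transmissivity (m2/day), uniform for simplicity
Q = -500.0                     # pumping from one cell (m3/day, negative = extraction)
well = (10, 12)

head = np.full((nrows, ncols), 20.0)      # starting heads (m)
fixed = np.zeros((nrows, ncols), dtype=bool)
fixed[:, 0] = True                        # fixed-head "river" along the west edge

# Gauss-Seidel-style iteration: each cell's head moves towards the average of its
# neighbours, shifted by any pumping stress in that cell (edges act as no-flow)
for _ in range(2000):
    for r in range(nrows):
        for c in range(ncols):
            if fixed[r, c]:
                continue
            nbrs = [head[rr, cc]
                    for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= rr < nrows and 0 <= cc < ncols]
            q = Q if (r, c) == well else 0.0
            head[r, c] = (sum(nbrs) + q / T) / len(nbrs)

print(f"head at the river boundary: {head[10, 0]:.1f} m")
print(f"head at the pumping cell:   {head[well]:.1f} m  "
      f"(drawdown {20.0 - head[well]:.1f} m)")
```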

In unbiased and clever hands, these models are wonderful tools, which along with expert knowledge help us accurately predict the impacts of the proposed development.  In this lovely world, the predictions would be presented to the regulators (on behalf of the government and the people they serve) truthfully and fairly, in a way that specifically addresses all the key risks and reports the uncertainties of the predictions.  This is not, sadly, our world.

In our modern world, the government agency responsible for assessing the groundwater bits of a proposed development will require that a model be prepared in order to predict the impacts.  In most jurisdictions there are standards set for how models are to be developed for this purpose. A model will then be “built” by specialist groundwater consultancies, hopefully with the benefit of having completed some investigations to allow subsurface conditions to be approximated.  The model will invariably be paid for by the developer/proponent, and will usually be presented in a report as an attachment to the Environmental Impact Statement.

In most states, somewhere in an airless basement of one of the agencies will sit a government hydrogeologist, either singly or in a small herd.  They will be given the model report and other information if they’re lucky, and asked to decide (usually within days) if its predictions are true, or at least plausible, and whether, in the agency’s eyes and according to the planning laws of the land, the proposal impacts are acceptable.

Dear reader, I am one of these privileged and underlit groundwater regulators, and I am here to tell you that these models are currently dangerous, and are being used in ways that vary from offering a pretty good prediction to committing environmental fraud.

In my possibly jaundiced view, there are three things badly wrong with modern numerical groundwater models:

  1. The models themselves are almost completely inaccessible to the regulators. They are normally presented as reports, describing the key features of the model and what it predicted.   Even if the model itself is demanded by the agency, it’s very unlikely that the regulator will have the skill (I don’t) or time to look under the bonnet; to question how the model has been built, what assumptions it really makes and whether prediction scenarios could be better formulated.
  2. The model is not designed to answer the question(s) that the regulator or community really wants to know. Groundwater modellers focus their craft on getting the model to represent ground and groundwater conditions in a way that doesn’t defy one or more laws of nature, and to make their modeled world consistent with a selection of field observations, the process of “history matching” we call model calibration.  As the guru of model calibration Dr John Doherty has said in many forums, a model should not be designed just to represent the world and an impact scenario.  Each model should be very specifically designed to predict the likelihood of a negative consequence (what the regulator or community is really worried about, such as pollution reaching a water supply or groundwater levels falling below a farmer’s well), and to do it in a way that there is only a very small risk that the prediction will turn out to be wrong.
  3. The way that modeled uncertainties are typically presented is untruthful and frequently misleading. It is not possible to know the earth’s subsurface completely as we must interpret it from boreholes, outcrops and the collective knowledge of what has been studied elsewhere.  Even if we did have perfect knowledge of ground and groundwater conditions, models must make simplifying assumptions if they are to be able to make predictions within the hours or days allotted to them.  These and other issues mean that the models and modellers must make a very large number of judgement calls and assumptions about how they represent the groundwater regime.  Despite all of this being known, most modeling reports refer only to simplistic measures of uncertainty and leave the important remainder unsaid.  In truth, the real uncertainty involved in a model prediction cannot normally be measured as incorrect conceptualisations, measurement and structural model noise and the way that the model attempts to internally “fix” parameters which are preventing it from reaching “convergence” are rarely or never known.  Nevertheless, much more clarity could be provided about the uncertainty of a prediction, but it is simply not in the modeller’s or developer’s interest to disclose it.

There are simple solutions to the above issues, but these solutions will take more effort by modelers and won’t happen until the regulators insist that they do.  I would be happy to share my thoughts on how this could be done with anyone, and welcome any thoughts you might have as well.

Certainty through failure analysis – a revolutionary open-source regulatory approach

A big opportunity is arising from the internet of everything: a better way of running the world, via open-source and transparent big-data analysis approaches to regulating developments which affect resources and the environment.

Setting up fair and effective regulatory systems that properly assess a development and then, if its benefits to society and the environment outweigh its impacts and the project is fairly approved, efficiently police it, is actually a very difficult business.

The good news is that a host of new capabilities are here to assist, using data from large monitoring programs and analysing them with emerging big-data and open-source tools.  These approaches enable much more accurate and efficient understandings to be rapidly reached to inform approval decisions and post-approval policing for resource projects.  The opportunity now arises to develop new regulatory approaches to harness these powerful new capabilities.

The bad news is that many of the complexities of developing regulatory systems simply can’t be bypassed.  Limited resources can’t be fairly shared unless the resource limits are known, potential impacts are understood, and all stakeholders get to have their say about what impacts are acceptable and to understand the views of others.  Technology can greatly assist, for example in making community consultation more efficient, but it cannot bypass the societal and administrative elements of regulatory systems.  Understanding the nature of the potential impacts and how these would be viewed by stakeholders, and understanding the social, economic and environmental values to the community, are two examples of difficulties which must be traversed in developing clever new rules to share scarce resources fairly.

The move from old-fashioned, parliament-driven and administratively complex regulatory systems to flexible, data-centric modern approaches is ultimately inevitable because it is more effective, fairer and more cost efficient.  If this imminent regulatory revolution can be led by the open-source community, it will also serve the public good by engaging appropriately with those who will be affected by the development (stakeholders) and be fully transparent so that decisions are made on evidence rather than political expediency.

There are many, many people working on different parts of this regulatory proto-elephant.  Let me draw your attention to two of them, as they are working near what I think might turn out to be the trunk, or at least the front(ish)…

Many environmental specialists will know of Dr John Doherty or, if not him, then his brainchild – the PEST software, which is now used almost universally to calibrate complex environmental models.  Dr Doherty is a hydrogeologist with an incredible mind for numerical modelling, but he’s not happy about where it’s going. That mastery has driven him to call out the modern numerical model’s big paradox – that these are wonderful tools which have been hijacked by specialist practitioners to blind and hide, rather than reveal, likely impacts.  These models may well be needed because they can help answer questions about how a complex environment might behave if a certain action is taken or development is imposed.  But rather than being used to inform decision makers and the communities they represent, the models are being presented as if they are the end-products themselves, and are not answering the right questions.  This has in most cases arisen from limited competence rather than deliberate intent, but the result needs fixing regardless.

So Dr Doherty and his colleague Dr Catherine Moore are leading a quiet revolution. They’ve been peeling back the tentacles of model complexity to distil a strategy for setting up models to assist, rather than hinder, proponents and regulators to make the decisions they need to make.  The theoretical and empirical basis of their new approach are available for scrutiny at https://www.gns.cri.nz/gns/content/download/12756/67966/file/Simple%20is%20beautifulv3.pdf.

There are lots of big ideas and much detail in their many papers (see the references in the document linked above), so let me try to pick out two key points from Doherty and Moore’s evolving grand plan to analyse and regulate environmental and development impacts:

  • With some effort, environmental models can be used to implement the Scientific Method. This is done by posing a “bad thing” hypothesis and asking the model to reject it on the basis of its predicted incompatibility with information about the system that is encapsulated in the model.  In other words, the first step in making an assessment about whether a development’s impacts will be acceptable is to define a threshold of unacceptability, the bad thing that stakeholders don’t want to happen if the development proceeds.  For example, if a proposed mine will cause flows in a particular stream to drop below a point where a particular ecosystem will be impaired, an unacceptability flow threshold can be agreed with stakeholders and set as the definition of development failure.
  • The next step is to set up the predictive analysis, using the simplest adequate model or other analytical technique, to answer the question of whether the bad thing/failure will occur and, if not, with what level of certainty the prediction is made.  This leads to an outcome where the proponent is required to clearly set out the likelihood that the bad thing will happen, and the certainty with which the prediction is made.  This provides the decision-makers and the community behind them with the ideal tool to understand what is most likely to happen if the development is approved, and what the risk is that the bad thing might still happen.  If the analyses are done using the approach advocated by Drs Doherty and Moore, i.e. set up purely to inform environmental decision-making, they become the best available tool.  A toy numerical sketch of this framing follows below.
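As promised, a toy numerical sketch of the framing (my numbers, not Doherty and Moore’s code): given an ensemble of predictive runs for low-season stream flow, estimate the probability that flow drops below the failure threshold agreed with stakeholders, and report it alongside the prediction.

```python
import numpy as np

rng = np.random.default_rng(7)

# agreed with stakeholders: below this low-season flow, the dependent ecosystem
# is considered impaired ("the bad thing")
FAILURE_THRESHOLD_ML_DAY = 2.0

# stand-in for an ensemble of predictive runs spanning parameter and conceptual
# uncertainty (a real study would generate these with the calibrated model ensemble)
predicted_low_flows = rng.lognormal(mean=np.log(3.5), sigma=0.4, size=2000)

p_failure = np.mean(predicted_low_flows < FAILURE_THRESHOLD_ML_DAY)
flow_5th = np.percentile(predicted_low_flows, 5)

print(f"probability the 'bad thing' occurs: {p_failure:.1%}")
print(f"5th-percentile predicted low flow:  {flow_5th:.2f} ML/day")
```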

I believe that the Doherty and Moore approach can and will revolutionise environmental modelling analysis, making it fair and transparent and using the most suitable tools for the job.  If all such assessments start with a negotiated definition of failure – or, in most real-world situations, a set of thresholds defining failure and levels of concern beneath that – we suddenly have a new, solid basis for setting specifications for any model that is built to support environmental decision-making.

I know it doesn’t sound very revolutionary or sexy, but compared to the current paradigm of regulators who can’t understand whether their concerns have been addressed and obstructive or non-definitive analysis of these concerns by modellers, it’s absolutely profound.