Understanding Climate Risk

Science, policy and decision-making

But is it just red noise?


I gave a seminar yesterday at the ARC Centre of Excellence for Climate System Science at the University of New South Wales. Thanks to Alvin Stone and Andrea Taschetto for organising it. It's the first time I've had the opportunity to go through the entire 'step change' hypothesis of how the climate changes, the theoretical background, the structural models developed from it and how the testing was set up, before showing a whole raft of test results.

One of the questions I got at the end, which also comes up quite often in the literature, was about the potential cause of the step changes in temperature data. It arose from a question about whether we had tested the step change model with artificial data that had been 'reddened', that is, made dependent on the previous values. Such time series can have long-term persistence and contain a number of different quasi-periodic timescales, so they do not conform to a single statistical model. This line of questioning gets at whether a step or nonlinear response in a time series needs to have an underlying cause that can be linked to an external source, or whether it can be the result of random variations (see the paper by Rodionov for a more technical description). I gave a somewhat flip answer: because there is real energy in the system we are assessing (the climate system), whether a rapid shift is due to red noise or not matters less than understanding what that shift means for risk.
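
For readers unfamiliar with 'reddened' data, here is a minimal sketch (in Python with numpy; illustrative only, not part of the original analysis) of how such a series can be generated with a simple AR(1) process, where each value depends partly on the one before it:

    import numpy as np

    def red_noise(n, phi=0.9, sigma=1.0, seed=None):
        """AR(1) 'red' noise: x[t] = phi * x[t-1] + white noise."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
        return x

    # A strongly autocorrelated series can wander and show apparent 'shifts'
    # without any external forcing.
    series = red_noise(500, phi=0.9, seed=42)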

One of the big problems with complex data is that no test is foolproof and a specific test can only treat one aspect of that data. The bivariate test that we use is most sensitive to underlying trends and less so to autocorrelated or lagged data. Combining stochastic data and autocorrelation can produce step changes that will register as statistically significant with the bivariate test (and other change-point tests), even with quite small shifts. We also find that when generating artificial data with lagged autocorrelation, trends, stochastic variability and steps, the detected shifts often do not line up with the forced shifts underlying the data. This is quite plausible, because a step or short run forced by red noise and another underlying step may combine to create a displaced step somewhere in between. Statistical significance therefore matters less than whether a shift itself matters. However, discarding such noise in order to find 'predictable' changes in trend is not at all productive.
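
A rough sketch of this kind of artificial data, using a crude mean-difference change-point locator as a stand-in for the bivariate test (the function names and parameters here are illustrative assumptions, not the method used in the paper), shows how a detected shift need not coincide with the imposed step:

    import numpy as np

    def synthetic_series(n=200, step_at=100, step_size=0.5, trend=0.002,
                         phi=0.7, sigma=0.3, seed=None):
        """Artificial series: linear trend + forced step + AR(1) 'red' noise."""
        rng = np.random.default_rng(seed)
        noise = np.zeros(n)
        for t in range(1, n):
            noise[t] = phi * noise[t - 1] + rng.normal(0.0, sigma)
        signal = trend * np.arange(n) + step_size * (np.arange(n) >= step_at)
        return signal + noise

    def naive_shift_point(x, margin=10):
        """Split point that maximises the difference in segment means.
        A stand-in for a formal change-point test, not the bivariate test itself."""
        best_diff, best_t = 0.0, None
        for t in range(margin, len(x) - margin):
            diff = abs(x[:t].mean() - x[t:].mean())
            if diff > best_diff:
                best_diff, best_t = diff, t
        return best_t

    x = synthetic_series(seed=1)
    print("imposed step at t = 100, detected shift near t =", naive_shift_point(x))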

A major issue with attribution in climatology is that signal and noise are treated differently, with signal seen as 'good' and noise as 'bad'. There are two related points to this:

  1. If we are looking at a physical system, then physical constraints will govern what a statistical model can represent. For example, a highly autocorrelated random walk can, in theory, wander a long way from a baseline, but in a system with energy limits it will not be able to do so (a small sketch after this list illustrates the contrast). This is the case in systems with long-term persistence, such as those represented by hydro-climatological time series. (Unstable systems, however, can sometimes 'walk' into a new state.)
  2. An externally driven response associated with a signal that can be projected with some degree of model skill is seen as a 'good' outcome, whereas a similar phenomenon driven by red noise, and therefore considered random, is seen as a 'bad' outcome. However, if that response is associated with a degree of risk, values should not be attached to whether it is predictable or not, but to the values at risk if that response occurs. For example, a step change in a physical system, whether created by external drivers or internal variability, will result in a degree of risk. Both deterministic and random cases can be represented by scenarios. Aspects of risk should not be omitted from decision support just because they are assumed to be noise.
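
As a sketch of the first point (again illustrative only, not from the original work), here is a contrast between an unconstrained random walk and one whose steps are damped as it approaches a fixed bound, a crude stand-in for a system with finite energy:

    import numpy as np

    rng = np.random.default_rng(7)
    n, bound = 2000, 5.0

    free = np.zeros(n)      # unconstrained random walk
    limited = np.zeros(n)   # walk damped near an 'energy' bound

    for t in range(1, n):
        step = rng.normal(0.0, 0.2)
        free[t] = free[t - 1] + step
        # Damp excursions as the walk approaches the bound (crude energy constraint).
        damping = max(0.0, 1.0 - abs(limited[t - 1]) / bound)
        limited[t] = limited[t - 1] + step * damping

    print("free walk range:   ", free.min(), free.max())
    print("limited walk range:", limited.min(), limited.max())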

For the first point, statistical inference by itself is inadequate to infer a potential outcome; a structural model of how the system works, preferably backed by theory, is also required. However, statistical inference that explores all plausible alternatives is needed to ensure that a proposition can be severely tested.

For the second point, Koutsoyiannis (2010), in his article "A random walk on water", argues that the 'good' and 'evil' attached to signal and noise models are counter-productive when assessing complex systems. This is a caution about scientific values and about understanding the complexity of a system without rejecting uncertainty. I fully agree, but as to the rest of his paper, he is quite pessimistic about the role of determinism in understanding climate and hydrology. I am less pessimistic and think we will find some determinism in unpacking the so-called noise, if complex behaviour can be called deterministic.


Written by Roger Jones

May 29, 2016 at 8:35 pm
