Understanding Climate Risk

Science, policy and decision-making

End of the hiatus

with 4 comments

Understanding Climate Risk has been in something of a hiatus, or a pause, for the last couple of years due to your host being almost fully submerged, but maybe it's time to rise to the surface and get things going again.

This is for a few reasons. One is that research, especially public good research and especially in CSIRO, is under serious threat in Australia. We have a government that touts innovation but wilfully ignores the role of the generation of underpinning knowledge in fuelling such innovation. They are interested only in commercial innovation – public-good innovation is not only being ignored, it is being excluded from processes such as the Cooperative Research Centre bids currently under way. Having sustainable cities, catchments and ecosystems is impossible without public good research and social innovation, with funding that extends across the sciences, the humanities and the arts. With an election going on, these harms need to be publicised.

Globally, research is also at something of an impasse, which is also reflected in Australia. There is more research being published than ever before, but its quality is declining. This is because, to preserve profits, tight page and word limits ensure that the research being published either:

  • rediscovers what was already known (because the spread of knowledge production has become so broad that the left hand has no memory of what the right hand has written), or
  • makes tiny advances that can be described in 3,000 words, preferably, and certainly no more than 6,000.

This means that big, new ideas that are sprawly, contentious and kick out at established paradigms are discouraged. Word limits and formulaic publishing models mean that the author(s) can get out only one part of their argument, making it easy for reviewers to shoot down. Self-interest is talking here, of course.

The purchaser-provider model of research funding, and funding that is shrinking in real terms, have made the whole research environment much more conservative. Research proposals need to present little risk (it helps if the outcomes are known in advance). Many of the unpaid roles of the researcher, including providing a free editorial and reviewing workforce for profit-making publishers, are going unsupported, because supporting grants directly now occupies about 150% of any researcher's time. The increase in the number of people seeking research funds as more people become qualified, and high rates of casualization, mean that the career path for younger researchers is becoming increasingly difficult. Researchers need to be accountable, but widget-like production schedules tied to a limited number of metrics don't cut it.

Solutions to all these issues need to be explored and discussed. Perhaps new models are needed, but I'm not sure what they might be. Either way, they are worth airing.

So what’s the big idea?

Regular readers will be familiar with this …

The hypothesis is that on decadal timescales climate change is abrupt rather than gradual. Rather than warming on a year-to-year basis, relatively stable periods are punctuated by warming (cooling) events. Under continuous external forcing, over the longer-term these abrupt changes will form a complex trend. Therefore, if climate-related risks are characterised on the assumption that change is gradual, those risks will be underdetermined.

Put simply, warming and related changes occur as steps, rather than trends. Over the long term (50+ years) they conform to a trend (as long as no boundary conditions, such as collapsing ice sheets, are breached).
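The relationship between steps and the long-term trend can be sketched in a few lines of code. This is a hypothetical illustration, not an analysis of real data: the step times, step size and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 60-year "temperature" record that warms only in abrupt steps
# (one regime shift roughly every 15 years), plus weather noise.
years = np.arange(60)
step_times = [15, 30, 45]     # assumed shift years (illustrative only)
step_size = 0.3               # assumed warming per shift, degrees C
signal = step_size * sum(years >= t for t in step_times)
series = signal + rng.normal(0, 0.1, years.size)

# Fitting a straight line through the stepped series still recovers a
# long-term warming rate comparable to the mean rate of the steps.
slope = np.polyfit(years, series, 1)[0]
print(f"fitted long-term trend: {slope:.3f} C/yr")
```

The point of the sketch is that a step-wise process and a gradual process can produce near-identical long-term trend estimates, which is why characterising risk from the trend alone can mislead.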

The last couple of years' hiatus in blogging has been taken up with writing up the hypothesis, gathering the evidence and assessing the history and philosophy of science. One question that has driven this work is: "Why are we using mechanistic statistical models to analyse complex system behaviour?" This effort started off as one long paper intended for Frontiers, but the controversy surrounding the Lewandowsky et al. (2013) paper in Frontiers in Psychology led to my resignation as associate editor.

That one paper expanded and became four. They were interlocking, so I didn't want to split them up. I asked the editors of one journal whether they would consider all four. They said maybe two, and the others separately. That idea was shelved temporarily, which, due to circumstances, looks to be permanent. A couple are possibly now redundant.

Meanwhile, I had been joined by Jim Ricketts, doing a PhD on testing the proposition. The first step of his PhD was to take the bivariate test of Maronna and Yohai, which had been run manually to detect step changes, and build an objective, rule-based program that could process multiple time series. That took far longer than anticipated but, when finally completed (pdf), exceeded expectations.
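The actual program is far more sophisticated, but the basic shape of an automated step-change scan can be sketched. The function below is a much-simplified stand-in, not the Maronna-Yohai bivariate test itself (which compares a candidate series against a reference series); it simply scans for the split point that maximises a two-sample statistic, on invented data.

```python
import numpy as np

def detect_step(series):
    """Scan all candidate split points and return the one that maximises a
    two-sample mean-difference statistic. Illustrative only: a stand-in for
    an objective, rule-based step-change detector, not Maronna-Yohai."""
    n = len(series)
    best_t, best_stat = None, 0.0
    for t in range(5, n - 5):                 # skip very short end segments
        a, b = series[:t], series[t:]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        stat = abs(b.mean() - a.mean()) / se
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Synthetic series with a known 0.5-unit step at index 30 (assumed values).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 30), rng.normal(0.5, 0.1, 30)])
t, stat = detect_step(x)
print(f"detected step near index {t}")
```

Running such a rule over many series, rather than eyeballing each one, is what turns a manual test into an objective procedure.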

Once the multi-step Maronna-Yohai model was working, getting papers out that described step changes for observed and modelled warming became a priority. All along, there were also other projects that needed to be attended to (as my long-suffering colleagues will attest).

Two papers, one on observations and the other on models, were submitted in November last year to a prominent journal. They were returned within a week with a note saying the journal had already published a lot of papers on climate trends. The editors had either not read them properly or not understood the argument (which we thought pretty clear). The papers were then rewritten, expanded slightly and submitted to a slightly less prominent journal in early December. In January, they came back, rejected with prejudice. One reviewer dismissed the test we use, pretty much saying "it's rubbish", and came back with a statistically dubious justification of trends (this from an excellent statistician, but he has skin in the game, so is not a disinterested party). The science in the paper was completely ignored, presumably because it could not be true.

If the bivariate test used to detect the step changes is rubbish, all the homogenization of climate records based on its use should be recalled and redone. As should all such adjustments made using similar tests, because they get common results in locating inhomogeneities in observed climate data. There is nothing wrong with the test and we know where its weak points are. We also know where the weak points of trend analysis are – and they are almost identical, because they are based on the same underlying assumptions of serial independence.
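That shared weak point can be demonstrated directly: when residuals are serially correlated, a trend test that assumes independence rejects a true "no trend" null far more often than its nominal rate. A minimal simulation (the AR(1) coefficient of 0.8 and series length are assumptions, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi, trials = 100, 0.8, 500
t_axis = np.arange(n)
sxx = ((t_axis - t_axis.mean()) ** 2).sum()

rejections = 0
for _ in range(trials):
    # Generate an AR(1) series with no trend at all.
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    # Ordinary least-squares trend test that (wrongly) assumes independence.
    slope = ((t_axis - t_axis.mean()) * (x - x.mean())).sum() / sxx
    resid = x - x.mean() - slope * (t_axis - t_axis.mean())
    se = np.sqrt((resid ** 2).sum() / (n - 2) / sxx)
    if abs(slope / se) > 1.98:   # ~5% two-sided critical value for n=100
        rejections += 1

print(f"{100 * rejections / trials:.0f}% false trend detections at nominal 5%")
```

The false-detection rate comes out many times the nominal 5%, which is exactly why serial dependence undermines trend analysis and step detection alike when it is ignored.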

By this stage, there were six papers, all unpublished. They covered the hypothesis, the evidence, the history and philosophy of the gradual-change paradigm, the implications if climate change is not gradual, and analyses of step changes in observed and modelled temperature. Given the reception the last two papers received, it was clear that if they were released separately as shorter papers with partial arguments, they would be given short shrift. Also, by this stage the whole thing had become Ahab's white whale, an obsession that wouldn't rest until it was harpooned.

Meanwhile, papers were appearing in the literature, written by researchers who knew what we were proposing, saying that climate has to be trend-like and anything else is impossible. So even before our papers have been published, they have an active opposition. It doesn't help that one of the main contrarian arguments against climate change theory is that step-like changes disprove greenhouse theory – they don't, but some in the climate research community have created an opposing movement dedicated to defending the trend. Other papers pointing out step-like changes in climate time series are also appearing, mainly in fringe journals. These are generally ignored, because everyone knows step changes must be due to climate variability.


How change point analysis views global warming (with apologies to Skeptical Science)

By this time, it had become clear that only papers with a comprehensive argument would survive review, and any such paper would be way longer than the word limit of most journals. This is the 'extraordinary claims require extraordinary evidence' principle, which we agree with, but which the current publishing model hardly allows. We have plans to get around this, but it takes a lot of preparation.

To bolster the argument underpinning our proposition another philosophy paper was drafted. It has two main themes:

  1. The first theme argues that fighting the climate wars using global mean temperature is a bad idea: the core greenhouse theory is sound for other reasons (radiative physics), and global mean temperature can neither prove nor disprove it. Climate wars fought on this basis are, scientifically, a waste of time – there may be other reasons to fight them, but they belong to metascience, politics and risk, not science.
  2. The second theme takes up the idea of severe testing as devised by Deborah Mayo and developed by Mayo and Spanos, adapting it to the issue of whether climate change and variability are independent of each other (producing gradual warming) or whether they interact (producing nonlinear warming).

A hypothesis H passes a severe test T with data x0 if,

(S-1) x0 accords with H, (for a suitable notion of accordance) and

(S-2) with very high probability, test T would have produced a result that accords less well with H than x0 does, if H were false or incorrect.

Equivalently, (S-2) can be stated:

(S-2)*: with very low probability, test T would have produced a result that accords as well as or better with H than x0 does, if H were false or incorrect.
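As a toy illustration of (S-2), severity can be estimated by simulation: posit a hypothesis H, then ask how probable it is that the test would have accorded as well with H had H been false. Every number below is invented for the example; the accordance measure is simply the before/after mean difference.

```python
import numpy as np

# H: a 60-point series contains a 0.5-unit upward step at its midpoint.
# Not-H: the series is flat, with only noise. (All values assumed.)
rng = np.random.default_rng(7)
n_half, sigma = 30, 0.2
observed_diff = 0.5            # assumed observed before/after mean difference

# Simulate the test under not-H: flat series, same noise level, and measure
# how often the before/after difference accords with H as well as x0 does.
sims = rng.normal(0, sigma, (10_000, 2 * n_half))
sim_diff = sims[:, n_half:].mean(axis=1) - sims[:, :n_half].mean(axis=1)
p_as_good = (sim_diff >= observed_diff).mean()

# (S-2): severity is the probability the test would accord *less* well with H
# than the actual data do, were H false.
severity = 1 - p_as_good
print(f"severity of the step hypothesis in this toy setup: {severity:.3f}")
```

With a step this large relative to the noise, almost no flat-series simulation matches the observed difference, so H passes the test severely; shrink the step or inflate the noise and the severity drops accordingly.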

A subsequent paper combines the observed and model-based temperature papers and applies the severe testing element – this has doubled the word count of the original two papers. Part of that paper involved the severe testing of trend analysis, which, applying Mayo’s rules, has never passed a severe test (as described by S-2). Several statistical hypotheses are possible (e.g., Franzke, 2012). If readers are aware of such papers where trend analysis has been tested and passed (without those tests being based on the assumption of trend-like behaviour), I would be keen to see them. We conclude that step changes pass a series of severe tests in better shape than trend analysis. In the short term (one to a few decades) this produces steps that integrate into a long-term, complex trend.

So why outline all of this now, with most of the work still unpublished? (Earlier papers developing the basic themes are available in the literature and online).

  1. The process is of interest, because it has shed some light (at least to me) on how science can be made more rigorous, using recent developments in the understanding of the history and philosophy of science. Potentially this could contribute to IPCC assessments. The way science is currently carried out can be improved, and some widespread misapprehensions (e.g., canards about Popper's falsifiability) can be more publicly debunked.
  2. The current spike in temperature is probably a shift in global climate to persistently hotter conditions, which is important for understanding ongoing risk. It is a process that most urgently needs to be better understood.
  3. It would be good to foster a public debate on what this hypothesis means, to continue to test it rigorously and, if more widely accepted, explore how its ramifications can be better understood.
  4. The impact of the climate wars on climate change science – the waste, the defensiveness and the lazy thinking that has accompanied that defence – has chewed up so many people's time and resources. There is more to write about this, but much of the return fire aimed at contrarian claims has been counter-productive, because it has been framed in their terms. The research community cannot openly admit, en masse, that they don't know how climate changes on decadal timescales – it is hidden away in the chapter texts of IPCC reports (see Box 11.1, paragraph 1 in link), then subsequently ignored in practice. Instead, there is a continual re-emphasis on the long-term trend. There is no objective way to distinguish a short-term from a long-term trend – the contrarian community know this, know it is a weak argument, and keep hammering the point. There's no time to waste – the nonlinear behaviour in climate needs to be properly explained, not statistically smoothed into a straight line. That is real energy out there, not just some funny numbers to be waved away.

 

 


Written by Roger Jones

May 22, 2016 at 1:19 pm

4 Responses


  1. Fascinating, Roger. My humble suggestion is to try online publishing, as James Hansen did before the Paris conference last year, and invite serious review comments, or publish a book, but then you’d need a supportive publisher with a contract up front to make the effort worthwhile.

    Of course Hansen with his reputation starts from a different place.

    Anyway, best of luck, whatever you try!

    Brian

    May 22, 2016 at 3:21 pm

    • Thanks Brian,
      we are trying a very similar path to Hansen’s. Self-publishing and a journal with open review.

      A book would take longer to write but there’s enough material. Finding a publisher is no problem but this isn’t preferred because of the time delay.

      Plan C would be to get it examined as a D Sc thesis but that would cost $7.5 k for the examination.

      Roger Jones

      May 22, 2016 at 3:57 pm

  2. Just reading this post and not your articles/manuscripts, I am left with the question whether it is ever theoretically possible to distinguish between
    * step-like changes (with white noise) and
    * a gradual change with auto-correlated noise.

    The distinction between short-term and long-term trends in the public "debate" is just an attempt to formulate, without math, the fact that the uncertainties in the trend estimates are very different and that trends over short periods are very prone to cherry-picking. In the scientific literature you can (objectively) compute the uncertainties of the trend estimates and whether there is statistical evidence for a trend change.

    Victor Venema

    May 22, 2016 at 9:45 pm

    • Victor,

      yes and no. I would challenge the claim that the uncertainties of trend estimates in temperature can be objectively computed. That rests on the assumption that the data are serially independent. They are normally distributed but exhibit heteroskedasticity. Because of the complexity of the climate system, the autoregressive elements are unknown. There is an ENSO element that operates on timescales of roughly 2 to 7 years; there are other harmonics in the system, including quasi-cycles of roughly 11, 22/23 and 60 years, and so on. In an environment of external forcing there is no control accurate enough to set a base case. This includes model control runs and palaeoclimate proxies.

      Julia Slingo captures this in her paper Statistical models and the global temperature record when she says this:

      “The results show that the linear trend model with first-order autoregressive noise is less likely to emulate the global surface temperature timeseries than the driftless third-order autoregressive integrated model. The relative likelihood values range from 0.001 to 0.32 for the time periods and datasets studied, where a value of 1 equates to equal likelihoods. This provides some evidence against the use of a linear trend model with first-order autoregressive noise for the purpose of emulating the statistical properties of instrumental records of global average temperatures, as would be expected from physical understanding of the climate system.

      This is not, however, evidence for the efficacy of the driftless autoregressive integrated model. Similar comparisons between the driftless (trendless) model and two autoregressive integrated models that allow for drift (trend) give likelihood values ranging from 0.45 to 2.58 for the HadCRUT4 dataset. The comparison is therefore inconclusive in terms of selecting the notionally best model. Furthermore, these comparisons do not provide evidence against the existence of a trend in the data.

      These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.”

      Also, Cohn and Lins (2005) provide a table of trend significance with underlying statistical models:

      Table 1. Estimates of trend magnitudes and p-values corresponding to various models fitted to the annual Northern Hemisphere temperature departure data, 1856–2002:

      H0 process     p-value
      White noise    1.8e-27
      MA(1)          1.9e-21
      AR(1)          5.2e-11
      LTP            4.8e-3
      LTP            9.4e-3
      ARMA(1,1)      1.7e-4
      LTP + MA(1)    7.2e-2
      LTP + AR(1)    7.1e-2

      Cohn TA, Lins HF (2005) Nature’s style: Naturally trendy. Geophysical Research Letters 32 (23):L23402. doi:10.1029/2005GL024476
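The kind of model comparison behind such tables can be sketched with an information criterion on synthetic data. This is a hypothetical illustration of comparing a trend model against a step model, not the Cohn and Lins method; the step location is assumed known and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
t = np.arange(n)
# Synthetic series whose true structure is a single step, not a trend.
y = 0.4 * (t >= 50) + rng.normal(0, 0.15, n)

def aic(rss, k):
    # Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k
    return n * np.log(rss / n) + 2 * k

# Model 1: linear trend (slope + intercept, 2 parameters).
coef = np.polyfit(t, y, 1)
rss_trend = ((y - np.polyval(coef, t)) ** 2).sum()

# Model 2: one step with assumed-known change point (2 means + location).
pre, post = y[:50], y[50:]
rss_step = ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()

print(f"AIC trend: {aic(rss_trend, 2):.1f}, AIC step: {aic(rss_step, 3):.1f}")
```

When the generating process really is a step, the step model wins the comparison even after paying for its extra parameter; which model the data prefer is an empirical question, not something to assume in advance.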

      This is why it’s important to be able to lay out an entire argument for a non-standard proposition, otherwise each (reasonable) point needs to be answered in detail when it’s raised.

      Roger Jones

      May 23, 2016 at 2:41 am

