Victor,

Yes and no. I would challenge the claim that the uncertainties of temperature trend estimates can be objectively computed. That claim rests on the assumption that the data are serially independent. They are approximately normally distributed but exhibit heteroskedasticity, and because of the complexity of the climate system the autoregressive structure is unknown. There is an ENSO component operating on timescales of roughly 2 to 7 years, and there are other quasi-periodic elements in the system, including roughly 11-year, 22/23-year, and 60-year harmonics. In an environment of external forcing there is no control known with sufficient accuracy to set a base case; this includes model control runs and palaeoclimate proxies.
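To make the serial-independence point concrete, here is a minimal sketch (my illustration, not anything from the comment above): trendless AR(1) noise is fitted with an OLS trend, and the naive standard error is compared with one adjusted via the well-known effective-sample-size correction n_eff = n(1 - r1)/(1 + r1). The phi value and seed are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def ols_trend_se(y):
    """OLS trend plus two standard errors: the naive one (assumes
    serially independent residuals) and an AR(1)-adjusted one using
    the effective sample size n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    var_slope = s2 / np.sum((t - t.mean()) ** 2)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)
    var_adj = var_slope * (n - 2) / max(n_eff - 2, 1.0)
    return beta[1], np.sqrt(var_slope), np.sqrt(var_adj)

# 150 "years" of AR(1) noise (phi = 0.6, assumed for illustration),
# with no true trend at all.
n, phi = 150, 0.6
eps = rng.normal(size=n)
y = np.zeros(n)
for i in range(1, n):
    y[i] = phi * y[i - 1] + eps[i]

slope, se_naive, se_adj = ols_trend_se(y)
# se_adj exceeds se_naive whenever residuals are positively autocorrelated
print(slope, se_naive, se_adj)
```

The adjustment here handles only first-order autocorrelation; as the comment notes, the true autoregressive structure (ENSO, multidecadal harmonics) is unknown, so even this corrected uncertainty is optimistic.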

Julia Slingo captures this in her paper "Statistical models and the global temperature record" when she writes:

“The results show that the linear trend model with first-order autoregressive noise is less likely to emulate the global surface temperature timeseries than the driftless third-order autoregressive integrated model. The relative likelihood values range from 0.001 to 0.32 for the time periods and datasets studied, where a value of 1 equates to equal likelihoods. This provides some evidence against the use of a linear trend model with first-order autoregressive noise for the purpose of emulating the statistical properties of instrumental records of global average temperatures, as would be expected from physical understanding of the climate system.

This is not, however, evidence for the efficacy of the driftless autoregressive integrated model. Similar comparisons between the driftless (trendless) model and two autoregressive integrated models that allow for drift (trend) give likelihood values ranging from 0.45 to 2.58 for the HadCRUT4 dataset. The comparison is therefore inconclusive in terms of selecting the notionally best model. Furthermore, these comparisons do not provide evidence against the existence of a trend in the data.

These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.”

Cohn and Lins (2005) also provide a table of trend significance under different underlying statistical models:

Table 1. Estimates of Trend Magnitudes and p-Values Corresponding to Various Models Fitted to the Annual Northern Hemisphere Temperature Departure Data, 1856–2002

H0 Process    p-Value
White noise   1.8e-27
MA(1)         1.9e-21
AR(1)         5.2e-11
LTP           4.8e-3
LTP           9.4e-3
ARMA(1,1)     1.7e-4
LTP + MA(1)   7.2e-2
LTP + AR(1)   7.1e-2

Cohn TA, Lins HF (2005) Nature’s style: Naturally trendy. Geophysical Research Letters 32 (23):L23402. doi:10.1029/2005GL024476
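Cohn and Lins' point, that the apparent significance of a trend depends heavily on the assumed noise model, can be illustrated with a small Monte Carlo (my sketch, not their code): trendless AR(1) series are tested for a trend under the (wrong) white-noise assumption, and the false-positive rate at the nominal 5% level comes out far higher than 5%. The phi value, series length, and seed are assumptions chosen for illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def white_noise_pvalue(y):
    """Two-sided trend p-value under the white-noise assumption,
    using a normal approximation to the t statistic."""
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = sqrt(s2 / np.sum((t - t.mean()) ** 2))
    z = abs(beta[1]) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Trendless AR(1) series (phi = 0.7, assumed; n = 147 mimics the
# 1856-2002 record length): how often does the white-noise test
# claim a "significant" trend at the 5% level?
n, phi, trials = 147, 0.7, 500
false_pos = 0
for _ in range(trials):
    eps = rng.normal(size=n)
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = phi * y[i - 1] + eps[i]
    if white_noise_pvalue(y) < 0.05:
        false_pos += 1

print(false_pos / trials)  # well above the nominal 0.05
```

This is only the AR(1) case; under long-term persistence (the LTP rows in the table) the inflation of apparent significance is even more severe.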

This is why it’s important to be able to lay out an entire argument for a non-standard proposition in one place; otherwise each (reasonable) point has to be answered in detail every time it is raised.

* step-like changes (with white noise) and

* a gradual change with auto-correlated noise.

The distinction between short-term and long-term trends in the public “debate” is just an attempt to express, without mathematics, that the uncertainties of the trend estimates are very different and that trends over short periods are very prone to cherry-picking. In the scientific literature you can (objectively) compute the uncertainties of the trend estimates and test whether there is statistical evidence for a trend change.
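The cherry-picking point can be shown directly (a sketch of mine with illustrative, assumed parameters, not real temperature data): in a synthetic series with a genuine positive trend plus AR(1) noise, the full-record trend is well constrained, while short 15-step windows yield wildly varying "trends" of either sign.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "temperature" series: true trend of 0.015 per step plus
# AR(1) noise (phi = 0.5, sd 0.1) -- illustrative values only.
n, phi, true_trend = 120, 0.5, 0.015
eps = rng.normal(scale=0.1, size=n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + eps[i]
y = true_trend * np.arange(n) + noise

def trend(series):
    """OLS slope of a series against its time index."""
    t = np.arange(len(series), dtype=float)
    return np.polyfit(t, series, 1)[0]

full = trend(y)
# Every 15-step window: a cherry-picker can find both steep and
# shallow (even negative) "trends" within the same record.
windows = [trend(y[i:i + 15]) for i in range(n - 15)]
print(full, min(windows), max(windows))
```

The spread of the short-window slopes dwarfs the full-record slope itself, which is exactly why trend claims over short periods need their (much larger) uncertainties attached.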

Thanks Brian,

we are taking a path very similar to Hansen’s: self-publishing and a journal with open review.

A book would take longer to write, though there is enough material. Finding a publisher would be no problem, but that route isn’t preferred because of the time delay.

Plan C would be to have it examined as a DSc thesis, but that would cost $7.5k for the examination.

Of course, Hansen, with his reputation, starts from a different place.

Anyway, best of luck, whatever you try!
