Climategate V2 emails: exchange on the extremes Table 3-10 in the IPCC TAR
The second tranche of emails stolen from the servers of the University of East Anglia’s Climatic Research Unit has been released by a group attempting a Wikileaks-style take on how climate science is done. Last time, given their origin, I wanted nothing to do with them. Now it is claimed there are some 200,000 emails in the database, so the likelihood of the remainder eventually being made public is pretty high. The latest 5,000 were selected on specific keywords but are otherwise random. Given that likelihood, the way science is being challenged on “free speech” grounds by opponents of specific scientific findings, and the need to acquaint the wider public with how science is put together, it seems worth reproducing some of those exchanges in full. This is also to counter the egregious quote mining, out-of-context selection and revivification of already-investigated claims going on in the blogosphere. Even the quotes taken from the README.txt file accompanying the leaked emails, which is the only part most journalists will have read, are wildly out of context.
The following exchange concerns an early version of Table 3-10 from Chapter 3, Developing and Applying Scenarios, of the Working Group II contribution to the Third Assessment Report. The table was a lot more complex than the final version, with stars denoting confidence and a wide range of extremes. After government review, the response was that the table was too complex and that the most important information to impart was the set of extremes known with relatively high confidence and their likely impacts. The final version, shown to the right, reflects those reviews.

A few things struck me on reading through these exchanges:

1. Some of the issues, such as likelihood vs confidence, are still live and haven’t been sorted out. I cover some of this in my paper on the latest IPCC uncertainty guidance.

2. The likelihood of change in these extreme events hasn’t shifted much between 2000 and now, although the confidence in them has. This suggests that even as the science becomes better understood, our ability to predict outcomes has not improved greatly, owing to the inherent complexity of the climate system.

3. Translating information across working groups has always been tricky. Our job in Working Group II is to translate climate science into forms that can be used in risk assessments of impacts and vulnerability, and in risk management through adaptation.
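To make the likelihood-vs-confidence machinery concrete, here is a minimal sketch (my own illustration, not from the emails) of the TAR's quantitative confidence scale. Only the medium band (three stars, 33–67%) is stated explicitly in the exchange below; the other band edges follow the published TAR uncertainty guidance, and the full star mapping is my extrapolation from the three- and four-star usage in the emails.

```python
# Illustrative sketch of the TAR five-level confidence scale.
# Only the 33-67% "medium confidence" band is taken directly from
# the email exchange; the remaining band edges follow the TAR
# uncertainty guidance, and the star counts are an assumption
# extrapolated from the stars used in the discussion below.

BANDS = [
    (0.95, "very high confidence", 5),
    (0.67, "high confidence", 4),
    (0.33, "medium confidence", 3),
    (0.05, "low confidence", 2),
    (0.00, "very low confidence", 1),
]

def tar_confidence(p: float) -> str:
    """Map a subjective probability p (0-1) to a TAR confidence label."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must lie in [0, 1]")
    for lower, label, stars in BANDS:
        if p >= lower:
            return f"{label} ({'*' * stars})"
    return "very low confidence (*)"

# Schneider's point in the exchange: a bare direction-of-change
# statement with no extra information sits near p = 0.5, i.e.
# medium confidence almost by definition.
print(tar_confidence(0.5))   # medium confidence (***)
```

This is why, in the exchange, a simple "will increase" claim with no quantification defaults to three stars: absent evidence pushing the probability above 67% or below 33%, the statement lands in the medium band.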
I’m sure readers will agree that what is going on here is neither a conspiracy nor being gamed for maximum effect, but in fact shows a group of people trying to craft accurate and appropriate messages from the science for use in decision-making. The emails have been edited for layout, a few spelling mistakes have been corrected and a couple of current email addresses have been altered. Otherwise they are as sent.
cc: “Pittock,Barrie” <firstname.lastname@example.org>, email@example.com, “Pittock,Barrie” <firstname.lastname@example.org>, lindam ucar, “Jones, Roger” <email@example.com>, m.hulme uea, firstname.lastname@example.org, meehl ncar, “Whetton, Peter” <email@example.com>, firstname.lastname@example.org, email@example.com
date: Mon, 23 Oct 2000 10:37:12 -0700 (PDT)
from: Stephen H Schneider <firstname.lastname@example.org>
subject: RE: Table 3-10: a third version and some other considerations
to: Timothy Carter <tim.carter vyh.fi>
Hello all–very constructive exchange. I agree with Barrie that downgrading a star may be a better reading of the literature–but since I’ve only read half of it I don’t have a strong opinion. I do think two stars would be wrong because that implies we have a great deal of information–thus confidence–that the event is pretty unlikely. Tim, you are indeed right that medium confidence means indifference to more or less since 50% is the random event.
This is why both in umpteen e-mail reviews of SPMs and TS (as well as in the guidance paper) I have tried–mostly in vain–to get people to make positive assertions without qualifiers like could. Then medium confidence has much more meaning. For instance, your table goes at least half way–you do specify the year and rough climate scenario.
The best thing would be to make a real estimate of what might happen then–like the 10% increase in hurricane intensity–or give a range, say, temperature will increase by 2-4 deg C. Then a medium confidence is a pretty affirmative statement of what we think we know. Medium confidence is true, virtually by definition, when we restrict ourselves to predicting just direction of change and haven’t much extra info to push it up or down. Nevertheless, it does make sense to keep it here, since the WG 2 assignment is for consequences, and if it is consequential to have an event that we deem equally likely to happen and it matters, then so be it–this is represented by your last table with the impacted sectors explicit.
Of course, it would be more controversial and take a sub group months to craft a real range of projections for 2100 for a given scenario, but then the confidence scale would be more meaningful. But at this stage just stay with what most of us seem to be able to live with–directions of change–and thus we’ll have medium confidence almost by definition for those categories where the state of the science doesn’t push our confidence in the projection much higher or lower–and that is, as I said, non-trivial information for policymakers who otherwise would be clueless whether such events were expected.
PS Just got Roger Jones’ good comments and I agree that we sometimes confuse confidence in events with confidence levels in the state of the science–shouldn’t give low confidence on the occurrence of an event when we mean speculative science and we don’t really know the likelihood of the event very well. I’m not sure how to deal with this operationally for the Table at this stage, other than to add a phrase to this effect in the already long list of footnotes at the bottom for those entries for which this problem fits.
On Mon, 23 Oct 2000, Timothy Carter wrote:
Dear Barrie et al.,
A few responses/queries to Barrie’s helpful comments before Australia logs out for the day:
I agree with the caption, but would like to see an explicit, up-front statement of the need to provide appropriate risk assessment advice for decisionmakers in WG2, as part of the text. This would also appear in the SPM. I.e., we need to make clear why we consider possibilities which are not in the high confidence categories, and which are not necessarily based on AOGCMs at their coarse resolution, but also on high resolution limited area models and on physical reasoning and observed correlations.
[TC] I think we have some of the latter information in the caption and footnotes. I agree that changes to the text will be needed.
Am I right in interpreting the views I have heard so far as arguing for retaining this Table (simplified version) in the SPM?
We should also consider wording that admits that there is a range of expert opinion, and that the quantitative uncertainties given are therefore a collective but not universally agreed view. I stand by to help with that wording if you ask Tim.
[TC] Are you (Barrie) suggesting that this should come in the caption or the text?
At the risk of Tim tearing his hair I would also like to make a few detailed comments on the entries.
Re TC “increased frequency”, I know several vocal TC experts who argue strongly that this is highly unlikely globally (see the two reviews by them published in BAMS in recent years), and would therefore be happier to downgrade the confidence for future increases to two stars. The footnote should remain, as that is the main thing re TC frequencies (they will change regionally in frequency, I think with high confidence).
[TC] I agree that 4 stars is too high.
This was a response to the WG I table entry “some models”, meaning that all models analysed showed this, that theoretical studies show such a change, but that “few current models are configured in such a way to reasonably represent such changes”.
However, Easterling et al. have possible (= 3 stars), and I would suggest downgrading to 3 stars rather than 2, for consistency with Easterling et al.
Caveat re. changes in regional TC frequencies is still footnoted, but I think we should keep 3 stars for the global changes as the Table entry.
Re a more El Nino-like mean state. I cannot understand why footnote c is there. Practically all the evidence is from GCMs, and some is pretty detailed analysis of frequency distributions of east-west temperature contrasts in the simulations. One from our lab even looks at an ensemble of runs (only three) and compares with observations, and suggests a reason for a delay in appearance (Cai and Whetton, GRL 27, 2577-2580, Aug. 15, 2000).
However, I would also like to see the number of stars reduced to three, as the AOGCMs are not all that good and there is some disagreement (see the latest PCMDI Report No. 61, “El Nino Southern Oscillation in coupled GCMs”, Sept. 2000, for a full account of the model shortcomings).
[TC] This is given a “likely” label by Easterling et al., who also note “no direct model analyses, but these changes are physically plausible on the basis of other simulated model changes”, which explains the footnote (c).
Furthermore, in the footnote to the WG I SPM table, it states: “recent trends for conditions to become more El Nino-like in the tropical Pacific are projected to continue in many models, although confidence in such projections is tempered by some shortcomings in how well El Nino is simulated in global climate models”
I’m open to dropping the footnote, if you feel that is appropriate. I suppose we could downgrade this to 3 stars from 4 stars – any other opinions?
Both these suggested changes (TCs and ENSO) would be more in line with WG1 assessments I think, and not preclude WG2 considering the possible impacts of the possible changes.
I do wonder if there should be more added in the text to justify our WG2 confidence levels, such as more references to the literature, especially to some that may not have made it in WG1 (eg., how is hail and lightning dealt with in WG1?). I have not yet read the WG1 SPM to see, nor of course the latest changes to their chapters. But I leave that to Tim and others’ judgement as they are more involved with WG1.
[TC] Agreed, I think we should reinstate the references we had in an earlier version of Table 3-10 on hail and lightning.
Final comment: I want to have a sense that we are all on the same wavelength concerning the meaning of the 3 stars (medium confidence) category. This is defined as 33-67% likelihood. Am I right in thinking that, baldly expressed, it implies an equal likelihood that something may or may not happen? Some WG I authors thought that the term medium confidence conveyed the idea that “though we don’t have high confidence in something happening, we do have medium confidence, implying that though we may harbour some doubts, we still think on balance, that it will happen”.
Sorry to hark back to endless prior discussions, but better here than in Plenary!