Models Methods Software

Dan Hughes

Hard Concepts

Boy, it’s difficult to get my mind around many of the concepts discussed in the post Tracking down the uncertainties in weather and climate prediction.

Updated July 10, 2010.

I have looked around but have not been successful in finding additional material from either the meeting or the presentation. I suspect all information presented at the meeting will eventually show up on the CCSM Web site.

Here’s a part that I find very unsettling, starting at the 15th paragraph in the post.

And now, we have another problem: climate change is reducing the suitability of observations from the recent past to validate the models, even for seasonal prediction:

Figure Uncertainty2. Climate Change shifts the climatology, so that models tuned to 20th century climate might no longer give good forecasts

Hence, a 40-year hindcast set might no longer be useful for validating future forecasts. As an example, the UK Met Office got into trouble for failing to predict the cold winter in the UK for 2009-2010. Re-analysis of the forecasts indicates why: Models that are calibrated on a 40-year hindcast gave only 20% probability of cold winter (and this was what was used for the seasonal forecast last year). However, models that are calibrated on just the past 20-years gave a 45% probability. Which indicates that the past 40 years might no longer be a good indicator of future seasonal weather. Climate change makes seasonal forecasting harder!
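
To make the mechanism in that quoted comparison concrete, here is a minimal synthetic sketch in Python of how a probability estimated from a hindcast record depends on the window chosen once the record drifts. It is not the Met Office procedure (which calibrates a forecast model against hindcasts rather than counting raw frequencies), and every number in it (the drift rate, the noise level, the cold threshold) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical winter-mean temperature anomalies for 1970-2009:
# a slow warming drift plus year-to-year noise (all numbers invented).
years = np.arange(1970, 2010)
anomaly = 0.02 * (years - 1970) + rng.normal(0.0, 0.8, size=years.size)

cold_threshold = -0.5   # call a winter "cold" if the anomaly falls below this

for window in (40, 20):
    recent = anomaly[-window:]
    p_cold = (recent < cold_threshold).mean()
    print(f"climatology from last {window} years: P(cold winter) ~ {p_cold:.2f}")
```

With a drifting baseline, the 40-year and the 20-year estimates differ simply because they sample different portions of the drift. The question, taken up below, is whether such a difference by itself tells you the climatology has changed, or whether something else in the chain could produce it.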

The conclusion, “Climate change makes seasonal forecasting harder!” is basically unsupported. There are a very large number of critically important aspects between “Analysis” and “Changed climatology” that are simply skipped over.

Firstly, the Analysis has been conducted with models, methods, computer code, associated application procedures, and users, any one of which separately, or in combination with the others, could contribute to the differences between the 40-year and 20-year hindcasts. Secondly, within each of these aspects there are many individual parts and pieces that could cause the difference; taken together the sum is enormous. Thirdly, relative to the time scales for climate change in the physical world, 20 years seems rather short, and maybe even 40 years is, too. Fourthly, no evidence has been offered to show that the climatology has in fact changed sufficiently to contribute to the difference.

The presentation seems to have leapt from (1) there are differences, to (2) the climatology has changed. I find this very unsettling. The phrase “jumping to conclusions” seems applicable.

With the given information, I think about all we can say is that the models, methods, code, application procedures, and users did not successfully calculate the data.

I don’t see that any ‘tracking down’ was done.


July 7, 2010 - Posted by Dan Hughes | Uncategorized

2 Comments

  1. No, we know the climate has changed over the last 40 years, and also that the change has accelerated over the last 20. That’s not in dispute. The question isn’t whether this change has happened. The question is whether this change is enough to cause a reduced forecast accuracy in models that are calibrated to a late 20th century climate. Julia will argue it is.

    To get a sense of why we know this, just look at the observational datasets over the last 40 years, or read the summary in the first few pages of each of chapters 3, 4 and 5 in the AR4 WG1 report:
    http://www.ipcc.ch/ipccreports/ar4-wg1.htm

    So, of course it looks like we skipped a step: you’re questioning something that is well understood by every one of the 350 scientists in the room when Julia gave her talk, and therefore didn’t need any more explanation. The specific example she used of comparing a 20 year to a 40 year calibration wasn’t intended to be proof that climate change is affecting forecast accuracy, just one illustrative datapoint that’s consistent with the other evidence. This particular example might be caused by something else, but right now, changing climate is the best explanation, and one that fits best with all the other evidence.

    Notice the tentative phrasing “might no longer be a good indicator…” We’re starting to notice a problem here, we have one very good explanation for it (climate change), and no other simple explanation. To pin this down more clearly, we need a systematic research effort into causes of uncertainty (which is what Julia argued for at the end of her talk).

    Comment by Steve Easterbrook | July 13, 2010

  2. Dan, this:

    Models that are calibrated on a 40-year hindcast gave only 20% probability of cold winter (and this was what was used for the seasonal forecast last year). However, models that are calibrated on just the past 20-years gave a 45% probability. Which indicates that the past 40 years might no longer be a good indicator of future seasonal weather.

    says it all, or almost. The choice of the number 40 can be considered random. It could have been 37 or 44.
    Now, as the climate is the result of a chaotic process, we both know that only probabilities can be predicted.
    And that only if the climate can be proven ergodic.
    It is more than probable that of the 300-and-some present at the meeting you reported about, 80% do not realise what ergodicity means and the remaining 20% don’t know whether the climate is ergodic or not.
    In any case, finding that an event happened whose probability was estimated at 20% is not shocking. 20% is a pretty big probability.
    What is shocking is to attach any importance whatsoever to the fact that one finds 45% when calibrating over only 20 years.
    So one changes the calibration period until, next year, a very cold unforeseen summer happens and people say that with a 27.5-year calibration the probability of this event would have been higher.
    Etc., etc.

    What these people do NOT consider, and what is for me the simplest and most probable explanation, is that the models THEMSELVES do not capture at all the probability distribution of regional events.
    Clearly the moving target of variable calibration hindcast periods solves nothing, and again it is extremely simple to understand why: under the hypothesis that the models do not capture the regional PDFs, there does NOT exist any unique hindcast calibration period.

    There is still too much work to be done at the fundamental level (spatio-temporal chaos, ergodicity, error propagation) to jump immediately to numerical models, which will in any case always give results all over the place.
    This irrational conviction that a computer can give all the answers, and that the only problem is to get more number-crunching power, is the source of the biggest problems.

    And if there is one domain where it has already been proven that the computer cannot give a true solution to the equations (not even the correct ranges), it is Lorenzian chaos (see the sketch after this comment).

    Comment by TomVonk | August 3, 2010
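
As a footnote to the Lorenzian-chaos remark in the comment above, here is a minimal sketch (standard Lorenz-63 parameters, a simple fixed-step RK4 integrator, Python; purely illustrative, not anyone’s production code) showing how two trajectories started 1e-10 apart separate until they bear no pointwise resemblance to one another:

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classic Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fixed-step fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

dt, n_steps = 0.01, 4000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])   # identical except for a 1e-10 nudge in x

for step in range(1, n_steps + 1):
    a = rk4_step(lorenz, a, dt)
    b = rk4_step(lorenz, b, dt)
    if step % 1000 == 0:
        print(f"t = {step * dt:5.1f}   |a - b| = {np.linalg.norm(a - b):.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself, which is why individual long numerical trajectories of chaotic systems are not meaningful and only statistical statements (and those only under assumptions such as ergodicity) can be defended.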

