A GISS ModelE code fragment
The various Global Climate Model, or General Circulation Model (GCM), codes around the world are the workhorses for the ‘what-if’ studies of the Global Average Temperature (and many, many other things) that appear in the IPCC Assessment Reports. There are roughly 20 of these codes. The actual models in the codes vary considerably from code to code, and over time within a given code. The GISS ModelE code is NASA’s version of a GCM.
Some GISS/NASA information about the ModelE GCM code is available here. There you will find information about the documentation of the model and code, along with access to download the source and an online HTML-based code viewer.
From time to time I like to browse around in the coding, looking at properties such as in-line documentation and physical models that I can relate to, and searching for things like a data dictionary. Basically there is a notable lack of detailed documentation for the theoretical models, the numerical solution methods, and especially the contents, purpose, and structure of the routines in the code. A data dictionary would be very helpful for understanding the variable names and units, along with a wealth of additional information. A lot of reverse engineering is required in order to understand all but the very simplest routines and functions. And one cannot be sure that a correct understanding has been extracted by reverse engineering.
The last time that I was in the coding I ran across the following code in subroutine DYNAM listed here:
C**** This fix adjusts thermal energy to conserve total energy TE=KE+PE
C**** Currently energy is put in uniformly weighted by mass
call conserv_PE(PEJ)
call conserv_KE(KEJ)
TE=(sum(PEJ(:)*DXYP(:))+sum(KEJ(2:JM)))/AREAG
ediff=(TE-TE0)/((PSF-PMTOP)*SHA*mb2kg) ! C
!$OMP PARALLEL DO PRIVATE (L)
do l=1,lm
T(:,:,L)=T(:,:,L)-ediff/PK( L,:,: )
end do
!$OMP END PARALLEL DO
starting at line 234 in the listing that I copied from the code viewer. The introductory comment is enlightening in itself. I think that this coding cannot be associated with any first-principles statement like the conservation of energy. If it could, I think the comment would have made that very clear; instead the adjustment is characterized as a ‘fix’.
The variable ‘ediff’ is the difference between the most recently-calculated sum of kinetic plus potential energy and the initial value of the sum calculated at the start of the calculation.
It has been difficult for me to track down the units of all the variables in the coding, and I have not been fully successful. I have determined that the temperature is very likely the potential temperature and that the variable ‘PEJ(:)’ has units [PEJ(:)] = [J/m2], as stated in subroutine conserv_PE. I find the units situation to be kind of messy. I don’t know why the code does not use a purer form of SI internally and handle all the conversions at input and output. I suspect that eventually an error in the units will be discovered. I think the ‘! C’ at the end of the line where ‘ediff’ is calculated might mean degrees C, but I am not certain, and I suspect that is not correct. The coding within the ‘!$OMP PARALLEL DO PRIVATE (L)’ loop, in which ediff is subtracted from ‘T(:,:,L)’, appears to divide ‘ediff’ by another pressure-related quantity, and that quantity does not seem to be the same as the one in the line where ‘ediff’ is calculated.
What this ‘fix’ does, in my opinion, is adjust the calculated temperature for the difference between the recently calculated sum of kinetic plus potential energy and the value of that sum when the calculation was started. This might be argued to be some kind of dissipation, but that is not correct. The important point is that this adjustment is purely an artifact of the numerical methods used in the code. The quantity ‘ediff’ is solely a result of the numerical methods and cannot be based on any kind of physical-basis argument.
Thus the potential temperature reported by the NASA GISS code seems to be the potential temperature plus an artificial numerical contribution.
I do not find any diagnostic evaluation of the magnitude of ‘ediff’. If the temperature is being adjusted by an artifact of the numerical methods, I think it should be monitored. Most importantly, it should never be done in the first place.
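To make concrete what I read the fragment as doing, and the kind of running diagnostic I have in mind, here is a rough stand-alone sketch in Fortran 95. The variable names, the column mass, the specific heat, and the toy ‘dynamics’ are my own assumptions for illustration; this is not ModelE code.

! Sketch of my reading of the 'fix': the mechanical-energy change over a
! dynamics step is converted into a uniform temperature adjustment, and the
! size of that adjustment is logged every step.  All numbers are made up.
program ediff_sketch
  implicit none
  real, parameter :: sha     = 1004.0  ! specific heat of air [J/(kg K)] (assumed)
  real, parameter :: colmass = 1.0e4   ! air mass per unit area [kg/m2] (assumed)
  real :: temp = 288.0                 ! global-mean temperature [K]
  real :: te   = 2.0e6                 ! global-mean KE + PE [J/m2] (made up)
  real :: te0, ediff, cum
  integer :: step

  cum = 0.0
  do step = 1, 10
     te0 = te                          ! energy before the dynamics step (TE0)
     call fake_dynamics(te)            ! stand-in for the momentum/advection step
     ediff = (te - te0)/(colmass*sha)  ! [J/m2] / [J/(m2 K)] = [K]
     temp  = temp - ediff              ! put the mismatch (back) into thermal energy
     cum   = cum + ediff
     ! the running diagnostic I would like to see in the printout:
     print '(a,i3,2(a,es12.4))', 'step', step, '   ediff [K] =', ediff, &
                                 '   cumulative [K] =', cum
  end do

contains

  subroutine fake_dynamics(te)
    real, intent(inout) :: te
    real :: r
    call random_number(r)
    te = te + 200.0*(r - 0.7)          ! mostly loses mechanical energy, but not always
  end subroutine fake_dynamics

end program ediff_sketch

If ediff really is small and of one sign, a running log like this would show it immediately; if it is not, that too would be worth knowing.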
All corrections and additional information are appreciated.
I think it might be helpful to set a context to this discussion – what significance does this model have and what are the claims made for the model?
I went over and looked at some of the code. Pretty cool! But there’s too much of it for any one person to have much luck finding much. (And my days of Fortran programming are long, long past.) I hope you get a core group of interested people here who can volunteer to go looking for any particular thing that’s being sought. Probably you should have one thread where people post up interesting finds they made, one for questions about where to find certain things and another perhaps where people volunteer to do stuff.
Here’s an interesting fragment from the sea ice subroutine …
Note that the “pond melt season” is defined as the months of June and July … I wonder if anyone has notified the sea ice that the pond melt can only accumulate in June and July …
w.
Well Willis, you should know that the Arctic has a very short summer…
A couple of things. First, I haven’t programmed in Fortran since 1984 or so, so you will have to forgive me. Have you tried downloading the code and putting it into a browser so that you can see what really happens with these values?
Second, in my experience with “real code(tm)” comments are often misleading and tend not to be altered as changes are made to the code. I know, I know, this is bad programming practice but it happens all the time — especially in commercial shops. I haven’t worked in a scientific organization so I am not qualified to comment on the pressure on code monkeys at GISS, but I have a funny feeling it is not a whole lot less (whoever the code monkeys might be, BS, MS or PhD).
Third, what routine is this fragment from? Is it even called anymore? Stale code gets left in all the time, especially with compiled languages. Since stale code can result in a performance decrease in JIT-compiled languages (Java, C#), it tends to be removed.
Over 25 years of working on Unix internals, NT internals, and the internals of complex commercial software, I’ve gotten pretty used to reading OPC (sort of like other people’s cigarettes) and it takes a certain talent.
However, Willis — if the code you found is still in use, the pond melt season should probably be dynamic; after all it has gotten warmer up there!
This comment may simply be an expansion of Dan’s original post. I’m writing partially because as a software professional, I find it hard to believe what I am seeing. So I write and hope that others will poke holes in my observations. I absolutely do NOT want to be correct in my first impressions…
First, I want to express appreciation for the work Gavin et al have put into this project. They’ve done a great job of packaging their massive code to make it portable, are using standard (CVS) tools to track changes, etc.
At the same time, as a software professional with more than a little experience, this entire system feels like what we charitably call “spaghetti code.” I hope I’m wrong. I have engineering/coding questions and model questions that quickly emerge after a few minutes of browsing.
Engineering/coding questions:
* Do those of you with lifetime experience coding in fortran (or R, or whatever) find it easy to read and write this code with complete understanding and assurance that bugs have not crept in? (For myself, the cryptically brief variable names mean that a single slip of a keystroke can create a subtle never-found buglet. It scares me.)
* It would be very helpful to see the rate of changes to the code, combined with quantification of the changes in results due to each code change. Anyone ever seen data even approximating this?
* Do ANY of these climatology software teams have an affiliated QA/testing group? I don’t mean those who play with the model and comment on observed anomalies. I mean professional software testers. People who write test scenarios in their sleep, who create automated test processes, who read and fix the code, who build “test stands” that run standard test sequences that verify, given same input, the impact of code changes on outputs and intermediate test points (I would accumulate total ediff and other “scaling factors” as a measure of model fuzziness).
* Does ANY model function the way I’d build one? If I were designing this, I would create a wall that makes it impossible for the model to even read the boundary/conservation values. The observer knows the boundaries; the model implements known physics and known/stated/presumed environmental factors. The model has zero permission to adjust itself to fit reality. AND, the model’s test bed self-analyzes to estimate uncertainty. (Ought to be simple: all inputs and known values can be defined with certainty levels. Then do a set of runs ranging throughout the available values and discover the impact of input uncertainty on results.)
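For the last point, here is the sort of brute-force input-uncertainty sweep I have in mind, written as a toy in Fortran 95. The parameter names, the ranges, and the ‘model’ itself are entirely made up; the point is only the pattern: sample the stated input uncertainties, run, and report the spread of the result.

program uncertainty_sweep
  implicit none
  integer, parameter :: nruns = 200
  real :: albedo, forcing, sens        ! hypothetical uncertain inputs
  real :: dt(nruns), r(3), mean, sd
  integer :: k

  do k = 1, nruns
     call random_number(r)
     albedo  = 0.29 + 0.02*(2.0*r(1) - 1.0)   ! nominal value +/- stated uncertainty
     forcing = 3.7  + 0.4 *(2.0*r(2) - 1.0)   ! [W/m2]
     sens    = 0.8  + 0.3 *(2.0*r(3) - 1.0)   ! [K per W/m2]
     dt(k)   = toy_model(albedo, forcing, sens)
  end do

  mean = sum(dt)/nruns
  sd   = sqrt(sum((dt - mean)**2)/(nruns - 1))
  print '(a,f6.2,a,f6.2,a)', 'result =', mean, ' K,  spread =', sd, ' K (1 sigma)'

contains

  real function toy_model(albedo, forcing, sens)   ! stand-in, not real physics
    real, intent(in) :: albedo, forcing, sens
    toy_model = sens*forcing*(1.0 - 0.5*(albedo - 0.29))
  end function toy_model

end program uncertainty_sweep

Swap the toy for the real model and the stand-in ranges for the documented input uncertainties, and the spread of the outputs is a first, crude estimate of the model’s sensitivity to what we do not know precisely.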
Model Questions
Here are a few questions of interest that those of you with more GCM experience could probably answer quite readily:
* Are “tracers” simply MONITORING to ensure the model maintains consistency with “known” physical constraints, or are they used to ADJUST the model to ensure it stays within “known” boundaries?
* Do physical “conservation” routines MONITOR the model’s adherence to physical conservation limits, or are they used to ADJUST the model to fit said conservation boundaries?
The conserv.txt file discusses “observing” certain identities. That’s great according to two definitions of “observe”:
* if code is simply making calculations such that those identities are correctly “observed”
* if monitoring code is “observing” whether those identities are actually maintained
OTOH, my quick first-pass through the code leads me to suspect that much of the model involves adjustments… forcings if you will, to “tap” (“whack”?) the values into conformity with known boundaries.
To me, that’s a 64 billion dollar Big Picture question.
I always had the impression that these systems model physical processes based on the underlying mathematics, and then calculate results based on a set of starting point inputs.
Now I suspect the systems are actually a bit of the above but largely constrained by a boundary “pipe” of “known limits to reality” that set constraints on resulting values in space and time. If true, then *every* such adjustment actually represents an error or larger-than-expected uncertainty in the modeling of the underlying physical processes.
I.e., I suggest that ALL “conservations” should be inherent to the model. ANY conservation-scaling is a measure of model uncertainty.
There are many examples of this sprinkled throughout. Just a few quickly noted:
* The energy fix noted by Dan Hughes (now at line 184 in ATMDYN_SI2000.f)
* The immediately-following scaling of WM mixing ratios “to conserve liquid water”
* From http://www.giss.nasa.gov/tools/modelE/modelE.html —
“Imposed Sea surface conditions
The simplest ocean model is one where the sea surface temperature is read in from a file, either a fixed (annually repeating) climatology, or a transient (monthly varying) realisation based on observations or previous simulations. Monthly means and end of month values are read in and a quadratic approximation used to interpolate to the (fixed) daily value at every grid point. This is constructed to ensure that the monthly mean of the daily interpolated values is equal to the specified mean. Along with SST, the sea ice concentration is also a required field. This is interpolated similarly but is constructed so that the minimum and maximum values are respected. Sea ice thickness is also fixed in these cases, but given the lack of reliable global observations, we set this based on a local scaling to the sea ice concentration.”
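As an aside, here is a minimal sketch of the kind of mean-preserving quadratic interpolation that paragraph appears to describe, as I understand it: fit a quadratic over the month to the two end-of-month values, with the remaining coefficient pinned down by requiring the integral over the month to equal the specified monthly mean. The numbers are made up and this is my own illustration, not the ModelE code.

program mean_preserving_interp
  implicit none
  real, parameter :: sst_start = 14.0  ! value at start of month (made up)
  real, parameter :: sst_end   = 16.0  ! value at end of month (made up)
  real, parameter :: sst_mean  = 15.5  ! specified monthly mean (made up)
  integer, parameter :: ndays = 30
  real :: c0, c1, c2, t, daily(ndays)
  integer :: d

  ! Fit f(t) = c0 + c1*t + c2*t**2 on t in [0,1] so that f(0) = sst_start,
  ! f(1) = sst_end, and the integral of f over the month equals sst_mean.
  c0 = sst_start
  c2 = 3.0*(sst_start + sst_end - 2.0*sst_mean)
  c1 = sst_end - sst_start - c2

  do d = 1, ndays
     t = (real(d) - 0.5)/real(ndays)   ! mid-day fraction of the month
     daily(d) = c0 + c1*t + c2*t*t
  end do

  ! The discrete mean of the daily values matches the specified mean to within
  ! the small midpoint-rule error of order 1/ndays**2.
  print '(a,f8.4,a,f8.4)', 'specified mean =', sst_mean, &
                           '   mean of daily values =', sum(daily)/real(ndays)
end program mean_preserving_interp

The monthly-mean constraint simply pins down the one extra degree of freedom a quadratic has beyond the two end values.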
I have no problem with interpolating of inputs. I do have a problem with scaling the model to observed values. Either the model can calculate correctly, or it can’t. If it is unable to function without arbitrary scaling during observable time, why should anyone give it any credence when it is supposedly predicting the past or future?
After ten minutes of browsing through the documentation and code, I could not continue. The code is rife with “scaling” adjustments to fit observations or “known” conservation limits. The change list is full of discoveries having significant impact on various elements. In spite of these “corrections”, and the admitted way-off results (http://pubs.giss.nasa.gov/docs/notyet/submitted_2006_Hansen_etal_2.pdf), this model is considered to be one of the best-available models.
Just as Steve McIntyre recognized a need for professional statisticians to take a look at the work being done, I now believe that experienced software industry professionals need to take a look.
I’ll conclude with another appreciation, for the transparency demonstrated by publishing http://www.giss.nasa.gov/tools/modelE/modelE_AR4_issues.html which notes some of the model defects found since AR4. I love some of the comments there:
* “stratospheric ozone depletion over the period 1979 to 1997 was originally underestimated by a factor of 5/9.”
* “After the submission of GISS-EH data to PCMDI, an algorithmic change was made to the ocean model which significantly improved the simulations…A combination of factors related to annual mixed-layer entrainment/detrainment created biases that occasionally led to excessive ice formation in the model. This problem was fixed when salinity was replaced by ‘spiciness’ during advection…”
Bottom line:
* We can observe global climate warming over the last 100 years
* We can create a software model that can be tweaked to approximate within 30-50 percent the observed climate, particularly if we rescale a variety of factors at each step of the way.
* We know what we’re looking for, so any overly cold or overly icy results indicate a remaining bug to be found.
* As a result, we’re (90 percent?) confident this model does a good job of simulating the 20th century, and therefore is useful for predicting future climate change and the potential for humankind to better steward the environment.
What a bunch of unbleached natural organic dehydrated greenhouse-methane-source bovine fecal matter. (With apologies to our local natural food store whose products increasingly make use of ‘unbleached natural organic dehydrated cane juice’… obviously much healthier than ‘sugar’)
Willis, I’m amazed that your code fragment shows hard-coded date values! So, if the pond melt season changes, someone has to wade through all that code to find where to make the hard-coded changes? These should at least be parameters, or the pond melt season should be calculated by some process internally. Amazing.
Dan, I welcome your desire to examine our code. This is obviously one reason why we make it available in the first place. I would welcome even more any constructive efforts to improve it. These codes are what they are – the result of 30 years and more of effort by dozens of different scientists (note, not professional software engineers), around a dozen different software platforms, and a transition from punch cards of Fortran 66 to Fortran 95 on massively parallel systems. There is always more to do – including improving documentation and implementing improved physics. However, you do need to appreciate that these tools are developed as research platforms by scientists, not operational forecasting products. The level of software engineering support for Numerical Weather Prediction models, for instance, is an order of magnitude ahead of what is available for us.
On the specific issue of ediff, it is simply the dissipation term in the KE equation and is a rather stable 2 W/m2 globally, as you would see if you ran the model (in the printout it is the CHANGE of KE by DISSIP). It is not a numerical artifact. We calculate and apply it globally because there isn’t a simple way of doing it otherwise, and it is too small to make any noticeable difference to any diagnostic other than the global energy balance.
In general, the answer to any query about ‘how important is X’ will be most clearly found by doing two experiments, one with and one without, and comparing the results. Some of those experiments we’ve done (for instance, with and without ediff), some we haven’t. If they show differences to important quantities we generally write papers about it. This is something anyone can do.
Finally, the public version of our code is around 3 years old now, and is not what is used in production runs today. This will be updated at some point soon.
> Finally, the public version of our code is around 3 years old now,
> and is not what is used in production runs today. This will be
> updated at some point soon.
Does this mean that the model runs in the IPCC’s AR4 are based on models whose source code is not available to the public ?
Gavin:
Of course others can do these things. But, when your team runs these things and the results do not show differences, does the team document that finding in some sort of NASA publication? (Similar to ORNL or NRC reports? I know these aren’t the sort of things that get into peer-reviewed documents, but people doing safety and nuclear work generally need to create the documents so that reviewers can review work rather than repeat it.)
These would be interesting to see if they exist.
(PS. I hope this doesn’t show twice. My browser kept hanging up while I tried to post. I don’t know what’s with that.)
Gavin, thank you for the information in your comment.
You have responded with several distinct, but somewhat related, items that I would like to continue to discuss, if possible. I see the items as being: (1) availability of the NASA/GISS ModelE code, (2) a request for constructive efforts to improve the models and code, (3) the quantity ediff in the code and the coding I gave in my original post, (4) execution of the code by anyone, (5) operational forecasting products, (6) newer version(s) of the code used for production runs, and (7) software engineering.
All corrections to any matter mentioned in the following are appreciated.
(1) Availability of the NASA/GISS ModelE code.
It is certainly true that some version of the code is available from the NASA/GISS Web site. However, there is not sufficient documentation available for anyone to gain a deep understanding of precisely what any of the coding is intended to do. Documentation both external and internal to the code is either not available or not sufficiently detailed. Based on the results of my searches, what documentation is available is so widely dispersed that obtaining an overall picture in a nice compact format is difficult.
The example of the internal documentation given by the ediff coding, if indeed the calculation is a proper accounting for the physical effects of dissipation, would seem to be misleading. If this is indeed a calculation of the physical dissipation, why doesn’t the comment simply state that? Additional information pointing to references containing both the continuous and discrete equations for the calculation would be most helpful.
The internal documentation, it seems to me, is required to explicitly and correctly state the purpose of the coding. To label a piece of coding as a ‘fix’, when in fact it is the calculation of a physical phenomenon or process, does not lead anyone to have greater confidence in the internal documentation.
(2) Request for Constructive Efforts to Improve the code
If you’ll look around this Web site (and other comments around the Web) you can easily find that my constructive efforts have been to point out that more documentation is absolutely needed.
I have also addressed the fact that the situation surrounding the AOLGCM models/codes is such that the very foundation of the scientific process cannot be applied. That is, independent verification and replication of reported results cannot be carried out.
As the situation now stands the scientific method cannot be applied to the AOLGCM codes and calculated results. It is not necessary for software to be developed by software engineers in order for the scientific method to be applied and followed.
(3) The ediff Coding
You state, “…. ediff, it is simply the dissipation term in the KE equation …”.
In contrast to your statement of what the quantity ediff represents, the coding seems to say that it is in fact the difference between the sum of kinetic energy (KE) plus potential energy (PE) at two different states of the fluid, or stages of a calculation. Given that my reading of the coding does not agree with your statement, the following discussions might be pointless.
Here is an indisputable fact; viscous dissipation always acts to increase the thermal energy of a fluid. As shown in the coding, the quantity ediff can be both positive and negative. As also shown in the coding, ediff appears in the calculation of the temperature preceded by a negative sign. Whatever the sign convention, in order to be in accord with the physical basis of the dissipation, this means that ediff must always be negative. If it is not, the temperature will be decreased by the dissipation, and that is not correct.
If ediff is the dissipation and it is positive, the temperature, a valid measure of the thermal energy of the material, will decrease. That is an incorrect response of the fluid to the physical process of viscous dissipation within the fluid.
If sufficient documentation of both the continuous and discrete equations was available, these discussions would not be necessary. Correct comment-type documentation within the coding would also be very helpful.
Another true fact is that it is extremely difficult to back out of any code a true and correct representation of the discrete equations and their numerical solution methods. Given the state of the documentation for the NASA/GISS ModelE code, and its enormous complexity, this goes double. Nevertheless I have tried to understand exactly what ediff represents.
I have found a calculation of TE0, and it is the sum PE and KE as calculated for TE, but at a different state of the fluid, or a different calculational stage, than for TE. I have not been able to determine if the reference state for TE0 is at the beginning of the time step, the beginning of the overall calculation, or the beginning of an iterative procedure.
Whatever the case, I have not been successful in showing that the physical process of viscous dissipation can be calculated by subtracting these two sums. Consider the following example. The continuous equation for the sum of KE and PE contains, on the right-hand side, accountings of bulk transport and of the interconversions of the various forms of pressure and viscous work into thermal energy.
Let’s say that TE0 is represented by
TE0 = TE0_1 + TE0_2 + TE0_3,
and TE is represented by
TE = TE_1 + TE_2 + TE_3,
where the various terms on the right-hand sides are the accountings of the various transport and interconversion processes for the various forms of energy.
Subtract these and we get
TE – TE0 = (TE_1 – TE0_1) + (TE_2 – TE0_2) + (TE_3 – TE0_3).
A single contribution on the right-hand side cannot be extracted by this process. Only the net effect of all the transport and interconversion processes is obtained. Importantly, the net effect also includes errors in the truncation approximations and any errors due to lack of convergence of the discrete equations to the continuous equations.
It would be very helpful in this discussion if we could agree on a common reference textbook (or textbooks) with citations to specific equations. This would allow independent verification of the model equations in the continuous domain. Independent verification and replication of results are the very foundation of the scientific, and engineering, process.
Citations to papers and reports or textbooks that cite exactly what continuous equations are being used would have made this entire discussion unnecessary.
(4) Execution of the Code by Anyone
It is more than a bit of a stretch to imply that anyone can download the code and perform investigations into the models and methods. Again the first stumbling block is the lack of sufficient documentation. I suspect that ultimately computing power will be the limiting process.
Equally important for such complex models/codes, it is very unlikely that the code would be correctly used. Deep expertise in use and application of the code is needed in order to correctly apply and understand the calculated results of the application processes.
Correct application of these codes and correct understanding of the calculated results demands that they not be employed as black-box software. This is especially true when the end-results of the analyses of the applications are intended to provide input to persons and organizations that will potentially make public-policy decisions based on the calculations.
And while actual use of the code is out of reach, sufficient documentation should be readily available so that deep understanding of many aspects of the models, methods, and coding is possible. Such details include development of the modeling and associated continuous equations, the discrete approximations to the continuous equations, the numerical solution methods for the discrete equations, the details of the structure of the code, and even the details of each and every calculation in each and every routine in the code.
(5) Not Operational Forecasting Products
I hope we don’t get sidetracked into discussions of ‘forecasting’, vs. ‘projections’, vs. ‘predictions’, vs. ‘what-if’, etc. It is a fact that the results from AOLGCM codes are an integral part of the IPCC reporting process. Entire sections of the reports are devoted to discussions of the models and results from calculations with the models. If any of the analyses and calculations of any AOLGCM models/codes are used to develop the conclusions of the IPCC reports, the self-imposed limitation to only peer reviewed publications should be doubly applicable to the AOLGCM models and codes.
A nice little loophole has been created, whether implicitly or explicitly I don’t know. The scientific journals associated with the climate-change community will accept papers for publication the basis of which are calculations by computer software that has not been peer-reviewed. So, AOLGCM-based papers get peer-reviewed and published and then cited in the IPCC reports. This is not correct because it bypasses the independent verification and replication processes of the scientific method.
(6) Newer Versions of the Code
The last two sentences in your comment only provide additional support that the scientific method is not being followed relative to the AOLGCM models, methods, and software. We have been directed to investigate a part of the AOLGCM contributions to the climate-change community that apparently is no longer in use. Additionally, equally apparent is the fact that the models/methods/codes used in the AOLGCM contributions to the most recent reports have not been available to anyone so that peer-review, independent verification, and replication can be conducted.
(7) Not Developed by Software Engineers
Almost all successful software designed and applied to analyses of inherently complex physical phenomena and processes has not been developed *by* software engineers. But, very importantly, the very best of such software has been designed and developed by scientists and engineers with intimate involvement of software engineers and with application of best-practice results from the software-engineering discipline.
But, again, this is not the major issue. That issue remains the fact that the basic foundation of the scientific process has not been, and more importantly cannot be, applied to the AOLGCM models/methods/codes in the climate-change community.
Mr. Pete,
I’m not a huge fan of these models over all; I have “issues” with the way some of these things are documented and communicated. On the other hand, I’m not in the “models are useless for anything and everything camp.” I think I can comment on some of your questions and suggest that the answers are “at least in this regard the model is ok.”
On this:
“Tracers” is often the term modelers use to describe any passive scalar, which may include temperature, salt, H2O or anything else that’s not homogeneously distributed in the atmosphere or ocean. Used this way, “tracers” aren’t something stuffed in to ‘monitor’ accuracy or ‘adjust’ behavior. They are just a specific sort of variable.
I’d assume the conservation routines are the coded up versions of the transport equations used to apply the known physical principles. They neither “monitor ” nor “adjust” the model; they are the model. (Or, at least part of it. Boundary conditions are always required; initial conditions are required for transient problems.)
What you are suggesting isn’t possible. When studying transport of anything, a model must “see” boundary conditions somewhere. For example, if you model fluid flow in a pipe, the model must “see” the pressure far upstream, far downstream, and something must be imposed at the pipe surfaces. It makes sense to criticize a code for imposing incorrect boundary conditions, or using poor approximations, but you can’t make the boundary conditions invisible to “the model”. The boundary conditions are part of the model.
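To make that concrete with a toy (my own illustration, with made-up numbers): even the simplest closed-form pipe-flow ‘model’ cannot produce a velocity until it is handed the upstream and downstream pressures and the no-slip condition at the wall. The boundary values are not an add-on; they are part of the model.

program pipe_flow_bc
  implicit none
  real, parameter :: p_up   = 1.0e5 + 10.0  ! imposed upstream pressure [Pa]
  real, parameter :: p_down = 1.0e5         ! imposed downstream pressure [Pa]
  real, parameter :: length = 10.0          ! pipe length [m]
  real, parameter :: radius = 0.01          ! pipe radius [m]
  real, parameter :: mu     = 1.0e-3        ! viscosity [Pa s]
  real :: r, u
  integer :: i

  ! Laminar (Poiseuille) profile: u depends directly on the imposed pressures
  ! and satisfies u(radius) = 0 at the wall.
  do i = 0, 5
     r = radius*real(i)/5.0
     u = (p_up - p_down)/(4.0*mu*length)*(radius**2 - r**2)
     print '(a,f7.4,a,f8.4,a)', 'r =', r, ' m   u =', u, ' m/s'
  end do
end program pipe_flow_bc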
===
As an extra comment: I suspect your concern about boundary conditions arises from the discussion of the GISS treatment of the ocean in some of the models? My impression is that all Gavin is describing is the fact that some versions of their code use models that don’t compute ocean circulation. They replace the ocean with “known” boundary conditions acting on the atmosphere.
The existence of these simpler models is not a problem per se. Sure, in principle if you are modeling the whole planet, it would be better to compute the ocean circulation – if you could do it correctly – but the fact is either you a) compute the ocean behavior or b) assume the ocean circulation is known and impose that as a boundary condition. Still, my impression is that some people do certain numerical experiments to test the importance of a new parameterization (say for clouds). During preliminary testing, someone might want to do computations holding the ocean constant and then see what happens. This would be a perfectly legitimate thing to do during code development, and it’s nice that the GISS code has that option available.
The “problem” with the ocean being treated as a boundary condition only arises if it’s treated this way when one tries to study a problem where you need to actually model the ocean. I suspect this would be important if you are trying to do any remotely long term forecasts, but otherwise, the option gives scientists some flexibility to study something in detail without worrying about the computational burden of computing “everything”.
(1) Availability of the NASA/GISS ModelE code.
> Documentation both external and internal to the code is either not available or not sufficiently detailed.
True. Creating good documentation takes time and effort, and unfortunately the scientists are not paid to do that. You may not be impressed, but the situation now is significantly better than it used to be. However, even perfect documentation does not impart understanding – that is always going to take work.
> If this is indeed a calculation of the physical dissipation, why doesn’t the comment simply state that? Additional information pointing to references containing both the continuous and discrete equations for the calculation would be most helpful.
If the dissipation isn’t included (which it wasn’t for a long time) it would be a small error that was ‘fixed’ by this code fragment. I doubt you really need a reference to the continuous and discrete versions of the conservation of energy!
(2) Request for Constructive Efforts to Improve the code
> more documentation is absolutely needed.
Thanks, but that is well known. Bears repeating I suppose.
> the situation surrounding the AOLGCM models/codes is such that the very foundation of the scientific process cannot be applied. That is, independent verification and replication of reported results cannot be carried out.
Absolutely untrue. You can redo almost any reported experiment and test the sensitivity of the results. You can download CCSM3 and do the equivalent experiment and see whether the same phenomenon occurs, and you can test predictions against real-world data.
(3) The ediff Coding
> You state, “…. ediff, it is simply the dissipation term in the KE equation …”.
Slight mis-statement. It is the term for the dissipation of KE in the temperature equation.
> the coding seems to say that it is in fact the difference between the sum of kinetic energy (KE) plus potential energy (PE) at two different states of the fluid, or stages of a calculation.
No contradiction. The different stages are before and after the momentum/advection equations are dealt with. Any loss in total energy, associated principally with KE dissipation, is turned to heat.
> viscous dissipation always acts to increase the thermal energy of a fluid. As shown in the coding, the quantity ediff can be both positive and negative.
The coding does not tell you the sign. ‘ediff’ in fact is always negative (which leads to an increase in temperature). The sign convention is arbitrary.
> Citations to papers and reports or textbooks that cite exactly what continuous equations are being used would have made this entire discussion unnecessary.
Energy is conserved: $\frac{d}{dt}\int E \, dA = 0$; $\frac{d}{dt}\sum_{i,j} E_{ij} = 0$.
(4) Execution of the Code by Anyone
It is difficult, but not impossible, and it isn’t just amateurs doing it. It works best when they keep us apprised of their ideas and we keep them up-to-date with fixes and suggestions. I disagree that good documentation is a substitute for using the model. You would have worked out the importance (or not) of ediff in much less time than these postings took if you’d just run the model, put in some print statements, and looked at the diagnostic printout.
> Such details include development of the modeling and associated continuous equations, the discrete approximations to the continuous equations, the numerical solution methods for the discrete equations, the details of the structure of the code, and even the details of each and every calculation in each and every routine in the code.
Fine. Please inform our funders of the need to double our budget.
(5) Not Operational Forecasting Products
This is a distinction not between projections and forecasting, but a distinction between research and operations. One is a science, the other is engineering (roughly, though obviously the distinction is not clear cut). It is conceivable that climate forecasting could be made into an operational unit (a National Climate Service), but as it stands all climate model development groups are research based – we do not have a mandate to produce ‘products’. Compare that to the National Hurricane Center, which does.
> The scientific journals associated with the climate-change community will accept papers for publication the basis of which are calculations by computer software that has not been peer-reviewed. So, AOLGCM-based papers get peer-reviewed and published and then cited in the IPCC reports. This is not correct because it bypasses the independent verification and replication processes of the scientific method.
This is simply not the case. No complex code can ever be proven ‘true’ (let alone demonstrated to be bug free). Thus publications reporting GCM results can only be suggestive. If a result is interesting it gets looked at by other model groups to see if it’s robust. Thus a background of replicated results gets built up, along with a sense of what is and what isn’t ‘bankable’. This is much more useful than showing that any one code is ‘correct’, since it can never be shown to be complete. With limited resources, this is the approach that the field has taken.
(6) Newer Versions of the Code
> We have been directed to investigate a part of the AOLGCM contributions to the climate-change community that apparently is no longer in use.
Model development is continuous. We released the ‘frozen’ version that was used in the IPCC AR4 runs. All of the results currently being published use physics as coded in that version. However, newer runs will use updated code, that is inevitable. Changes will be documented in the literature.
Science using GCMs does work differently from that in other fields. There is no agreed upon set of equations that completely cover the problem and the need for parameterisations of unresolved processes means that there never will be. Thus the evaluation of models will always be against real world data, not against pure theoretical solutions.
Hello again Gavin,
For item (3) I’m still confused, but maybe I have made some progress. I have come to the conclusion that the complete equation for conservation of KE has not been used. And the dissipation is ‘backed out’ of an approximate equation for KE. Given the velocity distribution solution from the momentum equations I don’t understand why this is necessary.
A complete equation for conservation of KE requires accounting on the RHS for (1) bulk flow of KE, (2) work done by pressure of surroundings, (3) rate of reversible conversion to internal energy, (4) rate of work done by viscous forces, (5) rate of irreversible conversion to internal energy (dissipation), and (6) rate of work done by gravity. See Aris, for example. I think this equation would usually be solved for the change in KE and not used as an equation for the viscous dissipation. The viscous dissipation must be consistent with the symmetric stress tensor used in the momentum equations for the fluid flow. If the momentum equations solutions are available why not simply calculate the dissipation? Equally important, when a phrase like ‘conservation of energy’ is used it should always be a reference to the fundamental form for which energy is in fact conserved. Approximate forms do not conserve energy.
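For reference, writing those six contributions out with one common sign convention (viscous stress $\tau$, dissipation $\Phi = \tau : \nabla \mathbf{v} \ge 0$ for a Newtonian fluid), the kinetic energy budget reads
$\partial_t \left( \tfrac{1}{2}\rho v^2 \right) = -\nabla \cdot \left( \tfrac{1}{2}\rho v^2 \, \mathbf{v} \right) - \nabla \cdot ( p \mathbf{v} ) + p \, \nabla \cdot \mathbf{v} + \nabla \cdot ( \tau \cdot \mathbf{v} ) - \Phi + \rho \, \mathbf{v} \cdot \mathbf{g}$,
where the terms on the right correspond, in order, to items (1) through (6) above. Only $-\Phi$ is the irreversible dissipation; the other terms move energy around or exchange it reversibly with internal energy.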
If I have correctly followed your comments, and the coding, the viscous dissipation in the ModelE model is approximated by
Diss = (KE_1 + PE_1) – ( KE_0 + PE_0 ), where _1 is a final state and _0 is an initial state.
Only items (5) and (6) in the above list are retained, all others are neglected. Conservation of KE cannot be satisfied if only parts of the correct equation are retained. Additionally I’ll still say that the calculation in the code accounts for both the approximate explicit parts retained plus implicit contributions due to numerical solution methods.
That this is not a complete measure of the viscous dissipation is shown by the fact that it gives the wrong answer for all flows for which the changes in KE and PE are zero. There can be viscous dissipation into internal energy when the changes in KE and PE are zero; steady, fully developed flow in a horizontal pipe is the classic example, in which pressure work is continuously dissipated into heat while KE and PE do not change. Some of the terms that have been dropped from the complete KE conservation equation make this possible.
I also have not been able to understand how KE+PE can be a negative-definite (for the ModelE sign convention), monotonically decreasing function of time for general, transient flows of compressible materials. Of course the coding does not tell me the sign used for the dissipation. But the coding does allow for the possibility that ediff can be either positive or negative.
I have seen that the range of the dissipation seems to be from about 2 W/m2 to about 15 W/m2. Have experiments been conducted to validate the model?
Maybe I’ll get back to this if I can find some papers and reports that develop the approximations used in the codes from complete statements of the continuous equations. I find that explicit statements of the assumptions and approximations, and of the steps in the process used to arrive at the final forms of the continuous equations, are very helpful.
(5) I have never used the words ‘true’, ‘bug free’ ‘correct’ or ‘complete’ (actually I have used ‘correct’, but not in the sense ‘proof of correctness’). No one who has been part of a software V&V and SQA project for any non-toy model/code has ever used these words either. Generally it seems that those seeking an exemption for their particular software project are the first to mention these concepts. Sometimes they also throw in something about the squelching of creativity, trampling of personal devotion to excellence, and smashing of creative coding too. An accepted and proven procedure for conducting V&V and SQA for real-world, evolved models/codes is outlined in the report linked in this post.
So far as I can tell, there has never been an independent, formal, documented V&V and SQA audit of any AOLGCM code as these procedures are understood in the software-development world.
(6) When a paper is reviewed for publication, I think reviewers carefully check where the numbers presented in the results come from (I always did). Reviewers do not pass over the equations and all other necessary parts of the problem statement as something of secondary importance simply because it looks like something that has been previously published. Direct verification of the problem statement and its solution are integral parts of the peer-review process.
Numerical solution methods are notorious, and rightly so, for producing numbers that might in some rough approximate way satisfy the discrete equations and yet at the same time have no relation to the solutions of the continuous equations. The actual source of the numbers is of great importance, and it is this source that must be directly verified by persons independent of the author of the work under review. The same process applies to software. Comparison of computer-code output does not represent verification, validation, or SQA. And most certainly it does not provide information relative to ‘truth’, ‘bug freeness’, ‘correctness’, or ‘completeness’.
There are journals published by professional engineering societies that will not accept papers for publication in the absence of a direct demonstration of Verification of the numerical solution methods. The science journals have given a free pass to papers that are based on numerical calculations. If they had not, there would be requirements that the calculations be directly demonstrated to be actual solutions of the continuous equations. A paper based on a single calculation on a single grid (with the exact same code) would be rejected out of hand and returned to the authors.
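As a concrete illustration of the sort of direct Verification evidence I mean, here is a toy calculation of the observed order of convergence from solutions on three systematically refined grids. The three ‘solution’ values are made up purely to show the arithmetic; they were constructed so that the exact answer is 1.0 and the discretization is second order.

program observed_order
  implicit none
  real, parameter :: refine   = 2.0       ! grid refinement ratio
  real, parameter :: f_coarse = 1.250     ! solution functional on the coarse grid (made up)
  real, parameter :: f_medium = 1.0625    ! ... on the medium grid (made up)
  real, parameter :: f_fine   = 1.015625  ! ... on the fine grid (made up)
  real :: p, f_exact_est

  p = log((f_coarse - f_medium)/(f_medium - f_fine))/log(refine)
  f_exact_est = f_fine + (f_fine - f_medium)/(refine**p - 1.0)

  print '(a,f6.3)', 'observed order of convergence p = ', p
  print '(a,f8.5)', 'Richardson estimate of the exact value = ', f_exact_est
end program observed_order

A table like this, produced for each major release on a problem with a known or highly resolved reference solution, is the kind of evidence a reviewer can check directly.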
Pingback by Models, Methods, Software » Blog Archive » Dissipation of Fluid Motions into Thermal Energy | June 3, 2007 |
[…] http://www.ecd.bnl.gov/steve/pubs/HeatCapacity.pdf Climate Study Results in Bold Headlines, Claims Models, Methods, Software » Blog Archive » A GISS ModelE code fragment Climate Audit – by Steve McIntyre » East Anglia Refusal Letter Climate Audit – by Steve McIntyre » […]
Pingback by Follow the Money.... - Debate Politics Forums | September 26, 2007 |
Hi Dan
I hope you don’t mind me dropping this in here, it seemed an appropriate place to put it.
There is an article in this month’s Physics World that will probably make you chuckle, “Fortran at 50”, discussing how the HadSM3 climate model consists of “over one million lines of Fortran code”. It adds a quote at the end:
If a researcher is experimenting, I can’t imagine Fortran being easier to use than something like Matlab or R. And if you’re writing a million lines of code then you really need to be getting software engineers involved if you want any hope of getting something with a fairly low level of bugginess. The difficulty in writing reliable software goes far beyond just learning the syntax of the language. Whilst I have no major problems with Fortran per se, it is the naivety about the problems of producing software and complex models within the scientific community that stands out in this article.
http://physicsworld.com/cws/article/print/31912
Thanks for the pointer, Spence.
A million lines of code and no software engineering is not a pretty picture.
It is very likely, with high confidence, that the resulting code won’t be pretty either.
Re #18: the attachment to Fortran is bizarre in my opinion. The idea that anyone anywhere is still taking up Fortran as a first language pervades the ludicrous Fortran textbooks. Nobody takes up Fortran unless they are maintaining something crufty that runs on a supercomputer. (Yet as far as I know there is no introduction to Fortran book for seasoned professionals in print! I have to work out my issues from cloying texts written for undergraduates as if nothing had happened in computing since about 1968). The idea that it is easier to learn than Matlab or Python can only be advanced by people who only know one language; it is in fact the most baroque and confusing language out there.
Re #9, alas, yes. I am as uncomfortable as any of you with nonmilitary publicly funded codes being closed and simply can’t fathom any reason for it. I do hope all of you are as strident advocates for open source elsewhere in the public sector as you are in climate research.
I suppose you won’t take my word for it that there are all sorts of pressures extraneous to the scientific community to keep science codes closed. The best solution is to have the grant agencies specify an open source license in their proposal calls. I’ll bet Microsoft will be thrilled by that.
That aside, many of the demands you all are making, however reasonable (and some of them are very reasonable), are expensive, and climate work (other than remote sensing which eats up the bulk of the big numbers often quoted) has always been run on a shoestring, as Gavin rightly points out.
I wonder whether most of the readers of this blog will support the increase in funding required to get this thing done right from top to bottom. Dan has already expressed himself opposed to a rewrite, calling the idea “naive”, though I don’t really understand why, being naive and all myself.
I promise you that doing it right will not make global warming go away, though, so if that’s what you’re after you probably shouldn’t advocate for ponying up.
Michael, there isn’t a single post here in which I have expressed any opinion about ‘global warming’.
And I did not express myself as being opposed to a rewrite.
I have always been responsible for obtaining funding for the projects on which I work. Why do you and Gavin think that it’s my (or our collective) responsibility to obtain funding for you? Plus, as a practical matter, exactly how would that work? If I help you write the proposals do I get a cut of the work action?
More later, maybe.
I hate to bump a really old thread. I apologize! But I’m not clear on the units. Are the units of ediff J/m2 or W/m2? Both are mentioned above. Secondly, if the units are J/m2, what is the time step used?
Ryan O, I have posted additional discussions of the units here.
I still don’t have it all figured out.
[…] ran across a ModelE code segment that purported to be the viscous dissipation as mentioned in this post. There is some discussion with NASA / GISS employee Gavin Schmidt there in the comments. That led […]
[…] A GISS ModelE code fragment […]