Models Methods Software

Dan Hughes

Dissipation of Fluid Motions into Thermal Energy

I have been looking for information on this subject without much success. I have contributed these notes to a thread at Climate Science (the site appears to be down at this time; I’ll add a link later) and at Climate Audit. Viscous dissipation has also been the subject of this post. To date I have gotten very little feedback of any kind. Maybe all this is incorrect, but no one has said that either. All pointers to literature relating to the following will be appreciated.

To be clear, I am referring to the conversion of fluid motions into thermal energy via the viscous shear-stress terms in the momentum balance equations. These are momentum diffusion contributions to the momentum balance equations. The thermal energy due to viscous dissipation appears as a positive-definite contribution to the various forms of the thermal energy conservation equation. Viscous dissipation always acts to increase the thermal energy of the fluid. If the thermal energy equation is written in terms of temperature, the term always acts to increase the temperature.

Viscous dissipation is a volumetric process occurring at all times so long as fluid motions are present. The process is constantly acting to increase the thermal energy content of the fluid and thus increase its temperature.
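As a concrete, if highly simplified, illustration of the positive-definite character of this term, consider the dissipation function for an incompressible Newtonian fluid in simple shear, Phi = mu*(du/dy)^2. The sketch below is my own; the numbers are illustrative values for air, not taken from any GCM:

```python
def dissipation_rate(mu, dudy):
    """Volumetric viscous heating Phi = mu * (du/dy)**2, in W/m^3."""
    return mu * dudy**2

# Example: air (mu ~ 1.8e-5 Pa s) in a shear layer with du/dy = 10 (m/s)/m.
phi = dissipation_rate(1.8e-5, 10.0)
print(phi)  # ~1.8e-3 W/m^3

# The sign of the shear does not matter: the term is positive-definite,
# so it can only ever add thermal energy to the fluid.
assert dissipation_rate(1.8e-5, -10.0) == phi
```

The square guarantees Phi >= 0 for any velocity gradient, which is exactly why this term always acts to increase the thermal energy.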

In contrast, I am not referring to the explicit and implicit viscous-like terms that arise from, and are sometimes added to, the discrete approximations to the continuous form of the momentum equations. Somewhat ironically, these terms seem to be frequently labeled ‘momentum dissipation’. The label momentum dissipation seems to be used in the GCM world more than in other computational fluid dynamics applications. I think it is a good assumption that the viscosity-like coefficients that are used for momentum dissipation are not used to calculate the viscous dissipation contributions to thermal energy equations.

It is of course true that these momentum dissipation additions to the momentum balance equations have an indirect effect on the viscous dissipation, to the extent that they modify the velocity distributions and gradients in the flow. These latter are the correct terms for calculating the viscous dissipation.

Modeling and calculation of the viscous dissipation and consequent thermal energy addition in GCM models has a somewhat checkered history. This is due in part to the evolutionary nature of the models and their changing application areas. More nearly complete and comprehensive accounting of the components of, and physical phenomena and processes occurring in, the climate system has generally developed over decades. Application to calculations of the thermal history of the planet over hundreds of years has required that the energy-conservation aspects of the modeling be fundamentally sound and theoretically correct. However, a large obstacle to fundamentally sound and theoretically correct modeling has been the approximations made at the continuous-equation level. The momentum balance equations used in the models are simplified versions of the complete equations. More specifically, the thin-atmosphere approximation on a spherical surface, the representation of surface drag, the no-slip condition at land-atmosphere interfaces, the corresponding boundary condition at ocean-atmosphere interfaces, and the decomposition of the velocity into horizontal and vertical fields have all contributed to the problem. While calculations and analyses with the GCM models/codes have been carried out over four or five decades, it seems that only late in the 20th century and early in the 21st century have the problems with the modeling and calculation of the viscous dissipation been corrected in some of the models/codes. Two somewhat recent discussions have been given by Boville and Bretherton, and by Becker.

Again, this situation is most likely a reflection of the interest in carrying out calculations over hundreds of years of time.

It is my understanding that the global-average volumetric viscous dissipation in the atmosphere is calculated to be equivalent to about 2 W/m^2 of energy, and I have seen much higher values. I do not know if there are estimates available from measured data in the atmosphere. A very wide spread has appeared in the literature over the years. This conversion of fluid motions into thermal energy has occurred for as long as the composition and motions of the atmosphere have been roughly equivalent to present-day conditions.
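A back-of-envelope check of what such a number implies (my own arithmetic, using round values for surface pressure and the specific heat of dry air; none of these figures come from the literature cited here): spreading 2 W/m^2 of heating uniformly over the atmospheric column gives a small but steady bulk warming tendency.

```python
g = 9.81               # m/s^2, gravitational acceleration
p_surface = 1.013e5    # Pa; column mass per unit area = p_surface / g
cp = 1004.0            # J/(kg K), specific heat of dry air at constant pressure

column_mass = p_surface / g                  # ~1.03e4 kg/m^2
dTdt = 2.0 / (column_mass * cp)              # K/s implied by 2 W/m^2 of heating
print(f"{dTdt * 86400.0:.4f} K/day")         # roughly 0.017 K/day
```

Small per day, but it is a one-signed contribution acting continuously, which is the point of the argument that follows.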

The standard argument is that this is a small number relative to the other energy-addition contributions to an energy balance for the planet. However, the radiative-equilibrium argument means to me that, as equilibrium is approached, few energy additions can be considered small and neglected. Almost all finite numbers will not satisfy attempts to make 0 = 0. As I understand the situation, the effect of doubling CO2 in the atmosphere is equivalent to about 1.6 W/m^2, and the effects of the consequent changes in the thermal-energy state of the planet are expected to be easily measured and observed in time spans of only hundreds of years. How can an equilibrium-based approach to describing the thermal-energy state of the planet neglect the viscous dissipation if it is of the same order as the assigned imbalance?

I think another important issue is related to the development of the continuous equations used in GCM models/codes. Taken as a whole, these equations are known to be incomplete. The basic-equation models for the fluid motions and thermal state are not the complete equations that describe the motions and energy conservation. The all-encompassing parameterizations, many of which deal with mass and energy sources and sinks and interchanges across sub-system interfaces, are ad hoc/heuristic best-expert approximations (EWAGs) and thus cannot be assured of complete accounting of the mass and energy balances for the processes that are parameterized. In summary, the continuous equations very likely do not accurately account for the mass and energy conservation that actually occurs in the physical system. I suspect it is easily possible for this lack of completeness and complete understanding to be responsible for several W/m^2 of difference between the model equations and physical reality.

Is it not possible that the differences between the model equations and the actual physical phenomena and processes incur errors on the order of a few W/m^2? Again, this might be a small number relative to the macroscopic energy balance for the planet, but as equilibrium is approached, and the imbalance is accumulated over hundreds of years of time in a calculation, significant differences are very likely possible. It is an important issue that the level of incompleteness and imbalance in the modeling at the continuous-equation level, relative to physical reality, must be significantly less than the physical imbalances that are driving the planet toward a new equilibrium state.

Finally we come to, as we always do, the fact that the numbers are the results of numerical solution methods. Ensuring strict accounting and conservation of the energy distributions within the system requires extremely close attention to how the numerical methods are developed and implemented. An example of how easy it is to overlook important details is given by the usual practice of numerically integrating different parts of a model system using different time steps. A related issue is calculations using parallel-computing capabilities through various approaches to domain decomposition. Exchanges of mass and energy at interfaces between subsystems present another opportunity to overlook mass and energy conservation requirements. Generally these must be evaluated at the same time-step level in order to ensure strict conservation.
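A toy sketch of the multi-rate time-stepping pitfall (entirely my own construction; no GCM does exactly this): two reservoirs exchange energy through a common interface flux, but one side evaluates the flux at a coarser time-step level. Each subsystem's own update looks locally reasonable, yet the summed energy drifts.

```python
k, dt, nsteps, substeps = 0.5, 0.01, 1000, 5
E1, E2 = 10.0, 0.0                  # two energy reservoirs coupled by a flux
total0 = E1 + E2

F_coarse = k * (E1 - E2)            # interface flux frozen at the coarse level
for n in range(nsteps):
    F_fine = k * (E1 - E2)          # flux re-evaluated every fine step
    E1 -= F_fine * dt               # subsystem 1 uses the fresh flux...
    E2 += F_coarse * dt             # ...subsystem 2 uses the stale one
    if (n + 1) % substeps == 0:
        F_coarse = k * (E1 - E2)    # refreshed only every `substeps` steps

drift = (E1 + E2) - total0
print(f"total-energy drift: {drift:+.4f}")   # nonzero: conservation violated
```

Evaluating both sides of the exchange with the same flux at the same time-step level makes the drift identically zero; the mismatch alone is what breaks strict conservation.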

It is important to note that while there are many ways to account for the mass and energy conservation of a given calculation, this process in no way ensures that the calculations are in accord with, and reflect, the actual mass and energy balances and conservation in the physical phenomena and processes. However, it is an important issue that the numerical imbalances must be significantly less than the physical imbalances that are driving the planet toward a new equilibrium state.

As a stable equilibrium state, the radiative equilibrium state for example, is approached, no imbalance can be counted as small and dismissed. The imbalance between physical reality and the continuous model equations is almost certain to be a genuine problem. The effects of the viscous dissipation, constantly acting to increase the thermal energy of the atmosphere, are physical reality; I am uncertain of the actual physical value. The imbalances introduced by numerical solution methods are very likely a problem in some GCM models/codes. This problem has been discussed as recently as 2003. Small imbalances acting constantly over long periods of time cannot be ignored as equilibrium states are approached.

Here is a curious side issue that was discussed in this olde paper by H. A. Dwyer from 1973, located here. The results given in the paper indicate that the effects of his estimate of the power-generation activities of humans can easily be seen in the calculations with his model. He used 15.0 x 10^18 BTU/yr (1.58 x 10^22 Joules/yr) as the ‘heat generation’ by mankind over a period of 100 years. Energy conversion activities by humans are another one of those processes that are constantly occurring and adding energy into the climate system.

Worldwide energy consumption by the human race is over 446 Quadrillion BTUs at the present time. This is equivalent to 131,400 TWhr or 471,000 PJ (= 10^15 J) per year. If we take an average efficiency to be 33%, the total energy conversion is about 3 times the consumption, or 1,413,000 PJ per year = 1.413 x 10^21 J/year. This is within a factor of 10 of the value used by Dwyer. (While we consume about one-third of the total conversion, all the energy converted will always reside in the climate system until it is lost to space.) Can this be another source of internal energy conversion that cannot be ignored over long time scales as an equilibrium state is approached?

The thermal state of the planet, as measured by the temperature, is a strong function of the thermodynamic processes occurring within the climate system. The temperature distribution near the surface is determined by the transport and storage of the energy additions to the system.

Chaos and Butterflies
Dissipative systems (physical or mathematical) will have several attractors (assuming they exist). The typical Lorenz-like ODE systems are dissipative, and conserve energy in the dissipationless limit. Thus these systems (the original 1963 and the later 1984/1990 systems are examples) have more than a single attractor. The dissipative and energy-conserving-in-the-dissipationless-limit properties are generally considered necessary in order for systems of ODEs to be Lorenz-like.

The oscillatory/periodic-like/aperiodic response seen in calculational results from these systems remains bounded due primarily to the linear terms on the right-hand sides of the equations. These are damping terms in the equations; the resistance offered to fluid motions, for example. The plots of the dependent variables from Lorenz-like ODEs ‘look’ like bounded numerical instabilities. If the coefficients on these damping terms are increased slightly above the usual default values of unity, the system can be shown to go from slightly under-damped to massively over-damped, with the trajectories smoothly approaching equilibrium states. The effect is very dramatic in graphical plots.

It is possible to find values of the coefficients that produce periodic responses having almost no change in frequency or amplitude. And some values will bring the initial trajectory motion away from the initial point to a screeching halt at a new equilibrium point. In general, the chaotic response properties are lost and deterministic predictability returns.

I suspect, but haven’t yet done any calculations, that coefficient values less than unity might lead to unbounded responses. The non-linear terms in the equations might also provide contributions that assist in maintaining bounded-ness. It would also be of interest to investigate the effects of modeling the momentum-equation resistance as a non-linear function such as in turbulent flows.
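A quick numerical sketch of the damping behavior described above (my own construction: the Lorenz 1963 system with an extra factor c scaling each of the linear damping terms; c = 1 recovers the classic chaotic case, and larger c over-damps the system so the trajectory settles to an equilibrium point):

```python
def lorenz_damped(x, y, z, sigma=10.0, rho=28.0, beta=8.0/3.0, c=1.0):
    # Classic Lorenz 1963 right-hand sides, with the linear (damping)
    # terms -x, -y, -beta*z each scaled by the factor c.
    return (sigma * (y - c * x),
            x * (rho - z) - c * y,
            x * y - c * beta * z)

def z_spread(c, dt=0.01, nsteps=5000):
    """Integrate with RK4 and return the spread of z over the late trajectory."""
    x, y, z = 1.0, 1.0, 1.0
    zs = []
    for _ in range(nsteps):
        k1 = lorenz_damped(x, y, z, c=c)
        k2 = lorenz_damped(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1], z + 0.5*dt*k1[2], c=c)
        k3 = lorenz_damped(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1], z + 0.5*dt*k2[2], c=c)
        k4 = lorenz_damped(x + dt*k3[0], y + dt*k3[1], z + dt*k3[2], c=c)
        x += (dt/6.0) * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += (dt/6.0) * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        z += (dt/6.0) * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
        zs.append(z)
    late = zs[nsteps // 2:]
    return max(late) - min(late)

print("c=1 z-spread:", z_spread(1.0))   # large: bounded chaotic oscillation
print("c=5 z-spread:", z_spread(5.0))   # tiny: over-damped, settles to a point
```

With c = 1 the late-time z keeps sweeping over the attractor; with c = 5 the trajectory is brought to the "screeching halt" described above. The effect of c < 1 is left untested here, matching the speculation in the text.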

The numerical solution methods used in NWP and AOLGCM models/codes have both implicit and explicit numerical damping in addition to the physical damping contained in the basic-equation models for mass, momentum, and energy for the fluid motions. These systems are dissipative and might be energy-conserving in the dissipationless limit. I do not know whether, if the ‘momentum dissipation’ terms are not present, the codes can even maintain bounded responses. The effects of the numerical damping relative to the ‘chaotic response’ properties of the continuous equations are not known, of course.

When performing an initial-value sensitivity analysis, specified initial conditions are varied and the responses of the calculations are observed. It is not possible to know a priori which attractor of a dissipative system a given trajectory will approach. It seems that this property means that even long-range calculations of responses cannot be assumed/hypothesized to be reliable. Again, all this assumes that attractor(s) exist.
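A minimal illustration of initial-value sensitivity, using the standard Lorenz 1963 system (my own sketch; the parameter values are the classic ones): two trajectories that start 1e-8 apart end up separated by a distance comparable to the size of the attractor itself.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_final(s, dt=0.01, nsteps=3000):
    """Integrate the Lorenz system with RK4 and return the final state."""
    for _ in range(nsteps):
        k1 = lorenz(s)
        k2 = lorenz(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = lorenz(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = lorenz(tuple(s[i] + dt * k3[i] for i in range(3)))
        s = tuple(s[i] + (dt / 6.0) * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                  for i in range(3))
    return s

a = rk4_final((1.0, 1.0, 1.0))
b = rk4_final((1.0 + 1e-8, 1.0, 1.0))        # perturb one component by 1e-8
sep = sum((a[i] - b[i])**2 for i in range(3)) ** 0.5
print("separation after t = 30:", sep)       # grows to the attractor's scale
```

The final states are deterministic but effectively unpredictable from the initial data, which is the operational meaning of the sensitivity discussed above.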


June 3, 2007 - Posted by | Uncategorized |


  1. Here are a few more references about dissipation and power in the atmosphere. Some are classics, some more recent.

    J. M. Gordon and Y. Zarmi, “Wind Energy as a Solar-Driven Heat Engine: A Thermodynamic Approach”, American Journal of Physics, Vol. 57, No. 11, pp. 995-998, 1989.

    An upper bound on annual average energy in the Earth’s winds is calculated via the formulation of finite-time thermodynamics. The Earth’s atmosphere is viewed as the working fluid of a heat engine where the heat input is solar radiation, the heat rejection is to the surrounding universe, and the work output is the energy in the Earth’s winds. The upper bound for the annual average power in the Earth’s winds is found to be 17 W/m^2, which can be contrasted with the actual estimated annual average wind power of 7 W/m^2. Our thermodynamic model also predicts the average extreme temperatures of the Earth’s atmosphere and can be applied to wind systems on other planets.

    F. K. Ball, “Viscous Dissipation in the Atmosphere”, Journal of Meteorology, Vol. 18, pp. 553-557, 1961.

    Estimates of atmospheric viscous dissipation at various heights from near the ground to within the stratosphere have been published. All of these results, that are known to the author, are presented on a single diagram together with some new estimates made from the wind records of ordinary Sheppard-type cup anemometers mounted on a radio mast. These latter results confirm that the viscous dissipation is proportional to the cube of the wind speed at a given site and heights (for heights below about 100 m). The big discrepancy between estimates based on the similarity theory of diffusion and other estimates is discussed.

    Alexis De Vos and Gust Flater, “The Maximum Efficiency of the Conversion of Solar Energy into Wind Energy”, American Journal of Physics, Vol. 59, No. 8, pp. 751–754, 1991.

    In the present paper, the Gordon and Zarmi model is applied for the conversion of solar energy into wind energy, in such a way that simple calculations lead to a universal result: The upper bound of the conversion efficiency of solar energy into wind energy equals 8.3%.

    Hugh W. Ellsaesser, “A Climatology Of Epsilon (Atmospheric Dissipation)”, Monthly Weather Review, Vol. 97, No. 6, pp. 415-423, 1969.

    Kolmogorov’s structure functions for the longitudinal and transverse components of locally homogeneous isotropic turbulence are combined vectorially to obtain an expression which permits the evaluation of epsilon (atmospheric dissipation rate) from climatological data. This is used to derive climatological patterns of Epsilon in the free atmosphere from Crutcher’s upper wind statistics of the Northern Hemisphere. The latter are combined with Kung’s boundary layer values to estimate the distribution of total atmospheric dissipation over the Northern Hemisphere.

    Ernest C. Kung, “Kinetic Energy Generation And Dissipation In The Large-Scale Atmospheric Circulation”, Monthly Weather Review, Vol. 94, No. 2, pp. 67-82, 1966.

    The kinetic energy budget and dissipation are studied in their various partitionings, using daily aerological (wind and geopotential) data from the network over North America for six months. The total kinetic energy dissipation is partitioned into vertical mean flow and shear flow and also into planetary boundary layer and free atmosphere. Furthermore, the dissipations in the vertical mean flow and shear flow are partitioned separately into components contributed by the boundary layer and free atmosphere. Two important terms in the total kinetic energy equation in determining the total dissipation are the generation and outflow. Two important terms in the mean flow kinetic energy equation in determining the mean flow dissipation are the conversion between the vertical shear and mean flows and the outflow. The mean flow and shear flow dissipations seem to have numerical values of the same order of magnitude. The evaluated boundary layer dissipation and free atmosphere dissipation indicate that the latter is at least as important as the former. It is also shown that the mean flow dissipation is mainly contributed from the free atmosphere while the shear flow dissipation is contributed from the boundary layer and free atmosphere in the same order of magnitude. The evaluated dissipation values and related kinetic energy parameters are presented and examined in detail. Of special interest in this study is the direct evaluation of the kinetic energy generation due to the work done by the horizontal pressure force. Daily variation of the generation at different pressure levels seems to suggest three different modes of the generation cycle in the upper, mid, and lower troposphere. Clear vertical profiles of the generation from the surface to the 100-mb. level are obtained; it is shown that strong generation takes place in the upper and lower troposphere while the generation in the mid troposphere is very weak. 
It is also suggested that there may be an approximate balance of the kinetic energy generation and dissipation in the boundary layer.

    Ernest C. Kung, “Large-Scale Balance Of Kinetic Energy In The Atmosphere”, Monthly Weather Review, Vol. 94, No. 11, pp. 627-640, 1966.

    The vertical distribution and seasonal variation of the kinetic energy balance of the atmosphere are studied. From 11 months’ daily wind and geopotential data during 1962 and 1963 over North America, the generation due to the work done by the horizontal pressure force, the local change, the horizontal outflow, and the vertical transport are evaluated for 20 pressure layers from the surface to 50 mb. The dissipation is then obtained as the residual to balance the kinetic energy equation. They decrease gradually to a minimum in the mid-troposphere, increase again to the second maximum in the upper part of the atmosphere, then decrease again farther upward. The generation and dissipation are approximately balanced in the lower troposphere, particularly in the boundary layer, for the large-scale domain of analysis. The generation and dissipation of the kinetic energy are significantly large both in the lower troposphere and in the upper part of the atmosphere. However, in view of the amount of the kinetic energy contained in different portions of the atmosphere, the energy generation and dissipation are most intense in the lower troposphere, especially in the boundary layer. The efficiency of the dissipation in different portions of the atmosphere is also examined in terms of the depletion time. The depletion time is orders of magnitude shorter in the boundary layer than in the mid-troposphere. A seasonal change of the energetics is depicted for the one-year period by means of the pressure-time cross sections.

    Lloyd L. Schulman, “A Theoretical Study of the Efficiency of the General Circulation”, Journal of the Atmospheric Sciences, Vol. 34, No. 4, pp. 559-580, 1977.

    The hypothesis that the atmosphere may be constrained to operate at nearly maximum efficiency is examined. If atmospheric efficiency is defined as the ratio of the rate of production of kinetic energy to the rate at which solar energy reaches the top of the atmosphere, the problem becomes equivalent to finding the maximum rate at which diabatic heating generates available potential energy (APE), which can be estimated independently of any frictional processes. Since diabatic heating includes long- and shortwave radiative heating, the vertical flux of sensible heat by the small-scale eddies and the release of latent heat, this would entail finding the maximizing fields of temperature, water vapor, carbon dioxide, ozone, cloudiness, and surface wind speed. By specifying the relative humidity to be constant and less than 100%, by ignoring ozone as an atmospheric constituent, and by using the observed mixing ratio of carbon dioxide as basic simplifying assumptions, the release of latent heat and clouds are eliminated, and for a specified solar forcing the efficiency becomes a function of the temperature field alone.
    Observational studies indicate that the actual rate of APE generation is from 2-6 W m^-2, which corresponds to an atmospheric efficiency of about 1-2%. Experiments with a 5-level, 5-latitude model yield a maximum generation of APE near 12 W m^-2. A higher-resolution 5-level, 9-latitude model leads to a maximum generation near 10 W m^-2. The corresponding maximizing temperature field includes horizontal temperature gradients whose magnitudes decrease with height and an absence of superadiabatic lapse rates. The results are relatively insensitive to relative humidity, albedo, or surface wind speed, but do have a strong dependence on the sensible heat distribution scheme. These solutions suggest that the general circulation may indeed be operating at nearly its maximum efficiency.

    A. Heitor Reis and Adrian Bejan, “Constructal Theory of Global Circulation and Climate”, International Journal of Heat and Mass transfer, Vol. 49, No. 11-12, pp. 1857-1875, 2006.

    The constructal law states that every flow system evolves in time so that it develops the flow architecture that maximizes flow access under the constraints posed to the flow. Earlier applications of the constructal law recommended it as a self-standing law that is distinct from the second law of thermodynamics. In this paper, we develop a model of heat transport on the earth surface that accounts for the solar and terrestrial radiation as the heat source and heat sink and with natural convection loops as the transport mechanism. In the first part of the paper, the constructal law is invoked to optimize the latitude of the boundary between the Hadley and the Ferrel cells, and the boundary between the Ferrel and the Polar cells. The average temperature of the earth surface, the convective conductance in the horizontal direction as well as other parameters defining the latitudinal circulation also match the observed values. In the second part of the paper, the constructal law is invoked in the analysis of atmospheric circulation at the diurnal scale. Here the heat transport is optimized against the Ekman number. Even though this second optimization is based on very different variables than in the first part of the paper, it produces practically the same results for the earth surface temperature and the other variables. The earth averaged temperature difference between day and night was found to be approximately 7 K, which matches the observed value. The accumulation of coincidences between theoretical predictions and natural flow configuration adds weight to the claim that the constructal law is a law of nature.

    Adrian Bejan and A. Heitor Reis, “Thermodynamic Optimization of Global Circulation and Climate”, International Journal of Energy Research, Vol. 29, pp. 303–316, 2005.

    The constructal law of generation of flow structure is used to predict the main features of global circulation and climate. The flow structure is the atmospheric and oceanic circulation. This feature is modelled as convection loops, and added to the earth model as a heat engine heated by the Sun and cooled by the background. It is shown that the dissipation of the power produced by the earth engine can be maximized by selecting the proper balance between the hot and cold zones of the Earth, and by optimizing the thermal conductance of the circulation loops. The optimized features agree with the main characteristics of global circulation and climate. The robustness of these predictions, and the place of the constructal law as a self-standing principle in thermodynamics, are discussed.

    Comment by Dan Hughes | December 3, 2007 | Reply

  2. I have looked at the Bejan paper (the first, as the second is behind a fee).
    Actually, even if it only gives an order of magnitude of the dissipation, it was for me an opportunity to familiarize myself with the “constructal theory”, which has so far been only a vague concept for me.

    It looks rather puzzling and it would surely be of interest if you looked a bit at it, Dan.
    Actually that theory pretends to be nothing less than a new natural law, unknown until 1996 when the “constructal theory” was developed.
    It says that any non-equilibrium system will organise itself (not a very clear word) in such a manner that the access to the flows is the easiest.
    Another wording might be that the system structures itself so that the energy dissipation is minimum.

    In any case it wants to establish a law relating the geometry of the flows to some (dynamical) parameters, so that given the physical characteristics of the system and the constraints, the system will establish a flow structure that minimises/maximises some function.

    In this form it bears a close resemblance to the least-action principle and to the second law of thermodynamics, even if it pretends to be more general than both.
    Mathematically, at least in the paper referred to here, it is very primitive.
    It assumes that the general circulation is driven by the temperature difference between a polar area Ap at temperature Tp and a tropical area At at a temperature Tt.
    So you have the whole Earth thermodynamics defined by only 4 parameters.
    Then they establish a set of algebraic equations by using tons of coefficients, constants, and empirical relations, to finish with a relation containing only one variable x (= Ap/A).
    Applying the “constructal principle”, which means that the heat flows must be maximised, translates trivially into 2 partial derivatives = 0.
    From that they obtain 2 more relations and find 2 solutions for x.
    As this partitions the Earth into 3 zones, they have found the Hadley cells, Ferrel cells, and polar cells, whose locations match rather well with observation. Many other parameters also match more or less with observation.

    Now the puzzling part.
    The assumptions are trivially wrong – for example, assuming that a huge area from the tropics to the poles radiates at its spatial average temperature is wrong.
    Assuming that the cloud distribution is isotropic and intervenes only through a unique albedo constant is wrong.
    Unless I missed it, there is no distinction between continents and oceans, which is extremely wrong.
    All flows are defined by 4 parameters, which implies averaging over huge, non-homogeneous areas with big temperature differences (here yearly averages are taken).
    The list could go on.

    Yet by putting together all this inconsistent stuff with unanalysed errors, they obtain results that show correct orders of magnitude and compare favourably to observations.
    So the conclusion might be that, indeed, out-of-equilibrium systems display a very ROBUST behaviour at certain scales (here would come a very difficult question about scale invariance of this “principle”), so that their most fundamental features resist even the biggest errors, provided that only fundamental conservation laws are respected.
    If I had time, it would be interesting to see if the theory survives a more physical model of the Earth’s dynamics, by introducing e.g. 3 areas instead of 2 with a continents/oceans parametrisation, and whether it again finds the correct large-scale flow distribution in 3 cells.

    On a particular note, this “constructal theory” doesn’t seem to be very compatible with chaos theory.
    After a fast search I didn’t find any paper dealing with the question of whether the “constructal theory” and chaos theory mutually exclude each other or are complementary.
    That is also an interesting issue.

    Comment by Tom Vonk | December 11, 2007 | Reply

  3. Tom,

    Your summary is in agreement with my understanding of this application of constructal theory. Bejan and colleagues have been the very best at scale analysis and at reducing physical problems to the essence of the important physical phenomena and processes. There is a Web site here devoted to the theory and its applications, and see this interesting new application. The title of the latter book reminds me of the time when turbulence modeling attempted to enter the field of crowd dynamics.

    While the assumptions made in the analysis of global climate seem to be far too coarse, apparently the authors have retained the important controlling aspects. Of course this is not unusual, in that many very successful models take the very same approach. We only have to get the important dominating phenomena and processes almost correct in order to get ‘engineering’-level results. Bejan has proven to be unequaled with regard to this aspect of modeling inherently complex physical situations; see his textbooks and papers on thermodynamics, heat transfer, entropy generation/minimization, and of course constructal theory.

    Also note that for many applications we are mostly interested in what might be called ‘solution functionals’: characteristics and properties of the physical situation that are usually wrapped up into a single coefficient, often a dimensionless expression, that measures a large-scale aspect of the situation. Friction factors, heat transfer coefficients, and lift and drag coefficients are examples. These solution functionals map all the numbers calculated by the model equations into a single expression. As such, many of the nitty-gritty details do not have to be correct in order to get very close to the correct answer. This is one reason that the early years of 2-parameter turbulence modeling were so successful. The approach could calculate friction factors and heat transfer coefficients while at the same time the nitty-gritty, generally small-scale, parts were not correct. At one time I think the numerical values of the coefficients that go into 2-parameter turbulence models were considered to be universal. Now we know that, in effect, the correct answers for extremely limited classes of turbulent flows were basically built into the early models. Also recall the use of a trigonometric function to represent the velocity distribution in a flat-plate boundary layer; clearly incorrect, but it will give a pretty good answer for the friction factor.

    A closely related solution functional is the ‘global average temperature’. This single number has so much stuff mapped into it that it is some kind of ‘global solution functional’, where ‘global’ does not refer to the Earth, or a meta-functional. It is clear that uncountable numbers of nitty-gritty details can be wrong and something close to observed values might still be calculated. But that’s a whole other issue.

    I have not thought much about constructal theory and its compatibility with chaos. But constructal theory brings purpose to the somewhat related field of fractals. The purely mechanical-mathematical generation of fractal-like structures that appear in nature says nothing about the fundamental ‘how’ and ‘why’ behind the structures. They ‘look like’ natural structures and that’s about it. Constructal theory brings the ‘how’ and ‘why’ to the table. And that is not surprising, as everything in nature has a purpose that is far more fundamentally basic and important than the mere concepts of shape and structure. Finding those reasons seems to have been one of the driving forces behind the development of constructal theory.

    One aspect of chaos and climate that I have yet to fully grasp is as follows. It is my understanding that ‘weather is chaotic, but climate is not, but a given trajectory in a climate calculation is’. I find that to be very confusing. It is very likely that I do not yet have the correct statement of the situation. However, I do know that such hypotheses cannot be based on numbers generated by olde legacy computer programs that attempt to model and calculate inherently complex, coupled, multi-scale physical phenomena and processes over enormous and geometrically complex spatial scales and equally large temporal ranges. Ain’t gonna happen.

    Comment by Dan Hughes | December 13, 2007 | Reply

  4. “Worldwide energy consumption by the human race is over 446 Quadrillion BTUs at the present time. This is equivalent to 131,400 TWhr or 471,000 PJ (= 10^15 J) per year. If we take an average efficiency to be 33%, the total energy conversion is about 3 times the consumption, or 1,413,000 PJ per year = 1.413 x 10^21 J/year.”

    The 446 quads are already in units of heat, so there is no need to multiply by three.

    Comment by MDA | March 15, 2008 | Reply

  5. MDA, the factor of 3 is to account for the efficiency (more correctly, the inefficiency) of the energy-conversion processes. It does not, and actually cannot, change the units. It’s a rough average value. Maybe I don’t understand your comment?

    Comment by Dan Hughes | March 15, 2008 | Reply
