New Computer Fund

Wednesday, March 18, 2015

How solvable is a problem?

If you happen upon this blog you will find a lot of posts that don't resemble theoretical physics.  There is a theoretical basis for most of my posts, but it isn't your "standard" physics approach.  There are hundreds of approaches that can be used, and you really never know which approach is best until you determine how solvable a problem is.

One of the first approaches I used with climate change was based on Kimoto's 2009 paper "On the confusion of Planck feedback parameters".  Kimoto used a derivation of the change in temperature with respect to "forcing", roughly dT = dF/4, i.e. about 4 Wm-2 of flux change per degree, which has some limits.  Since F is actual energy flux, not forcing, you have to consider that different types of energy flux have more or less impact on temperature.  Less impact would come from latent and convective "cooling", which is actual energy transfer to another layer of the problem, and from temperatures well above or below "normal".  The factor of 4 implies a temperature at which the flux changes by exactly 4 Wm-2 per degree.  Depending on your required accuracy, T needs to stay in a range where that factor of 4 doesn't create more inaccuracy than you can tolerate.  So the problem is "solvable" only within a certain temperature/energy range that depends on your required accuracy.  If you have a larger range you need to adjust your uncertainty requirements or pick another method.
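If you want to see how narrow that range is, here is a quick Python sketch, my own back-of-the-envelope check rather than anything out of Kimoto's paper:

    # Back-of-the-envelope check: how far does "about 4 Wm-2 per degree" hold?
    # From the Stefan-Boltzmann law F = sigma*T^4, the sensitivity is
    # dF/dT = 4*sigma*T^3, which is anything but constant over the atmosphere.
    sigma = 5.67e-8  # Wm-2 K-4

    for t_c in (-80, -18, 0, 15, 30, 50):
        t_k = t_c + 273.15
        print(f"{t_c:>4} C: dF/dT = {4 * sigma * t_k**3:.2f} Wm-2 per degree")

The "exactly 4" only shows up near the effective radiating temperature of about -18 C.  At -80 C the sensitivity is under 2 Wm-2 per degree and at 50 C it is closer to 8, which is why the usable range is narrow.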

You can modify the simple derivation to dT = (a*dF1 + b*dF2 + ... + n*dFn)/4, which is what Kimoto did to compare the state-of-the-science estimates at the time of radiant, latent and sensible energy flux.  You can do that because energy is fungible, but you will always have an unknown uncertainty factor because, while energy is fungible, the work that it does is not.  In a non-linear dissipative system, some of that work could be used to store energy that can reappear on a different time scale.  You need to determine the relevant time scales required to meet the accuracy you need so you can have some measure of confidence in your results.
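A toy version of that weighting, with placeholder coefficients I made up just to show why the partitioning matters (they are not Kimoto's values), looks like this:

    # A toy version of Kimoto-style weighting with made-up coefficients,
    # just to show why the partitioning matters.  A Wm-2 of latent or
    # sensible flux does not have to move surface temperature the way a
    # Wm-2 of radiant flux does.
    def dT(dF_components, weights, planck=4.0):
        # planck ~ 4 Wm-2 per degree, the same rough factor as above
        return sum(w * dF for w, dF in zip(weights, dF_components)) / planck

    weights = (1.0, 0.5, 0.8)                 # placeholder a, b, c values
    print(dT((3.7, 0.0, 0.0), weights))       # all radiant         -> ~0.93 C
    print(dT((1.7, 1.5, 0.5), weights))       # same 3.7 Wm-2 split -> ~0.71 C

Same 3.7 Wm-2 total, different temperature response, which is the point about energy being fungible while the work it does is not.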

Ironically, Kimoto's paper was criticized for making some of the same simplifying assumptions that climate science uses to begin with.  The assumptions are only valid for a small range, and you cannot be sure how small that range needs to be without determining the relevant time scales.

In reality there are no constants in the Kimoto equation.  Each assumed constant is a function of the other assumed constants.  You have a pretty wicked partial differential equation.  With three or more variables it becomes a version of the n-body problem, which should have a Nobel Prize attached to its solution.  I have absolutely no fantasies about solving such a problem, so I took the "how solvable is it" approach.


The zeroth law of thermodynamics and the definition of climate sensitivity come into conflict when you try to do that.  The range of temperatures in the lower atmosphere is so large in comparison to the roughly 4 Wm-2 per degree relationship that you automatically have +/- 0.35 C of irreducible uncertainty.  That means you can have a super accurate "surface" temperature, but the energy associated with that temperature can vary by more than one Wm-2.  If you use Sea Surface Temperature, which has a smaller range, you can reduce that uncertainty, but then 30% of the Earth is not being considered, resulting in about the same uncertainty margin.

If you would like to check this, pick some random temperatures in a range from -80 C to 50 C and convert them to energy flux using the Stefan-Boltzmann law.  Then average both sets and convert back to compare.  Since -80 C has a S-B energy of 79 Wm-2 versus 618 Wm-2 for 50 C, neglecting any latent energy, you can have a large error.  In fact the very basic greenhouse gas effect is based on a 15 C (~390 Wm-2) surface temperature versus 240 Wm-2 (~ -18 C) of effective outgoing radiant energy, along with the assumption that there is no significant error in this apples-to-pears comparison.  That comparison has roughly a +/- 2 C and 10 Wm-2 uncertainty on its own.  That in no way implies there is no greenhouse effect, just that most of the simple explanations do little to highlight the actual complexity.
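That check only takes a few lines.  Here is one way to do it, assuming a uniform spread of temperatures and, as above, neglecting latent energy:

    import random

    # The check described above: convert random temperatures to S-B flux,
    # then compare averaging the temperatures with averaging the fluxes.
    sigma = 5.67e-8  # Wm-2 K-4
    random.seed(1)
    temps_c = [random.uniform(-80, 50) for _ in range(1000)]
    fluxes = [sigma * (t + 273.15)**4 for t in temps_c]

    mean_temp = sum(temps_c) / len(temps_c)
    temp_of_mean_flux = (sum(fluxes) / len(fluxes) / sigma)**0.25 - 273.15

    print(f"average of the temperatures:     {mean_temp:6.2f} C")
    print(f"temperature of the average flux: {temp_of_mean_flux:6.2f} C")

With that spread the two answers differ by several degrees, and that gap is pure apples-to-pears conversion error before any instrument or sampling issues come into play.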

Determining that the problem likely cannot be solved to better than +/- 0.35 C of accuracy using these methods is perfectly valid theoretical physics and should have been a priority from the beginning.


If you look at the TOA imbalance you will see an uncertainty of +/- 0.4 Wm-2, and because of the zeroth law issue that could just as easily be in C as well.  The surface imbalance uncertainty is larger, +/- 17 Wm-2, but that is due more to poor approaches than to physical limits.  The actual physical uncertainty should be closer to +/- 8 Wm-2, which comes from the range of water vapor phase change temperatures.  Lower cloud bases with more cloud condensation nuclei can have a lower freezing point.  Changing salinity changes freezing points.  When you consider both you have about +/- 8 Wm-2 of "normal" range.

Since that +/- 8 Wm-2 is "global", you can compare it to the combined surface flux: 396 Wm-2 radiant, 98 latent and 30 sensible, which total 524 Wm-2, about half of the incident solar energy available.  I used my own estimates of latent and sensible flux, based on Chou et al. 2004, by the way.  If there had not been gross underestimations in the past, the Stephens et al. budget would reflect that.  This is part of the "scientific" inertia problem.  Old estimates don't go gracefully into the scientific good night.

On the relevant time scale you have solar to consider.  A very small reduction in solar TSI of about 1 Wm-2 for a long period of time can result in an imbalance of 0.25 to 0.5 Wm-2, depending on how you approach the problem.  With an ocean approach, which has a long lag time, the imbalance would be closer to 0.5 Wm-2, and with an atmospheric approach with little lag it would be closer to 0.25 Wm-2.  In either case that is a significant portion of the 0.6 +/- 0.4 Wm-2, isn't it?
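The low end of that range is mostly geometry.  Here is a sketch using nothing but the standard divide-by-four spreading of TSI over the sphere and a nominal 0.30 albedo; the ocean end of the range depends on how much of the deficit the ocean has integrated, which a couple of lines can't capture:

    # Geometric dilution of a TSI change: the intercepted solar beam is
    # spread over four times its area when averaged over the whole sphere.
    dTSI = 1.0                            # Wm-2 reduction at the top of the atmosphere
    albedo = 0.30                         # nominal planetary albedo (my assumption)
    per_sphere = dTSI / 4.0               # ~0.25 Wm-2 spread over the globe
    absorbed = per_sphere * (1 - albedo)  # ~0.18 Wm-2 actually absorbed
    print(per_sphere, absorbed)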

Ein = Eout is perfectly valid as an approximation, even in a non-equilibrium system, provided you have a reasonable time scale and some inkling of realistic uncertainty in mind.  That time scale could be 20,000 years, which makes a couple hundred years of observation a bit lacking.  If you use paleo to extend your observations you run into the same +/- 0.35 C minimum uncertainty, and if you use mainly land based proxies you can reach that +/- 8 Wm-2 uncertainty because trees benefit from the latent heat loss in the form of precipitation.  Let's face it, periods of prolonged drought do tend to be warmer.

Paleo, though, has its own cadre of over-simplifiers.  When you combine paleo reconstructions from areas that have a large range of temperatures, the zeroth law still has to be considered.  For this reason paleo reconstructions of ocean temperatures, where there is less variation in temperature, would tend to have an advantage, but most of the "unprecedented" reconstructions involve high latitude, higher altitude regions with the greatest thermal noise, which represent the smallest areas of the surface.  Tropical reconstructions, which represent the majority of the energy and at least half of the surface area of the Earth, paint an entirely different story.  Obviously, on a planet with glacial and interglacial periods the interglacials would be warmer, and if the general trend in glacial extent is downward, there would be warming.  The question though is how much warming and how much energy is required for that warming.

If this weren't a global climate problem, you could control conditions to reduce uncertainty and do some amazing stuff, like ultra-large-scale integrated circuits.  With a planet, though, you will most likely have a larger-than-you-would-like uncertainty range, and you have to be smart enough to accept that.  Then you can nibble away at some of the edges with combinations of different methods that have different causes of uncertainty.  Lots of simple models can be more productive than one complex model if they use different frames of reference.

One model so simple it hurts is "average" ocean energy versus "estimated" Downwelling Long Wave Radiation (DWLR).  The approximate average effective energy of the oceans is 334.5 Wm-2 at 4 C, and the average estimated DWLR is about 334.5 Wm-2.  If the oceans are sea ice free, the "global" impact of the average ocean energy is 0.71*334.5 = 237.5 Wm-2, or roughly the value of the effective radiant layer of the atmosphere.  There is a reason for the 4 C layer to be stable: the maximum density of fresh water occurs at about 4 C.  Adding salt varies the depth of that 4 C temperature layer, but not its value, and that layer tends to regulate average energy on much longer time scales since the majority of the ocean is below the 4 C layer.  Sea ice extent varies and the depth of the 4 C layer changes, so there is a range of values you can expect, but 4 C provides a simple, reliable frame of reference.  Based on this reference, a 3.7 Wm-2 increase in DWLR should result in a 3.7 Wm-2 increase in the "average" energy of the oceans, which is about 0.7 C of temperature increase, "all things remaining equal".
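The arithmetic behind those numbers, if you want to check them, is nothing but the Stefan-Boltzmann law:

    # Checking the numbers above with nothing but the Stefan-Boltzmann law.
    sigma = 5.67e-8
    T4 = 4 + 273.15                     # the 4 C reference layer in Kelvin
    F4 = sigma * T4**4                  # ~334.5 Wm-2
    global_share = 0.71 * F4            # ~237.5 Wm-2, near the effective radiant layer
    dT = 3.7 / (4 * sigma * T4**3)      # Planck-only response at 4 C, ~0.77 C

    print(f"S-B flux at 4 C:         {F4:.1f} Wm-2")
    print(f"ice-free global share:   {global_share:.1f} Wm-2")
    print(f"warming for +3.7 Wm-2:   {dT:.2f} C")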

Perhaps that is too simple or elegant to be considered theoretical physics?  Don't know, but most of the problem is setting up the problem so it can be solved to some useful uncertainty interval.  Using just the "all things remaining equal" estimates you have a range of 0.7 to 1.2 C per 3.7 Wm-2 increase in atmospheric resistance to heat loss.  The unequal part is the water vapor response, which, based on more recent and hopefully more accurate estimates, is close to the limit of positive feedback and in the upper end of its regulating feedback range.  This should make higher than 2.5 C "equilibrium" sensitivity very unlikely and reduce the likely range to 1.2 to 2.0 C per 3.7 Wm-2 of "forcing".  Energy balance model estimates are converging on this lower range, and they still don't consider the longer time frames required for recovery from prolonged solar or volcanic "forcing".
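One way to see where an "all things remaining equal" spread like that comes from is to compute the Planck-only response against different reference frames.  The flux/temperature pairings below are my guesses at the frames people actually use, not anyone's published values:

    # Planck-only response dT = dF / (4*F/T) for a few reference frames.
    # Which flux/temperature pair you call "the surface" changes the answer.
    frames = {
        "surface S-B (288 K, 390 Wm-2)":          (288.0, 390.0),
        "4 C ocean layer (277 K, 334.5 Wm-2)":    (277.15, 334.5),
        "effective radiating (255 K, 240 Wm-2)":  (255.0, 240.0),
        "288 K surface paired with 240 Wm-2 OLR": (288.0, 240.0),
    }
    dF = 3.7
    for name, (T, F) in frames.items():
        print(f"{name}: {dF / (4 * F / T):.2f} C")

Pairing the 288 K surface with the 240 Wm-2 outgoing flux, which is exactly the kind of mixing Kimoto was writing about, gets you to the top of that range.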

If this were a "normal" problem it would be fun trying various methods to nibble at the uncertainty margins, but this is a "post-normal", as in abnormal, problem.  There is a great deal of fearful overconfidence involved that has turned into advocacy.  I have never been one to follow the panic stricken, as it is generally the cooler heads that win the day, but I must be an exception.  We live in a glass-half-empty society that tends to focus on the negatives instead of appreciating the positives.  When the glass-half-empties "solve" a problem that has never been properly posed, you end up where we are today.  If Climate Change is worse than they thought, there is nothing we can do about it.  If Climate Change is not as bad as they thought, then there are rational steps that should be taken.  The panic stricken are typically not rational.









