BIG NEWS Part VII — Hindcasting with the Solar Model

The Solar Series: I: Background | II: The notch filter | III: The delay | IV: A new solar force? | V: Modeling the escaping heat | VI: The solar climate model | VII: Hindcasting (you are here) | VIII: Predictions

“All models are wrong, but some are useful.” That’s how modelers talk (except perhaps some climate scientists).

The barriers to making a good climate model are many. The data are short, noisy, and adjusted, and many factors are simultaneously at work, some not yet well described. Climate modeling is in its infancy, yet billions of dollars rest on the assumption that CO2 will cause catastrophic warming, and the evidence that most recent warming was due to CO2 comes entirely out of models. It’s important to focus on the pea:

“No climate model that has used natural forcing only has reproduced the observed global mean warming trend” (IPCC 2007)

A crucial plank of that case is the modelers’ claim that “we can’t explain the current warming without CO2”. Current climate models assume that changes in solar radiation have a small immediate effect and that solar magnetic fields have no significant effect on Earth’s temperature. They do not consider the possibility that a solar effect may occur with an 11-year delay (equivalent to one solar cycle), despite the independent studies that suggest this. These GCMs cannot use CO2 to predict modern temperatures without amplifying feedbacks, for which evidence is sparse or even contradictory. They don’t predict the pause, the upper tropospheric temperatures, or the Medieval Warm Period.

The total climate model described below can reproduce graphs based on a CO2 model, such as the one used by GISS, but it can also produce graphs using the solar model developed in these posts, or a mix of both CO2 and solar. (This is the point where the solar assumption is dropped and tested.) The point here is simply to see whether there is a viable alternative to the CO2 model. It appears there is, which is not to say it’s finished, or that it can’t be improved, presented better, or tweaked. At this stage it’s crude, but it exists.

There are 23 well-funded, ambitious global climate models, developed by international teams over the last 30 years, and a huge effort has been made by PR teams to make those models look good. The model below is one person’s work over 18 months, with the aim of asking only “is this possible?” and “what can we learn?” The results are displayed with bare honesty and many caveats about how much (or how little) can be read into them.

No model, much less one whose predictions have not been tested, is proof of any hypothesis. But models are sometimes good tools to tell us where to look. The notch-delay solar model is a viable alternative to the current CO2 models. It matches most major turning points of temperature (something CO2 models have struggled to do), and is used here back to 1770, 100 years earlier than most. There is a definite weak period in the 1950-1980 era, where the atmospheric bomb test line resolves to have an improbably large effect.

You might think the idea that nuclear tests cooled the planet in the 1960s and ’70s is ridiculous. I certainly did. It’s something fans of CO2 theories have used to explain the cooling that CO2-based models can’t explain. Does it have legs? Hard to say, and worthy of a post on its own. But before you write it off, see John Daly’s site, which has an interesting discussion page comparing bombs to the Pinatubo eruption, and which points out that atomic bomb testing went on in the atmosphere until 1980 (thanks to the Chinese and French), despite the 1963 test ban treaty. Nuclear bombs contribute aerosol dust, and more than that, the dust is radioactive (a bit of a cosmic-ray effect?). You might think it would rain out quickly, but bombs of one megaton reach up to the stratosphere, above the clouds that rain. All up, 504 atmospheric nuclear explosions occurred between 1945 and 1980, with a total yield of 440 megatons (Fujii 2011). The debris hangs around: C14 radiocarbon levels in the atmosphere peaked in 1963, but the isotope stayed above natural levels for years, into the mid 1980s (Edwards 2012). Fujii (2011) suggests atmospheric tests caused the “global stagnation” of that era and says the effect should be included in GCMs. Maybe it isn’t as mad as it sounds?

The model has no aerosol component (which may or may not offset the cooling theoretically attributed to atmospheric bombs), nor does it include the Pacific Decadal Oscillation or lunar cycles. The anomaly may be resolved if the model is expanded, or maybe it just means the delayed force from the sun is not the major driver. As we’ll explain in the next post, we’ll probably all have a good idea within a few years.

Solar TSI appears to be a leading indicator for some other (probably solar) effect, which we are calling “force X” for now. If that factor, quantified by TSI, were fed into current climate models, then those models would work with less forcing from CO2. Perhaps they would have produced better long-term graphs, as well as fitting the recent pause and not requiring such a pronounced tropospheric hotspot. It might solve a lot of problems at once. Presumably projections of catastrophe would be subdued.

Lastly, compounding the many hindcast inaccuracies is the problem of inexplicable adjustments to temperatures (every skeptic will be wondering). A model trained on raw temperature curves (or older published datasets) may produce quite a different fit, which might be better or worse. For instance, if the land thermometer data from 1850 to 1978 exaggerates the general temperature rise, then the solar model will be too sensitive, because it trained (that is, computed its parameters) on this data and “thinks” the TSI changes caused that amount of temperature change. Ultimately we won’t know for a few years whether it is right. (Bring on those independent audits please!)

The theory of the “delay” will be tested soon. It is falsifiable. We’re putting it out there for discussion. We have issued no press release, we aren’t selling a product (we’ll give it all away soon), nor do we demand your tax money. Judge it accordingly.

The bottom line is that modern climate models do not include any delayed force from the Sun. Saying that models don’t work without CO2, and that no natural factors can explain the modern warming, is, and always was, a fallacy known as argument from ignorance. — Jo

Hindcasting

Dr David Evans, 24 June 2014, David Evans’ Notch-Delay Solar Theory and Model Home

In the previous posts we built the notch-delay solar model. Now we are going to test it.

The solar model is given the TSI record from 1749 (the start of monthly sunspot records), and it computes the corresponding temperature in each month from 1770 from just the TSI data for the current and previous months. Then we compare this “hindcast” with the measured temperatures. We also test the CO2 model to compare how it performs, and we test a mix of the CO2 and solar models to show that they play together well.

Finally, we look at the significance (or not) of the solar model so far.

1 Our total climate model

The total climate model* includes the notch-delay solar model, a standard CO2 model (two compartments, with transient and equilibrium responses, computing temperature changes from the observed CO2 levels), a CFC model (based on Lu 2013), and an atmospheric nuclear bomb tests model (based on the megatons exploded in the atmosphere, from UN reports). It can also apply all the forcings from the GISS Model E, a mainstream climate model that released its forcings publicly in 2011, notably volcanoes, black carbon, snow albedo, and land use.

All these models can be switched on or off in any pattern within our total climate model. The total climate model has an optimizer to fit the model’s temperature output to measured temperatures, thus finding a set of optimal parameters. 
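
For intuition, the optimizer’s job can be pictured as a least-squares fit. The following minimal Python sketch is illustrative only: the actual model is an Excel/VBA spreadsheet with many more parameters, and the CO2 and temperature numbers here are made up. It fits the single parameter of a toy CO2 model to notional measured temperatures:

```python
import numpy as np

# Toy CO2 model: dT = ECS * log2(CO2 / CO2_initial), where ECS is the
# equilibrium climate sensitivity (degC per doubling of CO2).
co2 = np.array([290.0, 310.0, 340.0, 400.0])      # ppm (hypothetical)
measured_dt = np.array([0.00, 0.33, 0.78, 1.58])  # degC anomalies (hypothetical)

x = np.log2(co2 / co2[0])                         # CO2 doublings since the start

# The "optimizer" here is just the closed-form least-squares solution
# for the one-parameter model dT = ECS * x.
ecs = np.sum(x * measured_dt) / np.sum(x * x)
```

With these made-up numbers the fitted ECS comes out near 3.4 °C per doubling; the real optimizer does the same kind of thing over many parameters at once.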

We use composite TSI and temperature records for the measured TSI and temperature, as in the previous post. The composite temperature was put together from the main temperature datasets, instrumental back to 1880 then increasingly reliant on proxies, mainly the huge proxy study by Christiansen and Ljungqvist in 2012. Similarly the composite TSI record was constructed out of the main TSI datasets, using measured data where possible.

2 What if CO2 was the main driver?

To show how our “total climate model” works, let’s first fit a CO2 model to the observed temperatures, assuming there is no delayed link between TSI and temperatures (that is, the mainstream assumption).

Let’s run the CO2 model with solar input as per the GISS model (that is, the immediate, direct effect of changes in TSI), with the volcanoes, black carbon, snow albedo and land use also from GISS, and the CFCs. The CO2 model was fitted to the measured temperatures and found to have an equilibrium climate sensitivity (ECS) of 3.4°C, agreeing with the IPCC’s central estimate of 3.3°C. The carbon dioxide theory fits the measured temperatures since 1800 fairly well in a very smoothed sense:

 

Figure 1: Total climate model without the solar model. It includes immediate warming due to changes in TSI as per the mainstream “GISS Model E” climate model. Thus, most of the warming must come from carbon dioxide. The estimated equilibrium climate sensitivity is 3.4°C, close to the central estimate of 3.3°C by the IPCC.

The CO2 model produces a smooth increase in temperature, echoing the smoothly increasing CO2 concentration. Carbon dioxide by itself cannot begin to explain the jiggles in temperature on time scales from one to 10 years, so the carbon dioxide theory calls these jiggles “natural variability”—essentially meaning the bits they cannot explain.

3 What if solar effects were the main driver?

Now let’s run the notch-delay solar model, without any contribution from CO2 or CFCs. In other words, we are running the solar model under the solar assumption, that the recent global warming was associated almost entirely with solar radiation and had no dependence on CO2. As explained at the start of these posts, we set out to build a solar model that could account for the recent global warming under that assumption.

So, we are now testing the proposition that the recent global warming could have been mainly associated with TSI rather than CO2.

There is monthly TSI data from 1749, when the SIDC monthly sunspot records start; sunspot counts are a decent proxy for TSI, and along with Lean’s yearly reconstruction of TSI from sunspots they are the only components of the composite TSI from 1749 to 1882. The step response of the notch-delay solar model takes about 15 years or more to fully play out, so the model takes 20 years or so to spin up, and we begin the simulation in 1770. (During the Maunder Minimum, from about 1660 to 1705, there were almost no sunspots, so the solar model has no way of estimating force X; it cannot really be expected to work before about 1720 at the earliest.)

Each monthly temperature computed by the solar model is computed only from the TSI data for previous months and the current month. This is the only data input to the solar model. We then add temperature changes due to volcanoes and so on from the other models, to form the temperature record computed by the total climate model.

The solar model computes the temperature for a given month by adding together all the step responses of the TSI steps of the previous months (that is, by convolution). The change in TSI from one month to the next is a “step” in TSI, and the temperature response to that step is as shown in the step response of the solar model in Figure 4 of Post VI, appropriately scaled by the size of the monthly TSI step. Yes this method is a little slow and there are faster methods, but this way makes it clear that we are using the step response and previous TSI data only—and anyway computers are faster these days, and the data series here have only a few thousand points.
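
A minimal sketch of that convolution, in Python rather than the spreadsheet’s VBA (the function and array names are ours, and the unit-step response below is a stand-in, not the model’s actual step response from Figure 4 of Post VI):

```python
import numpy as np

def hindcast_temperature(tsi, step_response):
    """Sum the scaled step responses of all previous monthly TSI steps.

    tsi           -- monthly TSI values, oldest first
    step_response -- temperature response (degC) to a unit step in TSI,
                     sampled monthly from the moment of the step
    """
    steps = np.diff(tsi, prepend=tsi[0])   # monthly changes ("steps") in TSI
    temps = np.zeros(len(tsi))
    for m, dtsi in enumerate(steps):       # each step contributes from month m onward
        span = min(len(tsi) - m, len(step_response))
        temps[m:m + span] += dtsi * step_response[:span]
    return temps
```

Each month’s output uses only the current and earlier TSI values, which is the point: no information flows backwards in time.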

We previously found the parameters of the solar model by fitting its computed temperatures to the observed temperatures (while simultaneously fitting its transfer function to the empirical transfer function). So long as the temperature record computed by the solar model basically has the right shape, the fitting ensures it will match the measured temperatures in a general sense. How well it does therefore depends mainly on whether the model predicts the right shape of temperature curve, such as getting the turning points about right.

The notch-delay solar model fits the measured temperatures reasonably well:

Figure 2a: 1770–2013. Total climate model when driven only by solar radiation, with no warming due to carbon dioxide. The solar model output is not explicitly shown here, because having three lines close together (solar model, climate model, and observed temperatures) is too confusing, but it can be inferred by subtracting the other constituent models from the total climate model.

 

Figure 2b: 1900–2013. As for Figure 2a, but for the last century.

 

The major temperature trends are all reconstructed, with major turning points about right, and the sizes of the reconstructed changes are roughly as observed. Therefore the notch-delay solar model could provide an entirely solar explanation for recent global warming, without any significant warming due to rising CO2 or CFC levels.

The solar model reproduces a lot of jiggles, but gets the timing of them wrong as often as not, especially further back in time. This might simply be due to the fairly uncertain nature of the TSI data, which is reconstructed from sunspot numbers. Sunspot numbers themselves are uncertain because standards of what counted as a sunspot have varied over the years. And, as indicated by the physical interpretation of the delay in Post IV, the delay presumably is not constant but instead it is probably the length of the prevailing sunspot cycle, which averages 11 years but varies from 8 to 14 years. The solar model here is using a constant delay of 11 years. It doesn’t take much timing error to put an up-jiggle where there should be a down-jiggle. So there is some hope that, with better solar radiation data in future from satellites and a more complicated model with variable delay (the subject of future research perhaps, if there is sufficient interest), the solar model could explain some portion of “natural variability”.

Over the period of better TSI data from 1610, the TSI was clearly at a maximum from about 1950 to 2000. However the temperature kept increasing during this period, even though TSI plateaued. The delay in the solar model is 11 years, which pushes back that plateau from 1960 to 2010, but that is not enough to explain why the total climate model reconstructs rising temperatures throughout this period when it is based on the solar model and omits the CO2 and CFC models. Here the output of the solar model is explicitly shown:

 

Figure 3: The solar model from 1900 as in Figure 2, but with the solar model output explicitly shown (in pink). From the 1950s through the 1990s (but mainly the 1960s), the solar model alone computes temperatures significantly warmer than actually occurred. In the total climate model this is counteracted by global cooling due to the atmospheric nuclear bomb tests, which put fine reflective dust into the atmosphere and apparently caused a mini-nuclear winter.

 

The answer found by curve fitting the total climate model to the observed temperatures is that global cooling caused by the atmospheric nuclear bomb tests may have counteracted the warming associated with the stronger TSI. This initially came as a great surprise to us, because the nuclear data had only been added as a bit of a joke and for completeness, but after a bit of research it started to look kind of plausible. The tests, conducted from 1945 to 1980 but mainly before 1963, put up fine dust that stayed high up in the atmosphere for years, reflecting sunlight back into space and lowering the incoming radiation [Fujii, 2011], and also dropping down radioactive nuclei that might seed clouds. Because the nuclear dust is in the stratosphere, there is no rain to wash it out. The required cooling from the tests is about 0.5°C at its peak in 1963, the year that the USA and the USSR agreed to discontinue atmospheric testing. (If the solar model is too sensitive because the warming in the land thermometer records is exaggerated, then less cooling is required.)

While this is only an answer found by numerically piecing together the test yield data with the output of the solar model and the observed temperatures, it fits. Maybe the nuclear winter hypothesis is partly correct. We feel it is likely to overestimate the effect.

Alternative causes for a cooling influence during the 1950s to 1990s could be pollutant aerosols and/or whatever caused global dimming, or even the Pacific Decadal Oscillation (PDO). With no data that quantifies their effects, the total climate model only had the nuclear bomb yield data to work with, but it is remarkable that the piece that fits the puzzle quite well is the atmospheric nuclear bomb test data.

4 Mix of CO2 and solar

There are now two solutions to the climate question:

  • If we assume global warming is mainly due to CO2 then we get the CO2 theory, and it fits the measured temperatures from 1800 (though not before).
  • If we assume that global warming is mainly associated with changes in TSI then we get the notch-delay solar model, which also fits the measured temperatures from 1800.

Obviously both assumptions cannot be true, but it may be that the true solution is a mix of both models—such as 40% of one model and 60% of the other. If both solutions fit the measured temperatures on their own, then any linear mix will also fit the data. Here is an example:
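
Under the linearity assumption, mixing the two solutions is just a weighted sum of their temperature outputs. A tiny Python illustration (names and numbers are ours; in Figure 4 the CO2 and other models were refitted after scaling the solar output, but the superposition idea is the same):

```python
def mixed_temperatures(solar_dt, co2_dt, solar_fraction):
    """Superpose the two candidate solutions: a linear mix of the monthly
    temperature changes from the solar model and from the CO2 model.
    solar_fraction = 1.0 is the pure solar solution, 0.0 the pure CO2 one."""
    return [solar_fraction * s + (1.0 - solar_fraction) * c
            for s, c in zip(solar_dt, co2_dt)]

# e.g. a 70% solar / 30% CO2 mix of two (made-up) anomaly series
mix = mixed_temperatures([0.2, 0.4], [0.1, 0.5], 0.7)
```

If both series fit the observations on their own, any such mix fits too, which is why the data alone cannot pick the proportions.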

 

Figure 4: Total climate model when driven by a mix of solar radiation and carbon dioxide. The temperature changes computed by the solar model were multiplied by the solar factor of 70%, then the CO2 and other models were fitted. This mix was arbitrarily selected for illustration; do not read any significance into it.

 

This illustrates that the CO2 and solar models play together nicely. Assuming the climate system is linear for the small perturbations of the last few hundred years, the two solutions can operate almost independently and their temperature changes add (that is, they superpose).

If the optimizer is given both the CO2 and solar models to work with, it finds a solution that is mainly the CO2 solution and only a little of the solar solution. However this is only because the jiggles in the solar solution are wrong as often as not (Figure 2), which the optimizer finds worse than simply ignoring the jiggles and getting them right on average (Figure 1). So there doesn’t appear to be any significance in this, and we will have to find other means of determining the true balance between the CO2 and solar solutions.

5 Significance of the solar model

We have developed a solar model that accounts for the recent global warming, if that warming was almost entirely associated with solar radiation and had no dependence on carbon dioxide.

This is a viable solution to global warming, because:

  • It’s quantifiable, with a model that approximately hindcasts the observed temperatures. It is not just a concept with handwaving, or a rough one-off computation.
  • It’s got physical interpretations for all the parts. This is a physical model, not just curve fitting or an unexplained correlation.

In short, we have demonstrated that the global warming of the last two centuries could have been mainly associated with TSI rather than CO2. This overcomes one of the bedrock beliefs of anthropogenic global warming, namely that the recent global warming could not plausibly be due to anything other than CO2.

The most important element of the solar model is the delay, which is most likely 11 years (but definitely between 10 and 20 years). The delay was found here as a necessary consequence of the observed notch, but it has been independently corroborated to varying degrees several times over the last decade, apparently without its significance being noticed.

A major objection to substantial solar influence is the finding of Lockwood & Froehlich in 2007, who showed that four solar indicators including TSI peaked in about 1986 then declined slightly. However temperature continued rising for several years after 1986. This has been widely interpreted to mean the recent warming cannot have been due to the Sun. However, the delay can explain this: 1986 + 11 = 1997, about when global warming ended. Thus the delay overcomes another of the bedrock beliefs of anthropogenic global warming.

Conversely, without the delay, the objection of Lockwood and Froehlich appears solid and it is hard to see how a substantial solar influence is possible.

The weakest points of the notch-delay solar theory are:

  • The assumption of sufficient linearity of the climate system,
  • The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s,
  • The inability to precisely identify force X (see Post IV).

Some may challenge the discovery of the notch, but the notch implies a delay and the delay receives support from several independent findings.

What we have not shown so far in these posts is that the notch-delay solar model is true, or to what extent it is true. There is nothing in the posts so far to support the assumption that the recent global warming was almost entirely or even partly associated with solar radiation. On the material presented so far, the CO2 and solar solutions are both viable and no reasons have been given to suppose that either one is more influential.

The notch-delay theory provides a second, alternative solution to the climate problem, with a physical model and a plausible interpretation. No longer is climate a “one horse race”, where you are limited to either supporting the CO2 theory or focusing on its deficiencies. We are now in a “two horse race” (though one horse is very new to the world and not fully introduced or fleshed out yet).

Regular readers of this blog are well aware that the CO2 solution has a lot of problems. Soon we will be turning to the second part of this series, where we will look at reasons for believing that the solar model is dominant and the CO2 solution is only a small part of the overall solution.

In the next post on this topic, we will use the notch-delay solar model for forecasting. This is where it gets interesting.

Notch-delay solar project home page, including links to all the articles on this blog, with summaries.

 

* Our climate model is in a spreadsheet that we will be releasing shortly. We chose to do all the work for this project, right from the beginning, in a single Microsoft Excel spreadsheet for PC. It’s not the fanciest or the fastest, but an Excel spreadsheet is ubiquitous and one of the friendlier programming environments. It runs on most computers (any PC with Excel 2007 or later, maybe on Macs with Excel 2011 or later), can hold all the data, makes nice graphs, and keeps everything in a single file. The models use VBA code, a form of the BASIC programming language that is part of Microsoft Office. The spreadsheet is professionally presented; you press buttons on the sheets to make models run and so on. You can inspect and run or step through the code; it will all be totally open. Thank you for your patience, but giving away the spreadsheet early would preempt the blog posts and disrupt a focused discussion.

 

References

IPCC (2007). Fourth Assessment Report, Working Group 1, Chapter 9: Understanding and Attributing Climate Change, Executive Summary, p. 665. [IPCC site]

Edwards, P. N. (2012). Entangled histories: Climate science and nuclear weapons research. The Bulletin of the Atomic Scientists.

Fujii, Y. (2011). The role of atmospheric nuclear explosions on the stagnation of global warming in the mid 20th century. Journal of Atmospheric and Solar-Terrestrial Physics, 73(5-6), 643-652. [PDF]

Lockwood, M., & Froehlich, C. (2007). Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature. Proceedings of the Royal Society A, doi:10.1098/rspa.2007.1880.

Lockwood, M., & Froehlich, C. (2008). Different reconstructions of the total solar irradiance variation and dependence on response time scale. Proceedings of the Royal Society A, 464, 1367-1385.

Lu, Q. (2013). Cosmic-Ray-Driven Reaction and Greenhouse Effect of Halogenated Molecules: Culprits for Atmospheric Ozone Depletion and Global Climate Change. International Journal of Modern Physics B, 27.

Pinker, R. T., Zhang, B., & Dutton, E. G. (2005). Do Satellites Detect Trends in Surface Solar Radiation? Science, 308, 850-854.


400 comments to BIG NEWS Part VII — Hindcasting with the Solar Model

    bobl

    Wow, I mean wow. When I look at this, just one thought strikes me: the solar model presented here is based on the measured dynamic response of temperature to TSI, so it ought to be correct, but year to year it overestimates temperature in recent times. To me that signals that solar influences are more than enough to explain recent warming, and that if anything mankind has in the last 50 years COOLED the atmosphere. Does anyone else get that implication? What am I missing?

      David Evans

      I suspect the answer is more mundane bobl.

      The solar model parameters were found by fitting to measured temperatures, mainly in the period of land thermometer data from 1850 to 1978. The solar model trained on this data. If those temperature records exaggerate the temperature rise then the solar model will associate the TSI changes of that period with exaggerated temperature changes — so it will be too sensitive.

      Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.

        Matty

        If those temperature records exaggerate the temperature rise then the solar model will associate the TSI changes of that period with exaggerated temperature changes — so it will be too sensitive.

        That’s brilliant if it can predict the tampered temperature record too.

          bobl

          David, if we were to presume the oversensitivity comes from adjustment bias, can you get from the model how much bias there is in the temperature record vs observation? E.g. divide modelled by observed, or maybe (modelled − observed)/observed. What happens if you then compensate the model for that?

          Is it worth doing the transfer function of modelled vs observed to look at the model fit? At first thought this should probably be flat, but it doesn’t seem like it is to me? Hard to know in the time domain though.

          Thirdly, your model, if I understand it right, models force X as X × (TSI shifted 11 years). It would be interesting to use other cyclic phenomena directly for force X (1/UV or −UV, magnetic flux, 10 cm radio flux, solar wind density) and see whether those fit the wiggles better.

            Eddie

            Ha, ha. Measuring the tamperature. That would be something.

            cohenite

            Actually Bobl [and matty and eddie], that is a very good point; if the model does verify temperature cause then it should be able to measure the adjustment bias by the AGW science.

            David Evans

            If
            1. the notch-delay model or a successor improved model could predict accurately (which might take decades to establish),
            2. the model took into account CO2 accurately
            3. TSI records for 1850 – 1978 were known with a great deal of confidence
            then we could hindcast the temperatures for 1850 – 1978 with a great deal of confidence. But it is going to be a long time, if ever, before that is possible.

            The aim here and now is to use mainstream datasets at face value to see if natural influences can explain the recent global warming.

        bobl

        Yes, OK, that’s logical. Still, it’s the first thing that struck me about it: I didn’t expect the observed temperature to be largely BELOW the model output, I think because I expected CO2 would have some effect. That koolaid is pretty powerful; even a whiff is fatal to perceptions.

        Since the model was trained on CO2-contaminated temperature over a long period, I expected it to average out and predict temperature rise at a rate slightly lower than observations post-1950, without CO2 factored in. Whatever the cause, it’s a surprise anyway.

          the Griss

          Would be interesting if David could run an example with CO2 set as having a MINOR COOLING EFFECT !! 🙂


        But David, what if you had trained your solar model on Svalgaard’s more uniform estimates of TSI (for 1850 – 1978)? Have you looked at this to check the consequences frequency-wise?

        http://bartonpaullevenson.com/SvalgaardTSI.html

        steven mosher

        “Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.”

        Satellite sensors do not measure surface temperature.

        Satellite sensors measure brightness. The brightness data is then used in conjunction with radiative transfer models to infer the temperature.

        A) Surface temperature (or LST) is the assumed temperature of the actual land. This estimate is highly reliant on other estimations and upon classification of land types. The estimates are typically good to within ±1 °C.
        B) Air temps at various pressure levels can also be inferred. UAH and RSS, for example, do not estimate the temperature at the surface; they estimate miles above the surface. The only satellite product that makes an attempt to estimate 2 m air temps is AIRS, and that product is a result of interpolation.

        A GCM can of course give you all of these temperatures
        LST, SST, air temps at 2meters, 1000hpa, 800, 700, 600, etc

        When you can predict, say, the temperature at the stratosphere, then you will have a way to actually test your model with out-of-sample data. That is, we always look to test a model with data that was not used in the construction: not only temporally out of sample, but spatially out of sample.

        Like so: you trained on 2m air temps, a gcm trained on 2m air temps. The critical test is How well do you do above 100 mb?


          steven mosher June 27, 2014 at 5:32 am

          “Now we are in the satellite era of measuring surface temperature, so presumably we are getting temperatures right. But the solar model will hindcast exaggerated temperature changes because it is too sensitive.”

          “Satellite sensors do not measure surface temperature.”

          Indeed fat Mosher. All of the instruments measure radiative flux in one direction or the other in a narrow frequency band. The measured flux is pristine, and indicates only the thermal radiative flux (power transfer) between one temperature surface and a lower-temperature surface. That narrow-band one-way flux gives no indication of the temperature of either surface.

          00

    • #
      OriginalSteve

      Of interest?

      http://www.telegraph.co.uk/earth/environment/10916086/The-scandal-of-fiddled-global-warming-data.html

      Have the climate scientists been thrown under the bus?

      30

  • #
    King Geo

    So we now have two Climate Models to consider thanks to the fine work by Dr David Evans.

    1. the new “Solar (delay) Model” which is quantifiable and

    2. the “CO2 Model” which is based on computer modelling (mainly based on GIGO criteria).

    I think the so called 97% of scientists who have backed Model 2 are on a dead set loser.

    I know very few geos who have faith in Model 2. I am sure that once they have examined Model 1 they will be won over. It makes so much sense, explaining Earth’s climate today, during the current Holocene Interglacial (11.7 ka – present), and in the pre-Holocene (pre-11.7 ka) geological record. The anthropogenic effect on Earth’s climate via CO2 emissions is negligible, although this is not the case in localised land areas where removal of large amounts of forest has definitely had a significant impact on local climate, e.g. tropical rainforest regions in some lower-latitude countries.

    191

    • #
      David Evans

      Thanks! Bear in mind though that all climate models are rubbish, only our notch-delay solar model hasn’t been disproved yet (unlike the CO2 model).

      Falsification criterion in the next post 🙂

      362

  • #
    tolou

    The SIDC sunspot numbers are about 20% too high after 1945 (the Waldmeier discontinuity), so that might be why you get a slightly high prediction from around that time. You may want to reduce the SSN to compensate for this step change.
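If one wanted to test this suggestion, the compensation is a one-line rescaling. A minimal sketch with invented sunspot numbers (the 1.2 divisor simply encodes the suggested ~20% post-1945 overcount; these are not real SIDC values):

```python
# Hypothetical annual sunspot numbers (made-up values, not SIDC data).
ssn = {1940: 68.0, 1950: 84.0, 1960: 112.0}

# Undo the suggested ~20% post-1945 (Waldmeier) overcount.
adjusted = {year: (v / 1.2 if year >= 1945 else v) for year, v in ssn.items()}
```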

    70

  • #
    crakar24

    Well that’s interesting: two entirely different models that produce the exact same outcome, and they both can’t be right. So much for the 97%. Thanks for the detailed information David.

    Cheers

    120

    • #
      Rud Istvan

      But as David specifically pointed out, they can both be partly right. Unfortunately, he also pointed out that all proportionate combinations are ‘right’ absent some exogenous ‘decider’. I’ll make a research proposal on that. A variety of ‘exogenous’ methods, from energy balance to Bayesian inference, suggest effective CO2 sensitivity is on the order of 1.5 to 1.9. Pick 1.8, Guy Callendar’s 1938 estimate, for the sake of example. Then what proportions are indicated? An issue remains that both models were trained on temperature data incorporating natural variability. Still, possible progress.

      20

      • #
        David Evans

        This series of blog posts is far from finished 🙂

        70

        • #
          crakar24

          David,

          From what I have read we still don’t know the mechanism, the “X factor” (or you have not divulged what you think it is as yet). Surely if we could pinpoint the mechanism that controls the temps in your solar model then we should be able to eliminate, or at least reduce, the effects of other alternatives?

          For example, if we can deduce that a lowering of UV, coupled with increased GCRs, in conjunction with a reduction in magnetic fields, and who knows what else, all work together to produce a rise or fall of X.X degrees C, then we can determine the effects of other alternatives like CO2 and maybe aerosols etc.

          Any thoughts?

          Regards

          Crakar

          10

  • #
    Matty

    The spreadsheet is professionally presented, and you press buttons on the sheets to make models run

    Whoopee. So easy perhaps even Professor P.J. will be able to operate it 🙂

    40

  • #
    Richard Case

    I can’t wait to see the solar-based forecasted global temperatures for the next 5, 10, and 20 years. Hopefully, they diverge significantly (go down in temperature) from the CO2 models in the near future, so that if actuals do indeed follow the solar-based forecasts and similarly go downward, then it should be largely “game, set, and match”.

    What I do worry about though, is the future manipulation of the global temperature data. The AGW crowd are the owners of this data, and I do worry about their motivations to preserve the CO2-based dogma. How will we know if the global temperature data is accurate in future years? Who is watching the watchers? Steven Goddard has been very vocal about this matter, and based on the Climategate emails, I wouldn’t put anything past these folks.

    180

  • #
    A C Osborn

    Sorry, I don’t see how you can hindcast anything to the mangled, manufactured temperature records, which do not even record the warmth of the 1930s, especially in the USA. Can I suggest that you try re-tuning the model and then hindcasting to the raw data as used by Steve Goddard?

    60

    • #

      A C Osborn, you are not the only one thinking that. Obviously adjustments to the data will impact the model performance.

      Goddard is working with just the US data though. We need the raw global dataset. But they lost it.

      180

      • #
        bobl

        It would be possible to construct one, even on your blog, by getting a volunteer in each country to obtain raw data and building a database out of it. I would be OK with building the database; Aussie raw data is easy enough. With a public repository, lots of citizen science could be done. Crowdsourcing would make pretty short work of it, I’d imagine. Some datasets we’d have to pay for, so maybe we need to get the Kochs (Hi David, hi Charles) to cough up that little bit more. Also, we could do science on the hourly data rather than the min-max junk they currently use: (min + max)/2 is not the mean daily temperature.

        I’d really like to see David’s model run on raw data, for example.

        PS, for the conspiracy theorists out there, I really don’t know David and Charles Koch, that bit was pure /sarc
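The point about (min + max)/2 is easy to demonstrate: for any asymmetric diurnal cycle, the min-max midpoint and the true daily mean diverge. A sketch with invented hourly readings:

```python
# Hypothetical hourly temperatures (deg C) for one day: a short warm
# afternoon spike over a long cool night and morning.
hourly = [10.0] * 18 + [20.0, 24.0, 20.0, 14.0, 12.0, 11.0]

true_mean = sum(hourly) / len(hourly)          # integrates the whole day
minmax_mean = (min(hourly) + max(hourly)) / 2  # the conventional estimate

bias = minmax_mean - true_mean  # over 5 deg C warm here, purely from asymmetry
```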

        50

        • #

          The only one I know of, who has tried to work with raw global data is Frank Lansner:

          Messages from the global raw rural data. Warnings, gotcha’s and tree ring divergence explained!

          See also — The Urban Heat Island effect: Could Africa be more affected than the US?

          He studied it continent by continent and found inland towns were often roughly as hot in the 1940s as they are now, but coastal towns affected by ocean air flow had warmed in line with SST.

          This also raises the question of why it is so difficult to get a single proxy that runs from, say, 1500 to now. The proxies seem to stop 20 – 30 years ago. Where are the updates? (You’d think we had run out of trees/coral/clam shells.) I would like to use just one continuous data set.

          50

      • #
        Rereke Whakaaro

        Jo, You can use, “just the US data”, as a “representative sample” in a controlled test, in order to validate the model. The UN does that sort of cherry picking all of the time. What is good for the goose …

        It might also be a way to parameterize any “inconsistencies” introduced by “adjustments”.

        10

        • #
          Roy Hogue

          What is good for the goose…

          Do we want to simply be the gander?

          I admit I’m not sure of the right answer to that. But in Dr. Evans’ position I don’t think I would. If this is to be fairly tested then its performance shouldn’t be suspect because of suspicion of cherry picked data.

          00

          • #
            Rereke Whakaaro

            Fair comment.

            I was thinking of using that data for validation purposes, rather than publication, but I can see how my comment was poorly worded.

            00

      • #

        “Goddard is working with just the US data though. We need the raw global dataset. But they lost it.”
        Steve is working with a subset. If you have a reasonable solar model, should it not work for any subset?
        There is no sense in training on homogenized pablum and then applying the model to a specific region, as there is no global anything! But how about training for a single latitude, then evaluating each station at that latitude?
        Is that for the rest of us to do?

        00

      • #
        steven mosher

        the raw dataset is not lost.
        what jones lost were HIS COPIES of data supplied by a few NWS.

        the raw data is all there. you simply haven’t taken the time to look at it

        00

  • #
    Andrew

    At 1.14C ECS, you’re in danger of being classified as a warmy.

    10

  • #
    Mikky

    Am I right in thinking that a “rival” solar model is the one from Svensmark et al?
    Not sure about this, but I think they would expect the 11-year oscillations to be visible, via solar magnetic effects on cloud formation via cosmic rays.

    What is most impressive to me about David’s solar model is the major changes in trend around 1990, 1950 and earlier minima, with roughly correct changes in temperature. Those are ALMOST compelling correlations, which maybe don’t need precise matching in time, given the complexity of the climate system. It may be better to apply much more smoothing in the display graphs, to avoid the major changes being masked by the noise.

    I have to disagree with “The delay was found here as a necessary consequence of the observed notch”. I don’t think a notch has been observed, and even if it had been, it does not force a delay (apart from the one you get anyway with all filters).

    20

  • #
    windrunner

    Thank you, can’t wait for the next blog – this is like watching 24. 🙂

    120

  • #

    “The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s”

    No need for the nuclear winter hypothesis.

    A negative PDO combined with quieter cycle 20 caused a slight cooling in the ’50s, ’60s and early ’70s; then came the positive PDO together with stronger cycles 21 to 23, causing the late-’70s climate shift, which did not start to fizzle out until around 2000.

    80

    • #
      David Evans

      Perhaps, but what causes PDO? If it is an entirely internal climate mechanism, it would be like using ENSO (which predicts temperature 6 months later really well, but is just another internal climate variable) — no real explanatory power. But PDO might be influenced by lunar forces or something, in which case it would be useful as a semi-exogenous driver. In any case I didn’t have PDO data so I didn’t include it in the total climate model, and I wanted to stick to a physical model rather than relying on unexplained correlations.

      80

      • #

        ENSO is probably an internal oscillation due to differential heating either side of the equator. The clouds of the ITCZ are north of the equator most of the time.

        PDO could also be internal but the relationship between El Nino and La Nina shifts every 30 years or so.

        The role of cloudiness changes would be to skew the balance between El Nino and La Nina within the PDO cycle.

        With fewer clouds, El Ninos get stronger relative to La Ninas, and vice versa, regardless of whether the PDO is in a positive or negative phase.

        That’s how you get upward stepping of global temperature from one positive PDO phase to the next (LIA to date). You would get downward stepping from one negative phase to the next during a cooling period such as MWP to LIA.

        So, we can link net warming or net cooling from PDO to global albedo changes independently of internal system variability. If it was all internal system variability there would be no sequence of consecutive step changes up or down over centuries correlating with solar activity. It would be far more random from one phase to the next.

        No need to put it in your model at this stage. Just bear in mind that on my description your model is capable of accommodating the thermal effect of long term PDO variability just from albedo changes so your predictions should be much the same as mine now that we have agreed on the correct sign of the cloudiness response from force x.

        80

        • #
          Rereke Whakaaro

          ENSO is probably an internal oscillation …

          I think you will find that the “probably” is what David is trying to avoid here. There are a lot of “probably’s” floating around as it is. Let us not introduce more.

          20

      • #

        David,

        What I think you need to consider is how the ocean handles the received energy from the Sun. The ocean of course doesn’t create its own energy (even though it does have a continuous (albeit small) input of geothermal heat from below), but it does possess the ability to effectuate a net storage or a net release of the solar input over years, decades and even centuries. The coupled ocean/atmosphere system is readily able to work towards holding solar energy back or expel it more efficiently to space (pressure systems >> wind strength >> latent heat flux >> convective efficiency). It is also perfectly capable of controlling how much solar energy is absorbed by the earth system in the first place (cloud cover, especially over key areas of the tropical oceans).

        The significant step down in the SOI in 1976/77, for instance, is basically what started the modern ‘global warming’ era.

        Yes, we can discuss what caused that major and conspicuous shift in SOI. It might have had an external (solar/lunar or other) source, or an internal one. We don’t know. But, it happened. That much we know.

        Your ‘physical model’ seems to assume (like the CO2 models) that the ocean is a static (unchanging), not a dynamic solar reservoir. It is not.

        The global climate is clearly driven by a combination of solar and internal (oceanic) variations, where the internal processes rule over decades and multidecades, the solar processes only beyond the internal multidecadal cycles (from one to the next).

        50

        • #
          David Evans

          The thermal lag and heat storage properties are included in the model via the low pass filter (only).

          The time constant of the low pass filter was found to be about 5 years, which is in line with what others have found. E.g., as Stephen Schwartz at Brookhaven said in “Determination of Earth’s transient and equilibrium climate sensitivities from observations over the twentieth century: Strong dependence on assumed forcing” (2012): “The time constant characterizing the response of the upper ocean compartment of the climate system to perturbations is estimated as about 5 years, in broad agreement with other recent estimates, and much shorter than the time constant for thermal equilibration of the deep ocean, about 500 years.”

          That upper ocean compartment is the low pass filter. While there is obviously also longer term storage in the oceans, it doesn’t show up in the datasets we have of a few hundred years of surface temperatures.

          If we were to consider the oceans dynamic, we would not know what is an exogenous influence on the climate system. The modeling approach we are using treats stuff inside the black box as an internal mechanism. The exogenous forcings would be geothermal inputs and perhaps lunar inputs, but the internal state of the ocean is more like an internal mechanism.
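The upper-ocean compartment described above behaves like a first-order low-pass filter. A minimal sketch (annual steps, arbitrary forcing units; the 5-year time constant is the only number taken from the comment, everything else is illustrative):

```python
import math

TAU = 5.0                        # time constant, years (Schwartz's estimate)
DT = 1.0                         # time step, years
ALPHA = 1 - math.exp(-DT / TAU)  # exact discrete update for a first-order lag

def low_pass(forcing, state=0.0):
    """Relax the stored 'ocean' state toward the forcing each step."""
    out = []
    for f in forcing:
        state += ALPHA * (f - state)
        out.append(state)
    return out

# Response to a unit step in forcing: ~63% after one time constant (5 yr),
# ~95% after three (15 yr), the classic first-order lag behaviour.
step_response = low_pass([1.0] * 30)
```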

          20

          • #

            David, you say:

            “While there is obviously also longer term storage in the oceans, it doesn’t show up in the datasets we have of a few hundred years of surface temperatures.”

            I beg to differ! It’s precisely what the multidecadal ups and downs in global temps are all about, the ocean cycles. The reason global temps went up from 1976 to 2001 is because of what happened in the Pacific Ocean in the wake of the shift in 1976/77 plus two massive West Pacific shifts in 1988 and 1998. The mean state of the Pacific Ocean. The PDV. Directly influencing the mean state of the North Atlantic (AMO). There is no need to explain anything else. Only three global shifts relative to NINO3.4 since 1970. Otherwise flat. The entire modern ‘global warming’ is contained within those three abrupt and significant hikes in mean global temperature level alone. And they are all readily explained by ocean processes. It’s a big topic. I have just started writing about it: http://www.okulaer.wordpress.com

            “If we were to consider the oceans dynamic, we would not know what is an exogenous influence on the climate system.”

            I see that, but then you end up with the ‘need’ to make up strange ‘nuclear’ reasons for multidecadal downticks in global temperature. There is no such need.

            “The modeling approach we are using treats stuff inside the black box as an internal mechanism. The exogenous forcings would be geothermal inputs and perhaps lunar inputs, but the internal state of the ocean is more like an internal mechanism.”

            The point I am making is that the (oceanic) ‘internal mechanisms’ of the earth system are not simply about rearranging (distributing) received solar energy, having no bearing whatsoever on the total amount over years, decades and multiple decades. Treating it like this assumes a static reservoir (receptacle) function.

            And then you will never get it right. Then you have missed the elephant in the room.

            No, the internal (oceanic/atmospheric) processes will increase or reduce the total energy content of the system over years, decades and multiple decades. In fact, it’s the internal processes of the earth system doing this, not the Sun through its output (TSI), which is remarkably stable. The Sun only works to influence (yes, control) the progression of the internal process regimes, most likely by affecting the pressure distribution/arrangement (and thus winds and clouds) across the global surface. A very INDIRECT influence, that is. Again, this might be where your ‘Force X’ is hiding. It’s the ocean/atmosphere system that actually executes the change in our climate (handling the solar energy in different ways) over human generations. It’s the ocean processes we ‘see’ in the global data.

            The Sun is the ultimate driver, but we can only see its real influence across ocean cycles, from one to the next. Like from the cycle ~1880-1945 to the cycle ~1945-2010.

            David, I appreciate what you’re doing here, but we know all too well that it’s easy to become infatuated by your own model, ending up reading way too much into it and its output.

            00

            • #

              Kristian,

              You say, “I beg to differ! It’s precisely what the multidecadal ups and downs in global temps are all about, the ocean cycles.”

              That is for your model! David is examining how the Sun drives temperature, including how it drives the ocean. I find the consideration of the magnetic field reversal significant, and it says much for a detailed engineering systems analysis. Else we just have another gobbed-on GCM. It is likely part solar, with the epicycles about the solar system barycenter, plus conservation of momentum. We could expound on an active adaptive thermostat (Gaia).
              Given the demonstrated near-nothing of what earthlings understand, a viable alternative to the CO2 nonsense may get some lukewarmers to admit “I do not know”.

              00

            • #
              David Evans

              Kristian, I think your argument is a fair point of view. One of those things that may get sorted out in the fullness of time.

              I don’t doubt that a lot of temperature change can be explained in terms of oceans. A parallel is the tight correlation of ENSO with temperature: ENSO is a really good predictor of the temperature in 6 months time. But what causes ENSO? Well one can predict ENSO pretty well with TSI (and implicitly force X). And what causes TSI? And so on.

              Any explanation involving oceanic influences on temperature is always going to beg the question: “well what influences the oceans?”. Eventually the full climate explanation will be in terms of exogenous influences: TSI, force X, lunar, human gases, geothermal, whatever,….

              00

    • #
      tolou

      Agreed. I’d like to see the solar model mixed with ocean oscillations (e.g. AMO & PDO) instead of a simulated WWIII. That might give some people ideas…

      20

  • #
    Tim

    Times of depressed solar activity correspond with historic times of global cold.
    Times of increased solar activity have corresponded with global warming.
    The current quiet-to-average cycles mean a cooling pattern forecast over the next few decades. So why the heat predictions?

    I’m just a Joe-Six-pack, but it seems that the downgrading of the sun’s effect on our climate by some ‘trusted wizards’ shows a selective bias that sticks out like the proverbial.

    70

  • #

    “the CO2 and solar models play together nicely. Assuming the climate system is linear for the small perturbations of the last few hundred years, the two solutions can operate almost independently”.

    Except that that begs the question as to whether the CO2 is causative of higher temperature.

    Wouldn’t it look just the same if solar causes the higher temperature and then the higher temperature causes an increase in CO2 from, say, ocean release of CO2?

    140

    • #
      David Evans

      Yes, except the faint ocean warming of the last century is not nearly enough to raise the CO2 concentration from 300 ppm to 400 ppm AFAIK.

      50

      • #
        crosspatch

        The curious thing with human CO2 emissions is the fact that the shape of the atmospheric rise has not matched the rate of human emissions increase. >25% of all CO2 emissions over human history have been released only since 1998. Human emissions have been fairly “hockey stick” shaped. Atmospheric rise has been roughly linear. There are other things at work, too, such as reactivation of bogs in the boreal continental regions as permafrost receded. In addition, the more CO2 you put into the atmosphere the faster nature scrubs it through both biology and geology.

        160

        • #
          the Griss

          CP, if you had been fed only stale bread for ages, and someone started putting a decent amount of food in front of you, what would you do? 🙂

          40

      • #

        There are a number of possible reasons why a warmer world has more CO2 in the atmosphere but it is beyond the scope of this thread. I was just making the point that you don’t necessarily have to imply a warming trend from more CO2 on the basis of the above charts. It could still be all solar.

        132

      • #

        However, the IPCC have NOT proven that the increased CO2 has the correct isotopic signature to infer a wholly anthropogenic (fossil) content, so it must have (largely) come from somewhere in the (natural) global system. In AR4 they said:

        The high-accuracy measurements of atmospheric CO2 concentration, initiated by Charles David Keeling in 1958, constitute the master time series documenting the changing composition of the atmosphere (Keeling, 1961, 1998). These data have iconic status in climate change science as evidence of the effect of human activities on the chemical composition of the global atmosphere (see FAQ 7.1). Keeling’s measurements on Mauna Loa in Hawaii provide a true measure of the global carbon cycle, an effectively continuous record of the burning of fossil fuel. They also maintain an accuracy and precision that allow scientists to separate fossil fuel emissions from those due to the natural annual cycle of the biosphere, demonstrating a long-term change in the seasonal exchange of CO2 between the atmosphere, biosphere and ocean. Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope (Francey and Farquhar, 1982) and molecular oxygen (O2) (Keeling and Shertz, 1992; Bender et al., 1996) uniquely identified this rise in CO2 with fossil fuel burning (Sections 2.3, 7.1 and 7.3). AR4, ¶1.3.1, p. 100.

        But this is the big flaw Glassman pointed out in 2010:

        http://www.rocketscientistsjournal.com/2010/03/sgw.html#more

        Not one climate scientist (including Gavin Schmidt) has bothered to show how they would refute it!

        Following are four possible solutions to the (13C) mass balance problem.
        ACO2 ISOTOPIC FINGERPRINT IS NOT A MATCH
        # Parameter Value Source
        1 G0 762 AR4 Fig. 7.3, p. 515 C cycle
        2 g(2003) 133.4 AR4 Fig. 2.3, p. 138
        3 δ13C0 -7.592‰ AR4 Fig. 2.3, p. 138
        4 r0 0.011028894 Eq. (7)
        5 δ13Cf -29.4‰ Battle, et al.
        6 rf 0.010789151 Eq. (7)
        7 k 50% AR4 TS p. 25
        8 r(2003) 0.011009598 Eq. (12)
        9 δ13C -9.348‰ Eq. (6)
        10 δ13Cfinal -8.080‰ AR4 Fig. 2.3, p138

        IPCC provides all the parameter values but the one from Battle, et al. Those values with the equations derived above establish the ACO2 fingerprint on the bulge of CO2 measured at MLO, as if it were a well-mixed, global parameter as IPCC assumes.

        IPCC does not provide δ13Cf, the parameter found in Battle, et al., suggesting IPCC may have never made this simple mass balance calculation. A common value for that parameter in the literature is around -25‰. The figure from Battle, et al., being published with a tolerance, earns additional respect. As will be shown, the number is not critical. The result is a mismatch with IPCC’s data at year 2003 by a difference of 1.3‰, more than twice the range of measurements, which cover two decades.

        This discrepancy is huge, and is sufficient to reject the hypothesis that the surge in CO2 seen in the last century was caused by man. The CO2 added to the atmosphere is far heavier than the weight attributed to ACO2.
        CO2 SURGE IS TOO HEAVY TO BE ACO2
        # Parameter Value Source
        1 G0 762 AR4 Fig. 7.3, p. 515 C cycle
        2 g(2003) 133.4 AR4 Fig. 2.3, p. 138
        3 δ13C0 -7.592‰ AR4 Fig. 2.3, p. 138
        4 r0 0.011028894 Eq. (7)
        5 δ13Cf -13.657‰ Eq. (12)
        6 rf 0.010962235 Eq. (7)
        7 k 50% AR4 TS p. 25
        8 r(2003) 0.011023529 Eq. (7)
        9 δ13C -8.080‰ AR4 Fig. 2.3, p. 138
        10 δ13Cfinal -8.080‰ AR4 Fig. 2.3, p. 138

        This computation is the first of three to examine other parameter values that would have rendered IPCC’s fingerprint test affirmative: ACO2 was the cause of the CO2 lightening. The isotopic ratio for fossil fuel would have had to be considerably heavier, -13.657‰ instead of -29.4‰, for the increase in atmospheric CO2 to have been caused by man.
        OR, ATMOSPHERIC CO2 IS OVER 1400 PPM
        # Parameter Value Source
        1 G0 2913.9 Eq. (12)
        2 g(2003) 133.4 AR4 Fig. 2.3, p. 138
        3 δ13C0 -7.592‰ AR4 Fig. 2.3, p. 138
        4 r0 0.011028894 Eq. (7)
        5 δ13Cf -29.4‰ Battle, et al.
        6 rf 0.010789151 Eq. (7)
        7 k 50% AR4 TS p. 25
        8 r(2003) 0.011023529 Eq. (7)
        9 δ13C -8.080‰ AR4 Fig. 2.3, p. 138
        10 δ13Cfinal -8.080‰ AR4 Fig. 2.3, p. 138

        For ACO2 at the stated rate and retention to have caused the small drop measured in atmospheric δ13C, the initial atmosphere concentration would have had to be 2,913.9 GtC, 3.8 times the figure used by IPCC. This is equivalent to 1,453 ppm of CO2 instead of 380 ppm.
        OR, 13%, NOT 50%, OF ACO2 REMAINS IN THE ATMOSPHERE
        # Parameter Value Source
        1 G0 762 AR4 Fig. 7.3, p. 515 C cycle
        2 g(2003) 133.4 AR4 Fig. 2.3, p. 138
        3 δ13C0 -7.592‰ AR4 Fig. 2.3, p. 138
        4 r0 0.011028894 Eq. (7)
        5 δ13Cf -29.4‰ Battle, et al.
        6 rf 0.010789151 Eq. (7)
        7 k 13.1% Eq. (12)
        8 r(2003) 0.011023529 Eq. (7)
        9 δ13C -8.080‰ AR4 Fig. 2.3, p. 138
        10 δ13Cfinal -8.080‰ AR4 Fig. 2.3, p. 138

        The mass balance will agree with the measurements if the atmosphere retains much less than 50% of the estimated emissions. The necessary retention is 13.1%, a factor again of 3.8 less than supplied by IPCC.

        These results apply to IPCC’s model by which it adds anthropogenic processes to natural processes assumed to be in balance.

        Instead, the mass flow model must include the temperature-dependent flux of CO2 to and from the ocean to modulate the natural exchanges of heat and gases. The CO2 flux between the atmosphere and the ocean is between 90 and 100 GtC of CO2 per year. This circulation removes lightened atmospheric CO2, replacing it with heavier CO2 along many paths, some accumulated several decades to over 1000 years in the past.

        The CO2 mass flow model is a mechanical tapped delay line.
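The mass-balance arithmetic above is simple enough to check directly. A sketch using the quoted AR4/Battle parameter values (here r is the 13C fraction of total carbon, which reproduces the tables' Eq. (7) values; this is an illustrative reconstruction, not Glassman's actual code):

```python
R_PDB = 0.0112372  # VPDB standard 13C/12C ratio

def frac_from_delta(delta):
    """13C fraction of total carbon for a given delta-13C (permil)."""
    R = R_PDB * (1 + delta / 1000.0)
    return R / (1 + R)

def delta_from_frac(r):
    """Invert: delta-13C (permil) from a 13C fraction."""
    R = r / (1 - r)
    return (R / R_PDB - 1) * 1000.0

G0 = 762.0  # pre-industrial atmospheric carbon, GtC (AR4 Fig. 7.3)
g = 133.4   # cumulative fossil emissions to 2003, GtC (AR4 Fig. 2.3)
k = 0.5     # assumed airborne fraction (AR4 TS)
r0 = frac_from_delta(-7.592)  # initial atmosphere
rf = frac_from_delta(-29.4)   # fossil-fuel carbon (Battle et al.)

# Mix k*g GtC of fossil carbon into the initial reservoir.
r_2003 = (G0 * r0 + k * g * rf) / (G0 + k * g)
d13C_predicted = delta_from_frac(r_2003)  # about -9.35 permil
mismatch = d13C_predicted - (-8.080)      # vs observed: about -1.3 permil
```

The computed atmosphere comes out roughly 1.3‰ lighter than the measured -8.080‰, which is the discrepancy the comment describes.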

        21

      • #
        Greg Goodman

        You may like to look at Gosta Pettersson’s papers on this question.

        http://www.false-alarm.net/author/gosta/

        He’s a retired Swedish chemist. It looks pretty thorough, and he concludes about 50% of the rise is out-gassing, the rest non-absorbed human emissions.

        00

  • #
    CC Squid

    Not only are you attacking the belief system called CAGW, you are attempting to put the supercomputer industry out of business. Congratulations! I am eagerly waiting for the model I can run on my system.

    150

    • #

      Although I have similar sentiments, it must be made clear that Evans and Nova are NOT attacking anybody.
      They are presenting a new model which they believe works well enough to share it with everybody.

      Although I had difficulties getting my head around this new model concept at the beginning, as each post was presented I understood a little better.
      Having read the current post (with the exceptionally well written foreword) I can say clearly that I have NO IDEA WHATSOEVER IF THIS MODEL WILL BE PROVED RIGHT OR NOT.
      I do however wish David and Joanne all the best. Thank you for sharing this; I look forward to the remaining posts.

      300

      • #
        Rereke Whakaaro

        The interesting thing for me is that the Solar model now sits as an alternative hypothesis to the CO2 model.

        The presence of a valid alternative, just by existing, totally changes the status of CO2 as the only driver of climate change.

        Hands up all those people who still want to spend billions of dollars, trying to diminish the level of CO2 — something that is proving to be very hard, and expensive.

        And hands up all those people who would prefer to pay nothing, whilst sitting back, relaxing, and watching the Sun do, whatever it is going to do, for free.

        60

        • #
          Olaf Koenders

          I’d like to watch the Sun engulf Earth in about 5 billion years, albeit from a safe distance..!

          But I agree totally with your sentiments. We finally have something that essentially disproves CO2 as the only driver, further discrediting the billions wasted on research into that area.

          Still, there will soon be suppositions from the CAGWIST arena that TSI in conjunction with increasing CO2 is the culprit, but that doesn’t really work, especially when TSI backs off.

          00

          • #
            Mark D.

            Olaf, with the ever progressive decay in morals, the sun engulfing the Earth might be just enough to sanitize this solar system and save the universe from some very awful STDs and super bugs from being spread around.

            00

        • #
          Greg Goodman

          “The presence of a valid alternative”

      You may be a bit premature in declaring this model valid; they have not even finished presenting it yet.

          Personally I find the “nuclear winter” fudge factor, which is a very significant part of the model, problematic.

      I think it would take less to convince me that CO2 was a significant factor.

          00

          • #
            steven mosher

            yes, given that we have actual physics that tells us about radiative transport through the atmosphere and zero physics about the X factor or nuclear bomb factor

            01

  • #
    crosspatch

    There is another thing operating during the “nuclear winter” period and that is industrial pollution that began to get phased out due to pollution regulations. For example, when I was a child in the eastern US in the early 1960s the cities produced huge amounts of smoke and soot. You could literally smell Pittsburgh, PA 50 miles before you got to it. The city was hazy from the burning of coal in the steel mills. The eyes would sting from the sulfur in the air. There was a significant brightening that went on in the Northern Hemisphere due to pollution regulations. The US “Clean Air Act” was passed in 1970.

    Coinciding with that curve for nuclear effects is a post-war re-industrialization of much of Europe and a tailing off that coincides with pollution regulations. I would be curious to know if there is a significant difference in that black line in the Northern vs Southern Hemispheres. And with the recent pause we are seeing a great industrialization in China, India, and Brazil, among other countries.

    80

    • #
      David Evans

      Yes, we wondered that but had no data to stick into the total climate model. Is the air pollution in Pittsburgh 1960 different from Beijing 2010? Perhaps it was, and could the differences be significant? We have no idea. Anyone?

      100

    • #
      CC Squid

      In Part V escaping heat you state, “That means any model needs to understand the relationship between changes in the temperature of the radiating layers and the temperature on the ground (and on the seven seas).” If sea level is increasing, then the pipes that allow energy to escape will now have to increase in size to allow more energy to escape. So the question about how Part VII takes this into account will require me to go back to the earlier parts of this discussion as well as reading Tisdale’s articles on El Niño/La Nina.

      10

    • #

      In the 70s a trip into Chicago meant entering the green zone. From outside going in the air looked green. This has been much reduced in the subsequent 40 years. I think the decline of the steel mills in the area has a lot to do with it.

      When I was about 10 (’54) my mother and I went through Pittsburgh by train. Seeing all the mills in operation next to the tracks was a wonder. I’m sure the emissions, if seen by day, would have been significant.

      20

  • #
    Frederick Colbourne

    Very impressive. I was very skeptical at first that a simple model like this could produce such a result.

    Whether or not the approach stands up to critics, the results are plausible. An ECS of 1.14 deg C and 40% attribution to GHG is also quite plausible.

    The result is consistent with the first version of a paper by Stephen Schwartz of Brookhaven National Laboratory: Schwartz, S. E., “Heat capacity, time constant, and sensitivity of Earth’s climate system”, J. Geophys. Res., 112, D24S05 (2007), doi:10.1029/2007JD008746.

    As I understand the paper Dr. Schwartz estimated climate sensitivity to doubling of CO2 as 1.1 ± 0.5 K. He based his estimate on ocean heat content. His estimate was subject to several critical comments by other scientists and he published a revised figure for ECS, modified upwards.

    The value of ECS that gives you the best fit corresponds to the initial estimate by Dr Schwartz.

    40

    • #
      David Evans

      Schwartz uses the same two-compartment CO2 model as we do. Schwartz also arrived at a 5 year time constant for what I think is his low-pass filter, namely the upper ocean compartment of the climate system (which, as he notes, is “in broad agreement with other recent estimates, and much shorter than the time constant for thermal equilibration of the deep ocean, about 500 years”).

      However, as I said under the figure with the CO2-solar mix, “This mix was arbitrarily selected for illustration; do not read any significance into it.” So while an ECS of 1.14C is plausible on what we presented so far and models well from 1800 to now, it does not mean anything more in these posts.
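Schwartz's single time constant behaves like a first-order low-pass filter. A minimal sketch of the step response (the 5 year value is Schwartz's; nothing here comes from the notch-delay model itself, and the numbers are illustrative only):

```python
import math

def low_pass_step_response(tau_years, t_years):
    """Fraction of the equilibrium response reached t_years after a step
    forcing, for a first-order (single time constant) low-pass system."""
    return 1.0 - math.exp(-t_years / tau_years)

# With a ~5 year time constant, most of the surface response to a step
# change in forcing arrives within a couple of decades, while the ~500 year
# deep-ocean time constant Schwartz cites would respond far more slowly.
tau = 5.0
for t in (5, 10, 20):
    print(t, round(low_pass_step_response(tau, t), 3))
```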

      81

      • #
        crosspatch

        I think the way things respond are different in cooling and warming modes. The deep ocean might be faster to cool from atmospheric changes than to warm as cooling would work with convection while warming might work against it.

        Thinking about the Gulf Stream, what would happen if it warmed a bit? It would seem that it would simply travel a bit farther north before losing enough heat to sink into the surrounding water. This would act to transport more heat farther north toward the pole, but the water arriving in the deep ocean would be about the same temperature, at least initially. If the condition persisted long enough, the surrounding water would also warm and the downwelling point would move back south a little, and then the water would be a bit warmer when it made its way to the deep. If the Gulf Stream were to cool, it would sink sooner, at first. This would act to transport less heat toward the pole until the surrounding water also cooled.

        So much of what happens in the system seems to want to be self-regulating. Add more heat to the system, it wants to shed more. Remove heat from the system, it wants to conserve it. There would also be changes in salinity that are important, too, which I ignored above. But the bottom line is that it is difficult to heat the bottom of a bucket by heating the surface. It IS easy to cool the bottom by chilling the surface. So I would intuitively expect the deepest ocean to act as a sort of thermal diode: easy to cool but harder to warm in response to surface temperature.

        100

        • #
          Sonny

          You’ve got it completely arse backwards. The ocean is heated primarily by the earth itself which is an incredibly hot rotating ball of life and magic.

          (Incredibly hot compared to the vacuum of space).

          If the earth was not so incredibly hot (with a “core” that is arguably as hot as the sun), and a thermal gradient of 25 degrees per 100km, then all the oceans would be frozen solid (since any heating would just be sucked up by the core, which would be close to the temperature of deep space).

          It is so true that the fish don’t notice the water, we are completely oblivious to this subterranean heat source and the possibility that its thermal output (similar to the sun) is variable!

          05

          • #
            Olaf Koenders

            That’s just silly, Sonny. Although it’s true to say that the oceans are warmed a little from undersea volcanic activity, the vast majority of the deep oceans (90% of ocean water) range between zero and 3C, with pressure being the only thing stopping them freezing at these depths and temperatures.

            Try freezing an unopened, pressurized bottle of soda water. The higher the pressure, the colder it needs to get.

            If we switch off the Sun permanently (as we invariably do every night), within a month the surface of all the oceans will be frozen and no life on land would exist. Eventually the entire oceans would freeze, as the circulation we currently have inverts, moving warmer water from the bottom to the top, which cools, sinks and cools the deeper water further.

            The reason we have ocean currents at all is because warmer water near the equator circulates to the colder waters at the poles, where that cooled surface water sinks to the bottom, taking up space and forcing cold water up from the bottom at the equatorial end (roughly).

            So there you have it – the evidence we have ice at the poles is due entirely to the lack of sunlight, not lack of heat from the mantle.

            Besides, I think your thermal gradient is wrong, since it would put the core at some 1600C. I believe the core is at about 6000C. Dunno where you got that number; someone is welcome to correct me.

            00

          • #
            Rereke Whakaaro

            But the “core” of the earth is hollow, and has dinosaurs living in it.

            I know this, because I saw a documentary on TV, produced in 1959, that was based on the paper, “Journey to the Center of the Earth”. [c. 1871 (Fr.), J. Verne et al].

            00

  • #
    Eugene WR Gallun

    To borrow a phrase from an old song — Let The Sunshine In!

    Eugene WR Gallun

    30

  • #
    Brad

    So simple anyone can run it on a PC??? OMG, the industry Gods will bring hellfire down upon you!!
    Find a way to get it to all those kids in school for class projects, worldwide! A potential complete cutoff of future alarmists…:)

    You can bet you have the ears of the GCM industry.

    60

    • #
      steven mosher

      hardly

      Many people have done low order models like evans with even more stunning hindcasts.
      hindcasts good to a millikelvin.

      low order models trained on global data that hindcast global data are a dime a dozen
      they tell you nothing.

      02

  • #
    Sonny

    David,

    Without taking away from the guts of your work so far, I can’t help but notice what I will call an “atomic bomb fudge factor”.

    The cooling effect to which you and Jo have lent your support, seemingly because without it your model is problematically inaccurate over that period of time.

    I think it is safe to say that had your model accurately hindcast the temperature, you would not have changed your initial scepticism of this cooling effect.

    In my book this is an illustration of a type of confirmation bias, i.e. you are both invested in this model being correct, and the data which can be incorporated to support your bias is adopted, while data which does not is scrutinised.

    I would caution you to be completely honest about any potential for confirmation bias, this will set you and your model apart from the arrogance and ignorance displayed by mainstream climate scientists who seem completely unaware of their own limitations and human frailties.

    61

  • #
    Sonny

    Actually, you do make this admission

    The weakest points of the notch-delay solar theory are:

    The assumption of sufficient linearity of the climate system,
    The need for the nuclear winter hypothesis to counteract the early part of the TSI plateau from 1950 to 2000, especially the 1960s.
    The inability to precisely identify force X (see Post IV).

    How refreshing to see! If only the government-funded climate scientists, “teenage delinquents” as Donna Laframboise so aptly describes them, could be upfront and honest about the problems in the CO2 theory, instead of using “nature tricks” to “hide the decline”.

    120

  • #

    > There is nothing in the posts so far to support the assumption that the recent global warming was almost entirely or even partly associated with solar radiation. On the material presented so far, the CO2 and solar solutions are both viable and no reasons have been given to suppose that either one is more influential.

    I agree that you haven’t shown any reason to prefer the solar model. However, the unfeasibly large influence from the nuke tests is a reason to not like the solar model.

    This is all black-box non-physical-model curve fitting. Where are the regulars on cue with the bit about fitting elephants?

    > model fits the measured temperatures reasonably well

    That’s pretty vague; you have no quantitative measure of fit; you’ve just got a by-eye “meh, looks OK”. What if it doesn’t look OK to someone else’s eye?
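For the record, a quantitative measure of fit is easy to produce once model output and observations share a time axis. A minimal sketch using RMSE, with made-up anomaly numbers (purely illustrative, not from the model):

```python
import math

def rmse(model, obs):
    """Root-mean-square error between model output and observations."""
    assert len(model) == len(obs)
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Hypothetical annual temperature anomalies (deg C), illustrative only.
obs   = [0.00, 0.05, 0.12, 0.10, 0.18]
model = [0.02, 0.04, 0.10, 0.14, 0.16]
print(round(rmse(model, obs), 4))
```

A single number like this (or an R² value) lets two people argue about whether a fit is good without resorting to “looks OK to me”.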

    744

    • #
      mesoman

      David has explained (several times) how this is a physical model and is not curve-fitting. I get the feeling that you only skim these posts before heading for the comments sections.

      234

      • #

        No; DE has *asserted* several times that its a physical model.

        532

        • #
          mesoman

          Thanks for taking the bait and acknowledging that you don’t have enough info on it to form a real opinion. Now, wait for him to release the code (gasp!) before you start tossing out criticisms of a model for which you have no solid details.

          281

          • #

            Make up your mind: either we have “no solid details” available, in which case DE has merely *asserted* that its physical; or the details are available, in which case he’s *explained* it. Its not possible for both your comments to be correct.

            But this model is non-physical, because there are no physical constraints on the forcings: the scale factor that relates, say, the nuke wiggle to the obs line is entirely arbitrary, deduced only from wiggle matching. There is no conservation of energy, no physics at all.

            528

            • #
              Rereke Whakaaro

              You are using the argument from ignorance logical fallacy.

              224

              • #
                the Griss

                “You are using the argument from ignorance logical fallacy”

                What other choice does he have ! 🙂

                233

              • #

                No. I’m pointing out that his two statements are incompatible. It distresses me that there is so little skepticism being shown here – many commentators are just saying whatever comes into their head, with no thought for consistency; but you give a free pass to anyone you see as being on “your side”.

                59

              • #
                Mark D.

                Connolley, David and Jo have made the decision to roll this out in installments. Several people in previous posts said the model essentially couldn’t work. Now here is a demonstration of reasonable hind-casting ability, something that current models don’t do very well at all.

                Your faux-distress and outrage is phoney and misplaced. Sit back and shut up until you really have something worthwhile to say.

                many commentators are just saying whatever comes into their head, with no thought for consistency; but you give a free pass to anyone you see as being on “your side”.

                Right! INCLUDING YOU!

                Again: Sit back and shut up until you really have something worthwhile to say.

                73

            • #
              Graeme M

              I am sceptical of CO2-driven CAGW and generally speaking find sceptical blogs informative and inclusive; however, I’d rather see intelligent criticism of Mr Connolley’s arguments than kneejerk personal attacks. Not a good look, I’m sorry.

              1016

              • #
                Rereke Whakaaro

                I’d rather see intelligent criticism … than kneejerk personal attacks.

                You are quite right, and I totally agree.

                But in defense of Mark D, I would like to mention that we have some considerable experience of the semantic tricks, and diversionary tactics, employed by some of the “more aggressive” proponents of CAGW, on this blog. I won’t even mention the melodrama, other than to point out that Mr Connolley is “distressed” by what he reads here; but still he returns.

                Of course the intent is to divert attention away from the core subject, and create lots of irrelevant sub-threads to distract readers, and in the process, drive some away.

                This behaviour is rooted in a belief that science is done by consensus (effectively a vote). Of course, that is a political concept, and not a scientific one. But if you read Mr Connolley’s comments in isolation, the political derivation becomes obvious.

                After a while, he becomes repetitive (in approach) and more strident (in his choice of words). Eventually we all get frustrated, and some of us vent that frustration, from time to time. Regrettable, but part of human nature.

                32

        • #
          Gary Meyers

          Hey William, don’t you have some wiki articles to phony up or something better to do?
          I’m sorry, but you just annoy the heck out of me!

          183

        • #
          bobl

          Given that you are not an Engineer familiar with engineering modelling of physical systems, I think those that know better can safely ignore your ASSERTION that the model is non-physical.

          160

      • #
        steven mosher

        There is a simple test to tell you whether the model is physical or not.

        Is it dimensionally correct?

        Answer. NO.

        There is another simple test. Ask the model to predict outcomes that it wasn't trained on.

        A physical model of the climate (a GCM) may be trained on average global temperature at the surface, but it can predict, for example, arctic amplification. Why? Because it models the physics of a planet. It may be trained on temps at 2 meters, but it will give you a prediction at 10 miles above the surface. Why? Because its equations are dimensionally correct. They are physics. David's model is non-physical. It is a curve fit.

        01

    • #
      CC Squid

      Do not feed the troll!

      100

    • #
      ExWarmist

      Hi William,

      David Evans will be providing falsification criteria shortly.

      Perhaps you might be inspired to respond to my prior questions and prove that Catastrophic Man Made Global Warming is not pseudoscience.

      I’m presuming that you are (1) capable of answering the questions in a coherent, honest and rational way, and (2) willing to answer the questions that I have posed that go to the centre of scientific practice.

      21

      • #

        You asked if Global Warming is indeed Science. As I said before (well, previously you asked if it was Science, with an odd capital – I’m not sure what that was for. Now you want to know if its not-pseudoscience), yes, it is. However, your ideas of science may well be badly confused. People who have never done any often have naive ideas about it; just like anything else they have no experience of. I also think your knowledge of GW is also likely very deficient; so much so that its hard to know where to start.

        We could start with the construction of the GCMs, perhaps. They are physically-based models of the climate system (unlike DE’s model). They conserve energy and momentum; they track physically described changes. All of this is described in the appropriate papers, which you’ve never read or even attempted to find.

        Perhaps a better question would be, is your attitude towards studying GW a science-based one of honest, open inquiry? Or do you start from prejudices so firm that you won’t even look at what you pretend to be interested in?

        39

        • #
          ExWarmist

          My question is really simple.

          What are the falsification criteria for the hypothesis of man made global warming?

          [1] I.e. what quantifiable, measurable events, were they to occur, would falsify the hypothesis of man made global warming?

          [2] Can you point to specific discussions in the scientific literature that articulate the falsification criteria?

          I am looking for refutation tests – not confirmation tests.

          For example – would a 20 year period of cooling while CO2 was increasing, where other forcings could not account for the cooling, be a falsification criterion?

          Thanks ExWarmist

          30

          • #

            I don’t think your question is interesting, or indeed honestly meant; its a debating trick, not a question.

            A far more interesting question would be to discuss whether DE’s model is physical; but you’re all shying away from that. Mostly because you don’t even know what it means; but partly I think because you have a nagging suspicion that you already know the answer.

            But if you really want an answer, then pointing you at http://ourchangingclimate.wordpress.com/2014/02/17/is-climate-science-falsifiable/ is probably the best thing to do. No, I haven’t read it carefully.

            314

            • #
              ExWarmist

              Hi William

              You say…

              I don’t think your question is interesting, or indeed honestly meant; its a debating trick, not a question.

              My question is completely honest. I really do think that climate science – as a discipline – is corrupted at the process layer, that it is operating in a defective framework that is insufficiently rigorous to arrive at the facts wrt climate – i.e. it has become a dead end discipline. The reason that I ask it of you and other adherents to the mainstream position on man made global warming is to ascertain if you have considered the methodological framework by which the published content of climate science has been arrived at.

              A far more interesting question would be to discuss whether DE’s model is physical; but you’re all shying away from that. Mostly because you don’t even know what it means; but partly I think because you have a nagging suspicion that you already know the answer.

              Easy – specify the criteria that would be required for a model to be physical – and we can see if we have the same understanding of what a physical model is, and whether David Evans’s model conforms to those criteria.

              But if you really want an answer, then pointing you at http://ourchangingclimate.wordpress.com/2014/02/17/is-climate-science-falsifiable/ is probably the best thing to do. No, I haven’t read it carefully.

              I’ll check it out.

              71

        • #
          Rereke Whakaaro

          Well, the whole of Mr Connolley’s comment is pure ad hominem, with a touch of bombast.

          The second paragraph states that the GCMs “… are physically-based models of the climate system (unlike DE’s model)”. This is a very strange statement. There are two definitions of “physically-based” in relation to models.

          The first, and the most usual, refers to control systems that model a clearly defined physical network in real time, such as an electrical distribution network, or an air traffic control system. The key characteristic of such models is that the state of every node on the system is monitored, and continuously analysed, relative to other nodes on the network. Clearly GCMs do not share these characteristics, since climate “nodes” are not clearly defined, nor always understood.

          The second definition refers to theoretical models that are derived from the laws of physics, presumably after applying Occam’s razor. This is essentially what climate models should be, and this is exactly what David Evans is doing. Except that Mr Connolley asserts that he is not. Given the circumstances, I would suggest that this is an example of “Pious Fraud”, since Mr Connolley has joined this congregation to bolster his own belief system.

          But it does not stop there. Mr Connolley goes on to say: “They conserve energy and momentum; they track physically described changes.” This is the fallacy of “begging the question”, because the current GCMs are bottom up, so they use the conservation of energy and momentum as necessary components within their calculations. David’s model appears to be more top down, and based on an analysis of the observed natural variations of input, in order to identify the key drivers of change.

          Mr Connolley’s fallacy exists because both approaches are valid, but in different circumstances. The GCMs assume that CO2 is the primary driver, and that everything that needs to be known about the mechanisms is already known, apart from some variances that cannot be explained without assuming that the input data is in error.

          David’s approach is to treat the physics as a mathematical problem (not a physical one, as Mr Connolley asserts), explored in an investigatory way, to see where it leads.

          It is this last point that Mr Connolley most fears, because he, and the established climate scientists, appear to be losing control of the narrative, and if this new model turns out to be more accurate in its predictions than the GCMs, then there will be some explaining to do.

          61

    • #
      Rogueelement451

      I think you will find the vast majority of regular writers reading and trying to understand what is in front of them.
      Most people are neither mathematicians nor scientists so it is important for an understanding to take place before commenting.
      Since this is a sceptical site, I am sure there is likely to be a lot of comment, good and bad, down the line, but unlike yourself, we tend to keep our powder dry until such time as we have a clear shot, instead of firing off at anything and everything.
      Is it not possible for you to keep your gob shut for just one day?

      20

  • #
    NikFromNYC

    (A) If atomic tests resulted in massive climate fluctuation that would help explain away the three decades of postwar cooling, why hasn’t the alarm raising crowd adopted it too? You say they do but offer no references to something I’ve never heard of, just a link to Weather Underground estimates of an actual nuclear winter. The cooling was blamed on pollution aerosols, mainly, but here you even invoke radioactivity but likely have added a fudge factor in how you scale it to make your model fit, yet another parameter. Why not blame cooling on normal pollution like everybody else does?

    (B) “The theory of the “delay” will be tested soon. It is falsifiable.” No, it’s falsifiable *now* by treating the last few decades as test data, or the first few. It’s confusing what is being shown. Are these all trained only on the 1850 to 1978 period, which you say in a comment the model is “mainly” trained on? This is what Willis keeps asking for, which I appreciate. Lubos also asked for a test based on simply reversing the temperature trend in time as test data, to see if the model is simply arbitrary in matching any data with similar characteristics. It seems the nuclear option would prevent that though, since it locks half the variation in climate to a hand-waving argument. That a natural sudden plunge in temperature may occur at any time is indicated already in the main Greenland ice core, but I wonder how well that will in fact “confirm” your model if it now happens again, since such drastic plunges are so common and so unexplained.

    http://i61.tinypic.com/2cxbxw4.jpg

    (C) You say you have an algorithm that translates solar output to temperature in some way that also involves a “force X,” *but* will it do so uniquely or just arbitrarily and is there any actual meaningful relationship between solar output and temperature variation? We obviously have that correlation with the Little Ice Age already and a solar lull but what about recent minor fluctuation and actual temperature rises versus falls? Because when I look at TSI and temperature and connect the peaks and valleys with plot lines to ignore the 11 year cycle, I don’t think I see any correlation at all. This whole exercise just seems like pattern matching that is entirely arbitrary and thus entirely meaningless since it could also match the stock market in either direction, or explain solar variations as being caused by climate on Earth. This is why Willis is so concerned about a potential PR disaster, rightfully so. There is, after all, a multi-hundred million dollar a year PR effort precisely focused on stereotyping climate model skepticism as being just Internet cranks and here you are avoiding real peer review on the Internet as Steve Goddard’s blog fills up with Iron Sun and Ancient Gods crackpots. That you failed to anticipate both Willis’ and Motl’s pointed criticisms about arbitrary wiggle matching means you failed, not them, to present a clear picture of what you were doing in something shorter than 170 pages that obviously did not include the usual tests of uniqueness that any peer reviewer would demand for such a radical new theory. Now that you reveal the nuclear option, the laugh test must be also applied.
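A holdout test of the kind being asked for here is straightforward to set up: calibrate on one period, evaluate on a held-out period. A minimal sketch, with a synthetic linear series standing in for real temperature data (purely illustrative; the fitting method and numbers are not from the notch-delay model):

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years = list(range(1850, 2014))
temps = [0.005 * (y - 1850) for y in years]  # synthetic "observations"

# Train on 1850-1978 only; hold out 1979 onward as test data.
train = [(y, t) for y, t in zip(years, temps) if y <= 1978]
test  = [(y, t) for y, t in zip(years, temps) if y > 1978]

a, b = fit_line([y for y, _ in train], [t for _, t in train])
test_err = max(abs(a * y + b - t) for y, t in test)
print(round(test_err, 6))
```

The point is that the model never sees the test years during fitting, so the test error is an honest check rather than a rehearsal of the training fit.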

    107

    • #
      crosspatch

      ” If atomic tests resulted in massive climate fluctuation”

      I think the phrase “massive climate fluctuation” is a bit of hyperbole here. He is saying that it acted to moderate warming, not that it caused “massive fluctuation”. But by adding a lot of dust into the stratosphere that takes a long time to settle out, the tests likely did reduce the overall clarity of the stratosphere. That the atmosphere has “brightened” since the 1970s is rather well documented in many papers.

      For example: http://wattsupwiththat.com/2013/06/06/solar-gains-in-spain-may-cause-warmists-pain/

      70

    • #
      Andrew McRae

      The chinks in the armour are gradually showing.
      * The filter can’t represent the frequency shifting that occurs due to thermal capacity of the oceans.
      * The energy in the 11-year variance can’t really just disappear into nothingness, but with a simple filter model it can – First Law be damned.
      * The 11-year delay is postulated, not discovered, simply due to the anti-Ockham assumption of a Notch Filter, even though the IMF Bz zero transition can provide an immediate Svensmark-style albedo shielding of the TSI peak at mid-latitudes with no delay required.
      * There’s no OLR, and surface temperature is the only testable quantity it outputs, and it needs the Atomic Playboy parachuted on-demand into the 1950s to even get that far. The downturn of one of numerous documented ~60 year cyclic climatic influences would generalise to longer timespans.
      * Withholding the details from the critics is backfiring. One minute we’re told “Our climate model is in a spreadsheet that we will be releasing shortly.” Three hours later we’re told “This series of blog posts is far from finished :)”. Committing to a specific release date would save us the drama.

      I won’t say it’s turning into a train wreck, but a brief derailment incident has already occurred, the scale of media coverage being the only free variable.

      10

  • #
    Sonny

    What I don’t understand is why we should use a nuclear bomb fudge factor when there is overwhelming empirical evidence of geothermal heat flux (especially at the poles).

    Arctic Ocean Warming Contributes to Reduced Polar Ice Cap
    http://darchive.mbl.edu/bitstream/handle/1912/4345/2010jpo4339.1.pdf?sequence=1

    There is no legitimate TSI model that can explain why deep waters in both the Arctic and Antarctic are significantly warmer (2 to 3 degrees) than surface water!

    And there can be no TSI based explanation as to why up until 2007 this water at a depth of 200m to 300m warmed so dramatically!

    David and Jo, I implore you to create a similar model that includes both TSI and this unknown heating element “factor x” that is:

    1. Causing inexplicable warming of polar waters.
    2. Is causing cyclical volcanic activity based on the 11 year solar cycle.
    3. Is causing cyclical earthquake activity based on the 11 year solar cycle.

    This will be the next major leap in understanding of the earth’s climate. The heat source is quite literally “right under our nose”.

    The TSI notch-delay model MUST identify the physical cause for the inadequacy of the TSI model which necessitated the “notch-delay” methodology, and the “nuclear bomb fudge factor”.

    12

    • #

      Sonny, I’m all for testing out other possibilities which make the model more accurate (you’ll be able to try it yourself soon).

      You’ve called the contribution of the atmospheric bomb tests a fudge factor but you haven’t actually provided any reasons why 440Mt of explosions wouldn’t have some cooling impact. The question then is how much of an impact is reasonable. We provided two papers with estimates in the same ballpark.

      • #

        Owing to the laws passed during the period and Great Leap Forward during 1958–1962, according to government statistics, about 36 million people died in this period.
        Until the early 1980s, the Chinese government’s stance, reflected by the name “Three Years of Natural Disasters”, was that the famine was largely a result of a series of natural disasters compounded by several planning errors. Researchers outside China argued that massive institutional and policy changes that accompanied the Great Leap Forward were the key factors in the famine, or at least worsened nature-induced disasters.[8][9] Since the 1980s there has been greater official Chinese recognition of the importance of policy mistakes in causing the disaster, claiming that the disaster was 30% due to natural causes and 70% by mismanagement.

The hydrologic system of Long Island, N.Y., showed a marked response to deficient precipitation in the years 1962-66. By 1966, streamflow was the lowest of record in many Long Island streams, and ground-water levels had declined a maximum of about 10 feet in the central part of the island. Although the drought apparently ended in the early months of 1967 and ground-water levels and streamflow recovered somewhat since then, ground-water levels and streamflow were still considerably below long-term average values in September 1968.

        Table 2 ranks droughts for the period 1915-1991 based on streamflow records from six long term gaging stations. Three droughts–1930-1934, 1952-1955, and 1962-1964–are notable for their severity, duration, and widespread impact. For four out of the six stations, the 1952-1955 drought was the most severe on record. An examination of rainfall records by Knapp (1990) indicates that the 1893-1895 drought may have had as extensive an impact as the 1952-1955 drought in many parts of Illinois. However, there are no streamflow records for this earlier drought.

        1960’s Drought
        Drought occurred across SE Australia from 1962
        to 1968. This drought varied in extent
        and severity over different years. Some places were as dry as the Federation Drought.
        Both Hume Weir and Burrinjuck Dam were dry in 1965, and there was widespread soil
        erosion and stock losses.

In 1962, the United States carried out a series of high-altitude nuclear tests called Operation Fishbowl. The tests were a response to the Soviet Union's announcement that it was ending a moratorium on nuclear testing. The planning for these tests was rushed and resulted in many changes as the program progressed.

All the tests of Operation Fishbowl were launched on missiles from Johnston Island, just north of the equator in the Pacific Ocean. The missiles were launched toward the southwest of the island to keep the detonations as far from Hawaii as possible. This was because the planners were worried that the bright nuclear flashes might cause blindness or permanent retinal injury.

The final four tests were all successes. On October 19, 1962, a test codenamed Checkmate was successfully detonated at an altitude of 147 kilometers (91 miles). Although the exact amount is classified, it was reported as being under 20 kilotons. The fourth Bluegill test successfully took place on October 25, 1962. Although the exact size of Bluegill Triple Prime is classified, it is believed to be between 200 and 400 megatons. The following test, Kingfish, took place on November 1, 1962 and is believed to have a yield in the same 200-400 megaton range. The last test of Operation Fishbowl was codenamed Tightrope. It took place on November 3, 1962 and was detonated at a much lower altitude. The nuclear yield was only between 10 and 20 megatons.

        • #

          You sure about 200 to 400 megatons? That would make them several times as large as the Russian Tsar Bomba of 57Mt.

          • #

            Yeah those are typos. They must mean 200 – 400 kt. Sorry I didn’t pick that up. Actually according to Wikipedia it was 400 kt each. I saw the effects of the Bluegill Triple Prime explosion in the early hours of 1 November in Auckland, New Zealand (aged 14). It created a multi-colored aurora over the whole sky (aurora australis are not seen at that latitude).

            The points are that:

the US ran numerous high-altitude tests through 1962 at the Johnston Island location; but

            it can’t have been dust which caused the resulting widespread global cooling/drought as these explosions were not just above ground on a continent (to generate dust).

      • #
        Sonny

Jo, nuclear weapons are a wartime propaganda hoax.

        Nuclear weapons (of the kind that wipe out cities) do not exist.

When somebody can explain to me how cameras were set up which captured nuclear explosions in close proximity to the houses they were filming, I will reconsider.

        There is ample evidence refuting the existence of nuclear weapons.
But I will not divert this thread into a debate about it. If you are interested, google it. Climate change isn't the first time we've been hoaxed and it won't be the last.

        I’m afraid to say that the model does not just incorporate a “fudge factor” it incorporates a “fictional fudge factor”.

Just as “climate change” is used to promote fear in the population, the nuclear bomb hoax did this in the late 40s, 50s and 60s to a FAR GREATER EXTENT.

        The motive to lie about having such weapons is obvious.

        Hiroshima and Nagasaki? Fire bombed.

        • #
          Peter Yates

          If you search YouTube for videos of ‘nuclear explosions’ you get .. About 75,200 results.
          If they were all “fire bombs [like] Hiroshima and Nagasaki” I am a camel that can pass through the eye of a needle.

          YouTube comment by ‘Summer Winter’ :-
          “According to Google search [the cameras] were hidden behind barriers and bunkers, or shot from far away with a zoom lens. .. The interesting thing is that the close up footage is only from low yield bombs(20-40kt) and that any higher just vaporized everything in the blast zone, and that’s why there is no footage of bigger bombs except from far away.”

          We should not ‘feed’ you anymore.
          Enough said.

          • #
            Sonny

Peter, there are probably as many videos about vampires and Dracula on YouTube.
It does not mean it's real. There are certainly tens of thousands of fake movie explosions that look pretty real to me as well! And your comment about the cameras is absurd.

            Famous nuclear test footage
            http://m.youtube.com/watch?v=wA8z94MXo9M

Everything is blown to smithereens except for soldiers crouched in their bunker and the cameras up a pole with some guide ropes. Yeh right!

            HOAX

        • #
          Rereke Whakaaro

          Sonny,

I am not going to argue over the propaganda films, which were made to calm the fears of the US population over nuclear testing. When seen by today's standards, they are awful anyway.

          The important point, from a climate perspective, is what atomic and nuclear blasts do to the different layers in the normal earth atmosphere.

At the time when America was conducting atmospheric tests in Nevada, and later at Bikini Atoll, most long-range radio communication was transmitted in the High Frequency band, somewhere below 30 MHz. Signals in this band would bounce off the ionosphere and the ground, reflecting the signal around the Earth's curvature and giving the long range.

          During the tests, the Ionosphere (and other atmospheric layers), was seriously disrupted and long-range communications were lost, and had to be resent when the atmosphere finally calmed down again. That often took several days. But, when it did calm down, the actual reflective capacity of the ionosphere was still greatly reduced, due to the quantity of dust particles that had also been carried to high altitude by the atmospheric disruption. It was only over time – sometimes months – that the actual signal strengths improved to the levels they were previously.

          Depending upon particle size, and the height to which the disrupted winds carried it, it is not unreasonable to conjecture that the atmospheric dust would also have the effect of shadowing the earth from the sun.

          • #
            Greg Goodman

            You are correct but something a bit stronger than conjecture is needed to justify 0.5K sized effects.

            Compared to 0.7K / century warming, that’s huge.

The authors clearly know this is pushing the limits of credibility; you can see it in how they present it.

          • #
            Sonny

Thanks Rereke, do you have more information on these frequency disruptions, e.g. references?
Can this disruption be achieved by
What other evidence convinces you they are real?

            • #
              Rereke Whakaaro

              Hi Sonny. It was a long time ago, and in a country far, far, away.

              I was serving in the military, and although we had all of this fancy high-tech electronic communications gear that could auto-encrypt, and self correct errors, etc., we still had to learn how to encrypt manually, and how to send and receive radio communications via Morse code (at 25 words per minute), because being a simple on/off signal, it would get through all the noise, when more sophisticated signals could not.

And it wasn't solely nuclear blasts that were the problem. In my time, we experienced a couple of large solar flares that also disrupted long-range communications. That wasn't put down to a disruption in the ionosphere per se, but rather a change in the intensity of ionisation.

              Historical books on military communications, for the period around the 1970’s would be where I would look for some of the more technical points. It really wasn’t my field, but rather an adjunct subject that we all had to study.

              Sorry I can’t be more help.

          • #

            During the tests, the Ionosphere (and other atmospheric layers), was seriously disrupted and long-range communications were lost, and had to be resent when the atmosphere finally calmed down again. That often took several days.

            Which reminds me – low SSNs tend to mean poor HF radio communication. At a high sunspot maximum the MUF (maximum useable frequency) can go to 50 MHz or higher for short periods (hours, days).

            When the SSN count is low sometimes the MUF only gets up to 3.5 or 4 MHz. And 80 meters gets crowded.

            Interesting page on the subject. Includes current SSNs and 10.7 signal. And maps.

            http://prop.hfradio.org/

      • #
        Greg Goodman

440 Mt sounds “big”, but what does it mean? Why would it have a cooling effect? Again, a mechanism is needed.

Since some fools have taken to measuring AGW in “Hiroshima” units, we realise that this kind of energy is peanuts on a global scale.
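
A back-of-envelope check supports the "peanuts" point. This is my own arithmetic, not from the thread, using the standard conversion of 4.184e15 J per megatonne of TNT and roughly 1.74e17 W of total solar power intercepted by Earth:

```python
# Rough scale comparison: total atmospheric-test yield vs. solar input.
# Constants are standard figures; the comparison is illustrative only.
MT_TNT_J = 4.184e15          # joules per megatonne of TNT
bombs_J = 440 * MT_TNT_J     # ~440 Mt of atmospheric tests, ~1.8e18 J

SOLAR_W = 1.74e17            # total solar power intercepted by Earth, watts
seconds_of_sunlight = bombs_J / SOLAR_W
print(round(seconds_of_sunlight, 1))   # -> 10.6: all the tests combined
                                       # equal about ten seconds of sunlight
```

Of course, a raw energy comparison says nothing about dust or ionospheric effects; it only shows why yield alone is not a mechanism.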

        The nuclear winter concept is based on ground attacks that would raise enormous amounts of material into the atmosphere. That does not really apply to airborne tests.

        I spent quite a lot of time trying to detect some change in temps that coincided with some of the major test dates and could find nothing.

If you want to include this in the model you really need to provide evidence to reject the null hypothesis that there is no effect.

        “..why 440Mt of explosions wouldn’t have _some_ cooling impact”

        Sounds a bit like Trenberth wanting to reverse the null hypothesis on AGW.

        Having looked at this myself expecting to find something and having found nothing, I would concur with the comment that this is a fudge factor unless you can produce some concrete evidence.

  • #

    Interesting to see some of the naysayers focusing on the throwaway comment by David about human induced cooling in the mid 20th century (the nuclear winter hypothesis) when they supported that idea at the time and still say that human influences are the primary climate driver.

    In fact human influences were not necessary then and are not necessary now.

    The mid 20th century cooling was a result of slightly weaker cycle 20 plus a negative PDO.

    Move on.

    • #
      Sonny

      I am a fly on the wall. But a fly with an extraordinary education and ability with mathematical modelling. For a long time I have been told what I believe is a lie. I have been told (as have all the other flies in the house) that by flapping my wings I am creating extra energy that is heating the house up. The humans in control have determined that to reduce this heat they plan to introduce Mortein into the air.

      I have a different theory as to why the house is heating up. I think that it is actually a combination of the temperature outside the house increasing and an increase in light coming through the windows.

      I have had a massive breakthrough in identifying a 24 hour cycle! It seems that there is a reasonable correlation between the daily solar cycle and the temperature in the house!

      The problem is that the correlation is not as strong as I would have hoped. It seems that at about midday, when the TSI coming through the windows is at its peak and the outside temperature is highest, there is actually a drop in the temperature inside the house. What I have done in order to get around this is introduced a “notch” filter. Easy.

There are also some other strange delay effects and other oddities; for example, at 10pm suddenly there is a slight increase in temperature again, and then a delay as the house temperature slowly cools again after 4am and then starts to warm again as the sun comes up.

      Never mind, this just means that there is an “x force” which is some misunderstood and mysterious aspect of TSI through my window and the temperature outside.

      I now have a “physical model” with a few bells and whistles on it which presents a completely alternate and equally plausible theory to the “wing beating” theory, and I’m hoping I won’t get hit with a face full of Mortein.

      Now to work out why the humans keep complaining about a “cooling bill”.

      • #

        “It seems that at about midday, when the TSI coming through the windows is at its peak and the outside temperature is highest, there is actually a drop in the temperature inside the house. What I have done in order to get around this is introduced a “notch” filter. Easy.”

        If you add an automatic system of blinds that responds to TSI it can swing to a shading position when the TSI gets to a certain point. That won’t actually have a cooling effect but it will offset the further warming from even higher TSI.

        Using David’s method of analysis that cooling effect of the blinds will show up as a notch offsetting the higher TSI, at least for a while.

        So your analogy is not sound.

        Out in the real world the sun actually reduces the blinds (less clouds) but that opens up the ocean surface to more incoming energy which disappears into the oceans for a while.

        Again, that produces a notch as the extra energy reaching the surface is retained by the oceans temporarily (hence the delay) offsetting the extra incoming energy from less clouds.

        So, if you see a notch you know you need to find a cause.

        That is all David is saying at the moment.
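
The blinds picture can be sketched in filter terms. A minimal illustration (my own construction, not the notch-delay model itself, assuming for simplicity that the offsetting response is confined to a narrow band around the 11-year frequency):

```python
import numpy as np

# If an offsetting response cancels the forcing only near one frequency,
# the net amplitude response 1 - G(f) is ~1 everywhere except a dip
# (a "notch") at that frequency. The Gaussian band G is an assumed shape.
f = np.linspace(0.01, 0.5, 1000)         # frequency, cycles per year
f0 = 1.0 / 11.0                          # the 11-year solar cycle
G = np.exp(-((f - f0) / 0.01) ** 2)      # narrow offsetting band (assumption)
H = np.abs(1.0 - G)                      # net amplitude response

i0 = np.argmin(np.abs(f - f0))           # grid point nearest the solar cycle
print(round(float(H[i0]), 3))            # ~0: the forcing is cancelled here
print(round(float(H[-1]), 3))            # ~1: other frequencies pass through
```

So a notch in the empirical transfer function is exactly what an offsetting response tuned to the solar cycle would leave behind, which is the point being made here.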

        • #
          Brad

          Well said Stephen.
The house analogy exposes another fallacy Sonny implied: you CANNOT model or analyze systems where humans (operators) can provide input at any time with no record of what they did.

Analyze climate change without human input, then go back, look for anomalies, and investigate whether some other natural event could have caused them. Only after eliminating all possible natural causes should you look at what humans could have done to cause it.
Starting with supposed human interference, as the current GCMs do, is and always will be a waste of time.
Especially when the modelers' salaries and egos depend on the outcome.

          • #

            Starting with supposed human interference, as the current GCMs do….

No, they do not. GCMs are run with and without human influences. Some graphs of the results are shown in IPCC AR5 WG1 Ch. 10, p. 879.

            http://www.climatechange2013.org/images/report/WG1AR5_Chapter10_FINAL.pdf

            They find that modern warming cannot be accounted for by natural forcings alone.

            • #
              Sonny

              David,

              That implies we understand and can qualify and quantify all natural forcings.
              If you believe this, go ahead and shut your brain down. Everything there is to know we already know…

              Or in other words “don’t you worry your little head , you just leave the thinking to the grown ups and go play with your toys”

            • #
              the Griss

              They find that modern warming cannot be accounted for by natural forcings alone.

David Evans has just shown that it CAN be accounted for using ONLY solar forcing.

              This will really hurt the CO2 haters like yourself.

              ENJOY !! 🙂

              • #
                Sonny

The Griss, the guys who know what's actually cooking don't hate CO2.
They just love a good old profitable hoax. And CAGW is up there with fractional reserve banking. Either way you're screwed. You can bring up the truth in only so many different ways until you realise that it is LIES that are the true currency in our economy.

            • #
              Lionell Griffith

Both the computers and the programming of the GCMs had a huge and very significant amount of human influence applied to them. It is thereby legitimate to question what effect that might have on the final results. One answer is easy: without the human influence there would be no results from the GCMs. So we clearly have an AGCM and maybe even a CAGCM.

            • #
              David Smith

              In short, we have demonstrated that the global warming of the last two centuries could have been mainly associated with TSI rather than CO2. This overcomes one of the bedrock beliefs of anthropogenic global warming, namely that the recent global warming could not plausibly be due to anything other than CO2.

              Ouch! That’s gotta hurt hasn’t it Mr Appell?

            • #
              Rereke Whakaaro

              … GCMs are run with and without human influences.

Easy to do with a model. You just identify all of the parameters that are anthropogenic, set them to zero, run the model, and see what you get.

              Alternatively, you can make up numbers for the anthropogenic parameters, and set everything else to zero, run the model, and see what you get.

              Finally, you can fiddle with the parameters between those two limits, until you get something that reasonably matches whatever political spin you want to put on it.

              Hey, I don’t have a problem with that. I just wouldn’t model an aircraft design that way.

        • #
          bobl

Even simpler: depending on where the windows are in the house and the nature of the windows, it is possible that the windows switch from transmitting light to reflecting it. That increase at 10 PM could be a classic wind drop (a change in convection cooling) with a slight overshoot of the house temperature as it comes back into equilibrium, or maybe it's man-made, like the hot water system's waste heat kicking in.

Personally my house does a big kick up at about 9PM in winter, when the energy loss of the house to its surroundings becomes so bad that I switch on the heater. Where's that global warming those climate scientists promised when you need it?

NB, strangely that kick up in temperature has been occurring later at night recently. I have detected an unmistakable correlation between that and my gradual descent into fuel poverty as the government insists on 13.7% electricity price increases. Soon this effect will be so bad that it will be economic to replace electric heating with a more ancient device designed to increase the earth's CO2 content more directly.

    • #
      the Griss

I also suspect there would have been quite a lot of particulate matter in the air around the 1940s from WWII. Would this also cause a slight cooling?

      I’m noticing that the solar model is producing a peak at around 1950, which seems about 10 years later than it should be.

      If the size of Force X was related to the following TSI peak, that might shift the 1950 peak to 1940. Then include some wartime cooling in with the nuclear stuff

      • #
        Glen Michel

Quite right Griss, lots of oil and diesel fuels were destroyed, also rubber. The amount of human activity on the Eastern Front alone was phenomenal and would conceivably create a signature. With so many contributing factors, much can't be measured or determined.

      • #
        David Evans

        The airborne muck from WWII probably wouldn’t have made it up to the stratosphere, so it would get washed out in a year or two I expect.

        • #
          the Griss

My point is, David, that pre-adjustment records show the peak at around 1940; ours is occurring in 1950. You also seem to get a peak around 1991.

          I’m just trying to think what might cause these two particular aberrations, because apart from those, things look pretty darn good. 🙂

          • #
            the Griss

            ours => yours..

          • #
            David Evans

            It’s a good idea that hadn’t occurred to us. Nice if true.

            Yes, the worst part of the fit is either side of the nukes or whatever, 1950 and 1990. Any ideas welcome,

            • #
              Greg Goodman

Have a look at how Hadley “corrects” ICOADS SST. This gets adopted by other data-correction agencies, like ERSST.

              Have a look at CET temp record.
              http://climategrog.wordpress.com/?attachment_id=974

              Look at SST against some other physical records:
              http://climategrog.wordpress.com/?attachment_id=215

              It seems from other records that post-WWII SST adjustments were applied starting about a decade too soon.

The whole process was very speculative and has more to do with the personal bias of Folland et al. than with measurements.

              I did an article about this on Curry’s site.

One problem in any attempt to model climate now is that the historical records have been manipulated (oops, I mean corrected) to fit the AGW meme and GCM output.

            • #
              Greg Goodman

              http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/

              I had an interesting discussion with John Kennedy of Met Office in comments, though the threading at CE is pretty hard to follow sometimes.

He said models were “tuned” to fit 1960-1990. And discussion of the supposed “validation” revealed that most of it was circular logic and geographically insignificant comparisons suffering from sample-selection bias.

Frankly it's a mess.

Something really stupid was done early on (a 0.5 K step “correction” in 1946); rather than admit the error and fix it, they've been trying to paper over the cracks ever since.

              • #

                > John Kennedy… said models were “tuned” to fit 1960-1990

                Searching for “tuned” I can find JK saying (to you) “Your later explanation that the models have been tuned to fit the global temperature curve (reiterated in a comment by Greg Goodman on March 23, 2012 at 3:30 pm), is likewise incorrect.”

                That appears to be close to the opposite of what you’re claiming he said. I can’t find JK saying what you are now claiming he said. Can you quote him exactly, please.

              • #
                Greg Goodman

                John has the integrity to hold an honest discussion whilst defending his position.

                That makes him worth discussing climate science with.

                You do not have that integrity.

                Since the 80s, until about 5 years ago I was actively supporting the environmental movement. Now the mere words are enough to get my hackles raised.

                People like you are the reason for that.

                Reflect.

              • #

                You claimed JK had said something. I looked at the post you referenced, and found that he’d actually said the opposite. So I asked you to back up your claim with a direct quotation, in case I’d misunderstood.

                Instead of backing up your words, you’re just throwing up squid ink; that’s dishonest.

            • #
              Greg Goodman

              re 1990 , look at the following:

              http://climategrog.wordpress.com/?attachment_id=902

              http://climategrog.wordpress.com/?attachment_id=955

Stratosphere warming is roughly complementary to the troposphere cooling. The second graph shows extra SW solar making it into the lower climate system since Mt Pinatubo settled.

              Follow the link therein, for the full story and derivation of the radiative anomaly.

My interpretation of this effect is that after major eruptions other aerosols, either industrial pollution and/or ozone, get flushed out with the volcanic aerosols, leaving the stratosphere more transparent.

There was a post-1990 warming caused by the eruptions; this is usually spuriously attributed to CO2.

If you remove CO2, you will probably need to account for this.

              • #
                the Griss

                If the stratosphere is always measured at the same height, its temperature will always go the opposite way to the temperature of the troposphere.

              • #
                Greg Goodman

Is the stratosphere always measured at the same height?

Does this explain the change in TOA SW that matches the temperature change?

            • #
              Greg Goodman

              PS the detail is here:
              http://climategrog.wordpress.com/?attachment_id=884

I show that the tropics are very insensitive to radiative changes, and that current aerosol forcing is being rigged to much less than earlier, more rigorous values in order to make models work with higher sensitivity.

This is why I suggested the apparent notch filter may simply be insensitivity to radiative change.

The climate is not just removing the 11-year solar signal; it's pretty much removing most of it.

There may well be a long-term solar effect similar to what you suggest. This is deep-penetrating UV that gets past the negative feedback effects in the surface layers.

        • #
          crosspatch

Stratospheric injection wasn't a big deal in WWII, except some soot from airplane exhaust. The real problem was SO2 from coal burning, which caused a constant tropospheric haze that only started to clear in the 1970s and is probably occurring again due to industrialization in China, India, and Brazil.

  • #
    Brad

    Sonny,
    Is that supposed to be a supportive or sarcastic post? I am sure it tickled you regardless…:)

    • #
      Sonny

      Hi Brad,

      Can it be both? 😉

    • #
      James Bradley

      I like an analogy that describes a complicated process in simple terms.

      • #
        CC Squid

        Funny.

        Definition of analogy (n)
        a·nal·o·gy[ ə nálləjee ]
        comparison: a comparison between two things that are similar in some way, often used to help explain something or make it easier to understand
        synonyms: similarity · likeness · equivalence · parallel · correspondence · correlation

        • #
          James Bradley

          Many apologies, CC Squid,

          I like an analogy because it describes a complicated process in simple terms.

  • #
    Matty

If models are only tools, why do 97% of climate tools agree?

    • #

      Only 3% hit the nail?

    • #
      Lionell Griffith

      The similarity of their results proves nothing about the system they are said to simulate but it does prove something about themselves. They were programmed based upon similar assumptions so their output will by necessity be similar. All it takes is one significant non-similarity between the climate models and the actual climate system to falsify all of them.

      How about a significant change in atmospheric CO2 over the past nearly 18 years with no significant increase in temperature for just such a test. It appears that the climate models all fail without additional “tuning” to adjust for the discrepancy. Hence, they are invalidated as they stand and cannot be relied upon to forecast/predict/project/simulate the effect of CO2 within our actual climate system.

We have the failure of those who assert “the science is settled” compared to David's restart of the science from the top down. He, unlike the CAGW “team”, is exposing his thought process and the development of the ideas, states that the ideas still need testing, states that his models are a work in progress, and says he will soon release a working version in Excel. The CAGW team demands that we trust them and send them more of our money. David says “here is what I have done, please verify and/or find fault”.

      • #

        > He, unlike the CAGW “team”

        This is nonsense. The GCMs are documented, via scientific papers; that you’re unable or unwilling to read them is another matter.

        • #
          Rereke Whakaaro

          The GCMs are documented, via scientific papers;

          Not to the level where they can be reconstructed, and independently verified.

          Are you so naive, as to think that people, with access to the required hardware, haven’t tried?

          • #

            The hardware required is trivial; you can run HadCM3 on a desktop or even a laptop. But no: I don’t think people have tried. Just writing the bare dynamical core of a GCM is waay beyond you lot, let alone all the rest. The models are well enough described; what’s lacking is you and yours ability or desire to read the papers.

            • #
              the Griss

              The hardware required is trivial

              I always thought the supercomputers were just a show-off thing. 🙂

              Bet you’d luv to have one, The WC, just for your ego.

            • #
              Rereke Whakaaro

How many laptop computers have tape drives, I wonder? For that is the media that the raw input data is archived on. I would have thought you would have known that.

              20

            • #
              Greg Goodman

              I recall reading a couple of your papers from when you were modelling ocean currents for BAS.

              They generally documented your failure to get anything resembling the actual ocean currents IIRC.

              On that basis you declared yourself an expert on climate and took it upon yourself to be head of the thought police at Wikipedia, whilst conveniently forgetting to point out on your profile page that you were also a political candidate for the Green Party in Cambridge.

              Your lack of integrity in the past probably ensures that you will now be ignored even when you make a valid point.

              60

              • #
                michael hart

                Your lack of integrity in the past probably ensures that you will now be ignored even when you make a valid point.

                Correct, Greg. But I still read your replies even if I no longer read most of his comments.

                30

            • #

              Quote notorious WC “Just writing the bare dynamical core of a GCM is waay beyond you lot, let alone all the rest.”

              William, just what is a “bare dynamical core of a GCM”? Do you mean the line-by-line extinction coefficients, which have nothing to do with absorbing surface radiation? Or is your “dynamical core” the patches and band-aids applied to fake a contribution from CO2?

              10

              • #

                Notorious BIG, that’s me.

                Dynamical core is a basic concept in GCMs. I’m not surprised you don’t know what it is, because (as I’ve said repeatedly) none of you have a clue how GCMs are built. I am surprised that you asked, though – it’s good that you did.

                There’s a nice page about the new core for the UKMO model at http://www.metoffice.gov.uk/research/areas/dynamics/new-dynamics and there’s the core intercomparison project at https://www.earthsystemcog.org/projects/dcmip-2012/ Or there’s a paper describing the GFDL core: http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI3955.1. That’s one that the folk claiming GCMs aren’t documented might usefully try to read, though they’ll find it hard going.

                11

              • #
                Richard C (NZ)

                >”There’s a nice page about the new core for the UKMO model”

                Well, they do need to try something different:

                ‘Laughing Stock Met Office…2007 “Peer-Reviewed” Global Temperature Forecast A Staggering Failure’

                By P Gosselin on 24 June 2014

                Frank Bosse at Die kalte Sonne here puts the spotlight on a global warming forecast published by some British Met Office scientists in 2007. It appeared in Science here.

                The peer-reviewed paper was authored by Doug M. Smith and colleagues under the title: “Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model“.

                […]

                Now that 2007 is some years behind us, even Smith et al have realized their forecast was overinflated, and so they produced a new paper which appeared last year. The latest by Smith takes natural variability more into account and he is much more careful with prophecy-making. Still, the range of uncertainty the new paper offers makes it “more or less useless”.

                […]

                We’ll be revisiting Smith’s newest forecast in about 5 years time. In the meantime we have to ask ourselves if these people will ever learn. Science can take only so much damage.

                See more at: http://notrickszone.com/2014/06/24/laughing-stock-met-office-2007-peer-reviewed-global-temperature-forecast-a-staggering-failure/#sthash.o9CMLxUB.dpuf

                00

              • #
                Greg Goodman

                ” In the meantime we have to ask ourselves if these people will ever learn. Science can take only so much damage.”

                Well, the evidence is they are “learning”. The Met Office was the first and AFAIK still the only modelling group to publish a forecast with little warming.

                Their usual 10y forecast was replaced by a 5y one, indicating they are learning about the ability of their models.

                They now suggest temps will remain flat for the next five years.

                I think that is commendable, reasonable scientific behaviour.

                I was also impressed with the intelligent discussion I was able to have with John Kennedy. The blog format at Climate Etc made it a bit messy and hard to follow but it was an open and useful exchange.

                I did not get the impression I was conversing with a bigot or an alarmist intent on conning the world to “save the planet”.

                I’m sure there are and have been far less objective people working at the Met Office but I won’t digress into that here.

                So credit where it’s due, the Met Office is the first to come out and publish a realistic, non warming forecast, albeit so far rather short term.

                00

              • #
                Richard C (NZ)

                >”They now suggest temps will remain flat for the next five years.”

                No they don’t. Here’s their latest 2014 decadal forecast page:

                http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc

                In Figure 3, “flat” is at the bottom of their range.

                Middle of their range, about 0.25 C warming by 2019.

                Top of their range, about 0.5 C warming by 2019.

                00

              • #
                Greg Goodman

                Ok, thanks for that. It seems I was being unduly kind to Met Office about their predictions.

                A range of zero to 1.0 K/decade is so large you’d be hard pressed to be wrong.

                They seem to be trying to maintain the top limit of “getting back on course” to full CAGW, while hedging their bets on the low end.

                I suspect we’ll see more clearly defined cooling by 2019. I guess they’ll just say they were not far off.

                Maybe they will make another minor adjustment in their 2015 update.

                They can hardly admit they have been wrong, they’ll just slowly inch it around.

                00

          • #
            the Griss

            If David and other scientists predicting a DROP in temperature are right……

            … it’s going to be HILARIOUS watching the alarmistas trying to weasel their way around it.

            It’s funny enough watching their antics trying to show either that there is no levelling out of the GAT,….

            …. and/or trying to make up excuses for it.

            DIVERGENCE is not a friend to models !! 🙂

            51

            • #
              Rereke Whakaaro

              … it’s going to be HILARIOUS watching the alarmistas trying to weasel their way around it

              Fudge Factors usually work both ways. Let us not forget that in the 1970s the climate crew were getting the vapours over a coming ice age. It will be “back to plan A, lads”, for them, when plan B fails to deliver as expected.

              10

          • #

            I suspect that your bravado is absent of any real performance. You have no idea what you are saying or who you are saying it to. That you say such things demonstrates your ignorance. Your words are totally unresponsive to:

            The CAGW “team” demands that we trust them and send them more of our money. David says “here is what I have done, please verify and/or find fault”.

            Prove that you know the real meaning of the words you use. Share with us the documentation and source code for a significant software package you have written. Show us such things are not beyond your ability to produce something that works.

            30

            • #
              Mark D.

              I suspect that your bravado is absent of any real performance.

              Lionell, you only “suspect”? Remarkable understatement and restraint.

              20

              • #

                I wanted my post to be fit for mixed company so I held myself back. The full force of my thoughts on the matter might have burned a hole in the space-time continuum. However, I think my point was made.

                I predict that if WC responds, it will be consistent with his countless other comments. There will be nothing new, relevant, or even approaching what David has already posted. He uses words as weapons and not as tools of understanding and communication. He has not yet demonstrated that he knows how to program in any language to any degree. How to properly develop a complex system from the top down and make it actually work appears to be way beyond his feeble abilities.

                40

              • #

                My current work is commercial-in-confidence.

                You can see some old stuff at http://www.antarctica.ac.uk/met/wmc/grib-grid.html

                But all this is stupid rhetorical tricks: you only want to put up some barrier to what I’m saying, rather than actually thinking about it.

                My original point remains: the GCMs are documented; what’s lacking is your ability to read the doc.

                14

              • #
                Mark D.

                The accusation of “stupid rhetorical tricks” from a millwright of propaganda!

                Quite delicious. Cotton candy filler I might add, but delicious.

                20

              • #
                Lionell Griffith

                WOW! A Perl script of a few hundred lines, with vanishingly little internal and no external documentation. Any script kiddy could have done at least that much in a weekend. It doesn’t convince me that you really understand how to program anything beyond the boringly trivial.

                Are you only an empty braggart, as we think you are, or can you stand and deliver? That is something you have to prove. You are the one demanding respect on this list. EARN IT! We don’t have to prove a damn thing to you until you have.

                Now, how about something of substance? Something well in excess of 1,000 lines of executable code that does something more than the trivial. Something difficult enough that it actually requires a technical understanding of the development process, careful planning, rigorous design, and impeccable implementation. Have you done at least one that meets that requirement? Or is all you have done the scrambled code of the kind you offer in your example?

                31

    • #
      • #
        Rereke Whakaaro

        So? It has been obvious for some time that reality is biased. Didn’t you know that Reality is a skeptic?

        40

  • #
    the Griss

    The really big thing is what happens next..

    For some reason the GAT (Global Average Temperature) has been going nowhere since the 1998 El Niño.

    If the GAT is driven mainly by CO2, it must soon start to go upwards again. (Warm is good) 🙂

    If the GAT is driven mainly by solar, it will very soon start to drop, maybe quite quickly. (Cold is BAD) 🙁

    161

  • #
    the Griss

    Just realised, looking at Figure 2b, that the blue line is below the red line by pretty much the same amount as the “adjustments” that Steven Goddard points out on his blog.

    That really puts things into perspective. 🙂

    111

    • #
      CC Squid

      “Did NASA dramatically alter temperatures after 2000?” Ronald Bailey
      http://reason.com/blog/2014/06/23/did-nasanoaa-dramatically-alter-us-tempe

      “Via email, I asked Anthony Watts, proprietor of WattsUpWithThat, what he thinks of Goddard’s claims. He responded…

      …while it is true that NOAA does a tremendous amount of adjustment to the surface temperature record, the word “fabrication” implies that numbers are being plucked out of thin air in a nefarious way when it isn’t exactly the case.

      “Goddard” is wrong in his assertions of fabrication, but the fact is that NCDC isn’t paying attention to small details, and the entire process from B91’s to CONUS creates an inflated warming signal. We published a preliminary paper two years ago on this which you can read here: http://wattsupwiththat.com/2012/07/29/press-release-2/

      About half the warming in the USA is due to adjustments. We’ve received a lot of criticism for that paper, and we’ve spent two years reworking it and dealing with those criticisms. Our results are unchanged and will be published soon.

      50

      • #
        the Griss

        About half the warming in the USA is due to adjustments

        We know the USA has been “adjusted”.

        Also we know Australia has been heavily “adjusted”, particularly inland areas which, because of large contributory areas, gives them a large weighting in any Oz average.

        Also NZ.

        What about other places – Europe, Asia, Russia, China, South America, Africa, etc.? Has anyone seen any information on how much they get “adjusted” in GISS/Hadley GAT records?

        61

  • #
    Robert

    This is some pretty cool stuff. Even though it’s 3:30 in the afternoon here it is past my bedtime or I’d try and wrap my head around this. I’m just afraid that if I get started I’ll be in here until I have to go to work and I’ve tried the work without sleep thing before, it’s not my idea of fun.

    No matter what the naysayers come up with, you, along with the rest of us, have learned some things. Isn’t that what it is all about?

    190

    • #
      Yonniestone

      Good point Robert, I too look forward to a Renaissance in “learning some things” after this “medieval warmist period”.

      100

  • #
    Manfred

    All up, 504 atmospheric nuclear explosions occurred between 1945 and 1980.

    Think about the timing: At the peak of the sunspot cycle, while the sun is producing its maximum solar irradiation, it turns out that the Sun’s magnetic field is collapsing through its weakest moment. (Marvel at Figure 1 below.) The solar radiation only varies a little through the cycle, but the dynamo of the solar magnetic field is undergoing profound changes — flipping in polarity from North to South or back again. This causes the notch.

    I may be mistaken, but I have seen no comment yet regarding the combined local terrestrial effect of the EMP output from the nuclear detonations. Is it not conceivable that we might be looking at a locally induced, more immediate global temperature ‘notch’ in response to these EMPs – a cascading change to the atmosphere that leads to a small, brief sag in the temperature?

    Jo states:

    it’s radioactive too (a bit of a cosmic ray effect?)

    As I understand it, the cosmic ray effect amplifies as the solar wind declines, leading in turn to greater ‘seeding’ and cloud formation, thereby raising albedo and producing net cooling. An EMP from a nuclear detonation might conceivably profoundly alter local conditions in a manner that permits greater seeding, e.g. a strong EMP pulse that effectively reduces the solar wind by creating a brief, separate mini ionosphere?

    70

    • #
      Manfred

      After all, the Evans (Big) Notch coincides with stellar polarity reversal, implying an association of unknown relationship with diminishing TSI.
      Why not an effect on temperature that is orders of magnitude smaller and short-term, as the result of multiple local EMPs?

      I know it seems ‘out there’, but it appears as though it may be an effect being ignored (?), similar to the potential changes in planetary ionosphere induction related to stellar polarity changes?
      What are the implications for the behaviour of the ferromagnet that is the Earth’s iron core?

      30

  • #
    Andrew

    The “cold war” issue with the model

    Sorry if I’m totally wrong, but I can challenge a goldfish at times for memory span, so feel free to laugh.

    For the early years, sunspots were used as a proxy for TSI. I cannot remember if this changed for later years. If not, then there is an explanation for the overshoot in the years covered by the nuclear tests: issues with the sunspot count itself.

    Such as the Waldmeier overcount (which covers the same period) and many other reported errors, though some may be biased depending on who reports them.

    Force X is challenging. The Svensmark effect doesn’t work because, as I understand it and as charts associated with the theory show, as sunspot numbers rise, the effect (cosmic rays) falls, though there may be more clouds over the equator where ocean evaporation is greatest.

    Could the apparent albedo change noted at TSI max actually be an increase in energy absorption by the oceans, due to the spectral change of the energy produced by the Sun? As the Sun reaches max there is a change in the frequencies that it generates: an increase in certain UV wavelengths. UV is absorbed by the ocean far better, and to a greater depth, than longer wavelengths. UV is also absorbed by the upper atmosphere. There is a caveat that the UV cycle may not be fully in tune with the sunspot cycle, as one commenter at Tallbloke’s blog mentioned a few days ago.

    Again tear this apart if it is rubbish

    30

  • #
    dp

    The nuclear winter effect is one of those Gomer Pyle “shazam” moments. I was immediately put in mind of Einstein’s cosmological constant (he considered it his greatest blunder). It is plausible but too convenient – not quite like saying “and here a miracle occurs”, but close. Going to have to think about these known unknowns.

    I’m thoroughly enjoying the series, btw.

    30

    • #
      the Griss

      I still think it should be shifted backwards a bit to take into account all the crap spewed into the atmosphere during WWII.

      10

      • #
        Yonniestone

        Taking into account the bombs exploded in WW2 alone (sorry, quickest link) https://au.answers.yahoo.com/question/index?qid=20110302135357AA9I8CS – that’s a lot of crap spewed into the atmosphere. 🙂

        20

        • #
          the Griss

          And all the fires, particularly chemical factories etc. LOTS of crap. !

          20

        • #
          the Griss

          And of course the manufacturing surge and real particulate pollution.

          20

          • #
            dp

            Sounds like we should decarbonize the economy. We could start by outlawing coal… or maybe outlawing war.

            21

            • #
              the Griss

              These particulate and aerosols would have a cooling effect.

              And yes, we should AND ARE regulating particulate matter, just so long as we don’t start doing something REALLY STUPID like decarbonizing the world.

              All life is based almost totally on carbon compounds. Only a bunch of total MORONS would try to limit CO2.

              War, well, what can you do.. They are also MORONS.

              40

            • #
              the Griss

              Particulate matter is dangerous.. that is why we should NOT be forcing up electricity prices with expensive, irregular wind and solar.

              This forces people into burning wood (it’s already happening in Germany and the UK), releasing a lot of uncontrolled REAL pollution.

              A MUCH BETTER OPTION is the use of coal and gas in highly controlled modern power stations, where basically the only outputs are H2O and the molecule of life, CO2.

              40

          • #
            FIN

            Funny, I thought you guys didn’t believe that the human addition of “tiny” amounts of CO2 could influence the huge global system. Course not, except if it fits the argument you’re attempting to make.

            020

            • #
              Yonniestone

              Funny how you don’t comprehend what is printed in front of you, tiny amounts of CO2 could have a tiny influence on the huge global system.

              If it’s still a problem I can start with “Dick and Jane”

              100

            • #
              the Griss

              The operative word is TINY.. Thanks for emphasising that point. 🙂

              Are you actually starting to think and learn ? really ????

              30

          • #
            Yonniestone

            True for particulate pollution as wet scrubber systems for smokestacks started being used in the 1970’s initially for removal of sulphur emissions in coal power plants.

            TonyfromOz where are you?

            30

      • #
        ColA

        Hmm, not really sure that this is justified – David said they considered the nuclear bombs because they put considerable amounts of very fine particles very, very high into the atmosphere. While the war did generate huge amounts of dust, I am not sure it got anywhere near as high up, and it was therefore not in place to have the same effect. Although I do think about Asian forest fires getting into the jet stream, hmmm??

        20

        • #
          Yonniestone

          Good question: how high do certain particulates get, and can they sustain that height in that sphere?

          Are we referring to the magnetosphere mostly?

          10

        • #
          Rereke Whakaaro

          I think David is probably correct.

          The atmospheric winds form into layers. I don’t understand why or how; it is just something I was taught when learning that water vapour forms different types of clouds in different layers.

          Wind speeds are high at altitude, and the wind flows in different directions in different layers, so dust gets held up by the turbulence and cannot easily move between layers.

          The blast and radiation from a nuclear explosion has sufficient energy to shred the junctions between the air layers, and that is why the dust gets up there, and why it takes so long to get down again. I guess we are talking 50,000 or 60,000 feet and higher? Not a lot of gravity up there to pull the dust back down again.

          10

  • #
    Ilma

    It’s late where I am (a different time-zone than home) but a couple of thoughts are wandering through my tired head. Is there anything in considering the interplay between the nuclear test ‘fallout’ in the stratosphere and its recent temperature trends? Sorry if not quite on topic, but strat temps and trends are another CO2’ist ‘hot’ topic.

    20

  • #
    the Griss

    That peak around 1992 is interesting as well.

    China’s rapid growth and output of aerosols might account for there not being one in the temperature record ??

    30

  • #
    Eddie

    What are all those blood suckers doing hanging around these posts being as good as gold, like they know there’s a feast coming. Aren’t they afraid when the sunlight hits of being scorched to a frazzle ?

    30

  • #
    PeterS

    All this work proves that the AGW propagandists in the scientific community have failed in their duty to conduct real research into climate change. As a consequence they should be disqualified from holding a science position, but of course it won’t happen for a variety of reasons. Perhaps one day.

    41

  • #
    Joe V.

    Forgive me for not keeping up. Is this notch-delay filter anything like the predictive cancellers used to get rid of echo on telephone lines? I built one of those with FIR filters about 30 years ago.

    30

    • #
      David Evans

      No, that’s a different technology, usually using adaptive control I understand — which is active and much more complicated.

      A common use of notch filters is to remove mains hum (50 or 60 Hz) from audio equipment. It just takes a handful of resistors, capacitors, and inductors.

      42

      • #
        Bernie Hutchins

        By way of clarification for us audio buffs, as long as you bring it up:

        Sorry, but a notch filter is not used to remove “mains hum (50 or 60 Hz) from audio equipment” – at least not successfully (learned the hard way, of course). Here’s why:

        If you had 50 or 60 Hz in your audio, it would have to be VERY large for you to even notice it. The famous “Fletcher–Munson” curves show a rapidly declining sensitivity at the low-frequency end. (Your ears have SOME response down to 15 Hz.) It is, rather, the second harmonic (100 or 120 Hz) that constitutes “hum”, and it is more commonly a grounding issue. Also remember that it is the second harmonic that leaks residually (ripple) through a full-wave rectified power supply. And you certainly wouldn’t want to use inductors (heavy, bulky) in a notch at that low a frequency, but rather some active filter. I wrote an Audio Eng. Soc. paper in 1982 (on my Electronotes website) on an “Adaptive Delay Comb Filter” – not related to matters here!

        20
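Bernie’s point is easy to demonstrate numerically: a notch tuned to the 50 Hz fundamental does essentially nothing to the 100 Hz second harmonic that actually constitutes audible hum. A minimal Python sketch, using synthetic signals and textbook biquad notch coefficients (none of this comes from the post itself):

```python
import math

def notch_coeffs(f0, fs, q):
    # Standard biquad notch: zeros exactly on the unit circle at f0,
    # so gain is 0 at f0 and close to 1 elsewhere.
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def biquad(b, a, xs):
    # Direct-form-I difference equation.
    ys = []
    x1 = x2 = y1 = y2 = 0.0
    for x in xs:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        ys.append(y)
        x2, x1 = x1, x
        y2, y1 = y1, y
    return ys

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

fs = 1000.0                               # sample rate, Hz
b, a = notch_coeffs(50.0, fs, q=30.0)     # notch the 50 Hz fundamental

t = [n / fs for n in range(5000)]
fund = [math.sin(2 * math.pi * 50.0 * tc) for tc in t]   # 50 Hz "hum"
harm = [math.sin(2 * math.pi * 100.0 * tc) for tc in t]  # 100 Hz second harmonic

# Discard the first 2 s so the filter transient has died away.
out_fund = biquad(b, a, fund)[2000:]
out_harm = biquad(b, a, harm)[2000:]
```

A unit sine has an RMS of about 0.707: after filtering, the 50 Hz tone is almost entirely removed while the 100 Hz harmonic passes essentially unchanged, which is exactly why notching the fundamental does not cure audible hum.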

  • #
    Eliza

    Trouble with all the BIG NEWS postings is that the global temperature anomalies you are apparently using are probably drivel if based on any of the major organizations – NOAA, GISS, BEST, etc. (refer to Steven Goddard’s site about data manipulations). Have you tried the model against CET or Armagh (probably the only truly representative surface records)? (BTW, I stand to be corrected on the assumption that you are in fact using the temperature records mentioned above.) It is quite likely/possible that there has NOT been an increase of 0.8 C since 1880. Updated CET shows 0 C (May 2014).

    11

    • #

      Eliza, We are very aware of how biased the adjustments seem. You are right, it is a real issue. The model could be in good form, but still produce the wrong predictions because the data is so questionable both for sunspots and temperature.

      CET and Armagh are not global data sets. What choice do we have? FWIW – the turning points seem to survive the adjustments, it’s just the trends that shift. 🙁

      It certainly will affect the accuracy of predictions (though if those results are adjusted too… Sigh).

      See my comment above, there is no global raw dataset. The closest we have is what Frank Lansner did. http://joannenova.com.au/2011/10/messages-from-the-global-raw-rural-data-warnings-gotchas-and-tree-ring-divergence-explained/

      Suggestions welcome. But in the end, the ocean data is so sparse and so short, there is only so much we can do.

      60

      • #
        Greg Goodman

        ICOADS SST is fairly ‘raw’. It almost certainly will have some measurement biases, particularly in the early record when coverage was sparse. That may be better than adding a second lot of biases called “bias corrections”.

        HISTALP also has some very long records, but the long-term variation is more adjustment than real data. They are very secretive with the real data and want stupid money for the costs of “extracting” the raw data and signing a non-disclosure agreement.

        Once the data has been adjusted for political correctness, it is freely downloadable.

        I doubt there is any data that has not been manipulated on the scale of 50y trends. I think the only hope of getting any information about climate is to look at <20y variability.

        Having said that, the whole process of demonstrating that a model can be produced without CO2 to explain even the manipulated data is interesting.

        00
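Greg’s suggestion of looking only at sub-20-year variability amounts to subtracting a centred ~21-year moving average from a series: the slow component (trends and long-term adjustments) is removed while shorter cycles survive largely intact. A minimal sketch, using an entirely made-up annual series (a linear trend plus an 11-year cycle) purely for illustration:

```python
import math

def moving_average(xs, window):
    # Centred boxcar average; the window is truncated near the ends.
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def highpass(xs, window=21):
    # Residual after removing the slow (>~20 y) component.
    trend = moving_average(xs, window)
    return [x - t for x, t in zip(xs, trend)]

# Synthetic annual series: 0.7 C/century trend + 0.1 C-amplitude 11-year cycle.
years = list(range(1900, 2014))
series = [0.007 * (y - 1900) + 0.1 * math.sin(2 * math.pi * (y - 1900) / 11)
          for y in years]
resid = highpass(series, window=21)
```

Away from the endpoints the residual keeps the ~0.1 C cycle while the linear trend cancels almost exactly, so sub-20-year features can be compared even across datasets whose long-term trends disagree.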

  • #
    Sparks

    I’m not convinced, there is something about the “hind-cast” matching 15C +/- (high end) natural variability that does not add up.

    If carbon dioxide in Earth’s atmosphere can only warm it according to the amount there is of it,

    And if carbon dioxide in Earth’s atmosphere is being used as a proxy for the temperature record above, why should it match a model between Earth and solar activity?

    Why would this model equate to known flaws? Puzzled!

    14

  • #
    Bob Weber

    Really, really good point Stephen Wilde made at 12:18am. I think the same exactly. The Sun drives climate. The Sun drives weather. The Sun causes warming, cooling, and extreme weather events. The Sun is going to drive the warmists batty as SC24 winds down. It’s time to drive that point home all across the world.

    60

    • #
      King Geo

      Yes you are right Bob Weber – the “warmists” will feel like they are in a “weber” once “judgement day” comes and that day is not far off. It has always been the Sun controlling Earth’s Climate for 4.5 billion years now. But many “Johnny come lately” Homo Sapiens born in the last 100 years can’t seem to grasp this reality. I guess they are exposing their low IQ which is embarrassing if they are scientists who should know better – clearly they didn’t study geology.

      30

  • #
    realist

    David, great work, and to be applauded for the courage to be transparent while acknowledging you don’t have all the answers, in sharp contrast to those who claim they do but can’t back up their claims with clear evidence that they have everything pinned down (when no one does). It also adds a different dimension to consideration of the deficiencies inherent in IPCC models and the adage: the degree of relevance going in is proportional to the limitations on application coming out. I look forward to your next installment.

    I find the nuclear effect interesting from the perspective of dust and other nuclei in the atmosphere, and what degree of effect it has on climate variability. Is a variable influence on atmosphere (climate) dynamics arising from dust and gaseous aerosols important, irrespective of whether the origin is anthropogenic (nuclear bombs, industrial emissions, agricultural land erosion, “carbon” gases and particulates, etc.) or natural (cosmic dust, biological origins, etc.)?

    In an holistic perspective, is the particulate/aerosol component a complex and continuously variable source of nuclei for cloud formation, and therefore an influence on albedo and temperature? If so, would that have such a wide degree of variability on geographic, latitude and vertical stratification (let alone time) scales as to make it somewhat difficult to model (or is there simply insufficient data available and/or on a long enough time scale to be useful)?

    Is the above important insofar as the algorithms in your model, or potential expansion thereof, are concerned, or is it relatively irrelevant? I might just have my head too far in the metaphorical clouds (or, as my children might say, “dream on, stick to what you know best”)?

    30

  • #
    Eliza

    Jo Thanks for that: There is a HUGE argument going on between certain skeptical sites over temperature adjustments (basically fraud). My bet is on SG… another reason why we cannot use the global datasets (in this case USA)
    just posted
    https://stevengoddard.wordpress.com/2014/06/25/precision-vs-accuracy/

    30

    • #
      Brad

      SG’s blog is apparently being debunked by Anthony Watts, and SG won’t admit error.
      Just got a link to it today but can’t find it now…help?

      10

      • #
        Ross

        Brad

        I think Anthony is just arguing over the words used by Steve. There was a similar debate on this site a couple of days ago when someone referred to the corruption of the data – a poster objected to the word corruption when technically “tampered” or “adjusted” was a better word.
        I’m with Steve on the issue – all he does is some good investigative work and produces some useful “animated” graphs to clearly show the effect of the changes. Very simple; he doesn’t pretend it is highly scientific work – just basic facts. The NOAA etc. forget there are people who keep old data from the official websites, or who are clever enough to recover the older pages.

        30

  • #
    Eliza

    Jo, re CET and Armagh: the argument is not over global temperature sets. The fact is they probably represent true anomaly changes, even if local, due to the central British and Irish climates (rural, extremely stable, cloudy and wet, mainly with small temperature variations). Basically both show zero change since 1640 (to this date).

    10

  • #
    Richard C (NZ)

    >”The step response of the notch-delay solar model takes about 15+ years to fully respond”

    >”The most important element of the solar model is the delay, which is most likely 11 years (but definitely between 10 and 20 years)”

    Definitely the delay is the most important element. I favour a delay central to the 10 – 20 yr range +/- several years for reasons laid out elsewhere and not for discussion in this thread by me. That’s ongoing in Parts II and IV.

    Thing is, there’s an all-important solar-temperature delay.

    Skeptical Science (and “scientists” apparently) don’t understand this on their contra-solar page:

    http://www.skepticalscience.com/solar-activity-sunspots-global-warming.htm

    “Over the last 35 years the sun has shown a slight cooling trend. However global temperatures have been increasing. Since the sun and climate are going in opposite directions scientists conclude the sun cannot be the cause of recent global warming.

    The only way to blame the sun for the current rise in temperatures is by cherry picking the data. This is done by showing only past periods when sun and climate move together and ignoring the last few decades when the two are moving in opposite directions.”

    No, not “ignored”, understood. See SkS Temperature vs Solar graph:

    http://www.skepticalscience.com/graphics/Solar_vs_temp_500.jpg

    Shift the solar series 14 years to the right. This places the 1986 bicentennial peak in TSI minima at 2000.

    No longer are the sun and climate “going in opposite directions”.

    But atmosphere leads OHC by a country mile:

    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content55-07.png

    The solar-ocean lag is another 14 years or so on from the (apparent) atmospheric response starting around 2000. “Apparent” because there are other factors that distort the picture.

    Therefore, the atmosphere is only now “seeing” peak OHC, 28 years after so-called 1986 “peak” TSI (a misnomer).

    The actual solar Grand Maximum activity peak spans SCs 17 – 23 (1933 – 2008) i.e. 2014 is only 6 years past the end of peak solar activity but the effect of that activity via the oceanic heat sink will still be “seen” by the atmosphere for another 8 years or so.

    List of solar cycles

    http://en.wikipedia.org/wiki/List_of_solar_cycles

    I think this solar delay model is on the right track irrespective of arguments (well, mine anyway) over the nature of the delay. Whether David has captured oceanic lag sufficiently or not will be known over the next 3 years or so but that’s not for this thread either.

    50

    • #
      Richard C (NZ)

      Should be:

      “That’s ongoing in Parts II and [VI]”

      10

    • #
      Mark D.

      RichardNZ

      The SKS statement:

      “Over the last 35 years the sun has shown a slight cooling trend”

      is, in my opinion, seriously misleading. TSI has been pulsing at a progressively increasing amplitude that puts even recent peaks way over the average. Their statement is, I suppose, in regard to “average TSI”; however, if there is a delayed effect, the peak pulses of TSI are still quite high.

      See: http://www.climate4you.com/images/SolarIrradianceReconstructedSince1610.gif

      It would seem to me that the Earth’s natural climate-moderating “systems” would have a hard time shedding the effect of the pulses. And remember, the negative portion of the TSI pulses is not “cooling” as SkS asserts; those troughs are still much higher than at any other time in the reconstruction, especially post-1900.

      Warmists are in denial about TSI, and this from SkS:

      “Since the sun and climate are going in opposite directions scientists conclude the sun cannot be the cause of recent global warming.”

      is probably just not correct.

      20

      • #
        Richard C (NZ)

        Mark #44.2 re SkS solar vs temperature

        Good observations.

        >”The TSI has been pulsing in a progressively increasing amount that puts even recent peaks way over the average.”

        Yes. And within those pulses are much shorter pulses. A rough analogy is the heating effect of mains electricity at 50/60 Hz, with a per-minute oscillation on top of it analogous to the 11-year periodicity. But the heating effect causing the temperature rise in, say, water in a pot on a stove element is due to the level of power applied, i.e. how far you turn up the element dial.

        From your reconstruction graph, the solar dial was turned up from 4.6 (1364.6 W/m2) to 6.1 (1366.1 W/m2) over the period 1900 to 1960. And the high level (6.1) was maintained until about 2005, when it was turned down only slightly (“slight cooling trend” – SkS) to 6, say. Still way above 4.6.

        >Their statement is I suppose in regard to “average TSI”

        Yes but as above, the reduction in TSI has only been “slight” (in SKS terminology), see the average TSI reduction SC 22 to SC 23 here:

        http://nextgrandminimum.files.wordpress.com/2012/11/figure-2-tsi-variations.png

        0.17 W/m2 reduction in average TSI 1986 to 2008.

        >”however, if there is a delay in effect, the peak pulses of TSI is still quite high.”

        Very high as above, even without the delay. The peak pulse early 2014 is relatively weaker though. We don’t yet know how much weaker in terms of TSI but in terms of the 10.7cm Solar Radio Flux index, the SC 24 peak is 27% weaker than SC 23 peak on monthly average of F10.7:

        SC 23, 2001.12, 235.1
        SC 24, 2014.02, 170.3

        See data links here:

        http://www.climateconversation.wordshine.co.nz/2014/06/david-evans-devises-solar-model-to-tame-climate-chaos/#comment-840859

        >”those troughs are still much higher than any other time in the reconstruction and especially post 1900″

        Yes, exactly. And the troughs (minima) are the bicentennial trend. Elsewhere in Parts II and VI I’m trying to demonstrate that the 1955 to 2014 trend in ocean heat accumulation (OHC) corresponds to the bicentennial trend (and deVries cycle) but lagged about 6 decades from when TSI first reached maximum levels in the late 1950s.

        Thus I’m disputing David’s 5 yr time constant for ocean lag.

        Links to that argument downthread here:

        http://joannenova.com.au/2014/06/big-news-part-vii-hindcasting-with-the-solar-model/#comment-1495094

        0 – 700m OHC here:

        http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content55-07.png

        10

  • #
    Eliza

    Brad, I only believe in data; as a scientist and statistician, SG is correct. The data has been totally manipulated. Have a look: https://stevengoddard.wordpress.com/2014/06/25/precision-vs-accuracy/

    41

  • #
    Eliza

    Anthony has been too much of a softie. Finally the real truth is coming out. He is probably too “politically minded” to handle it. Cannot blame him BTW LOL

    30

  • #
    Sparks

    I agree it is solar activity that runs Earth’s climate, Earth’s interglacial periods and ice ages. Minor warm and cold periods on a decadal scale do correlate with the sun, and the model is spot-on in showing empirically how the sun’s variability has the potential to do this.

    It’s a brilliant concept that works.

    60

  • #
    Dry Liberal

    Hi Guys,

    Just trying to get my head around Dr Evan’s work. Here’s my understanding of what he’s done. Let me know if any of this is incorrect.

    1. Assume as a starting point that ”if the recent global warming was associated almost entirely with solar radiation, and had no dependence on CO2, what solar model would account for it?”. In other words, reduce a system with numerous input variables to a system based on the input of a single variable (TSI).

    2. Run the model over the time period of interest.

    3. Compare the model output with the observed global surface temperature record.

    Am I right so far?

    From what I can see, the output of the model of the hypothetical system and the observed record don’t correlate very well. It is then postulated that the poor correlation is due to an unidentified component in the actual (real) system, hence the need to introduce something called “Force X”.

    Am I still right?

    My question then, is this: isn’t it more likely that the non-correlation is due to the omission of the other variable/s from the original model rather than an unknown component in the system?

    I guess what it boils down to is that something is missing from either the model or from the understanding of the real system.

    To me, for the model to be plausible more needs to be explained about “Force X” – what is it, how does it work, etc.

    72

    • #

      The association between TSI and Temperatures is not a “non-correlation” at all – it’s been shown by others in different ways to be a delayed correlation, by about 1 cycle or 11 years, which is also what the notching effect in the transfer function suggests. See the references and this post. That’s a fairly suggestive pointer that there is some other force coming off the sun 11 years after the TSI changes. That force may be magnetic, electric field, UV, solar wind…

      I should add, thanks for doing the work to understand this. It is not simple.

      70

      • #
        Dry Liberal

        In the first part of my question/s I used the term “poor correlation” – sorry for using “non-correlation” in the second part. What I meant was that there is not a direct correlation between the two quantities (it’s delayed as you say). If I post further questions I’ll refer to this as “delayed correlation” as you’ve suggested.

        Also, I’m curious as to why I get a “thumbs down” for merely trying to understand the hypothesis?

        110

        • #
          Rereke Whakaaro

          Some of the trolling kiddies give thumbs down out of spite. Don’t take it personally. I never do.

          20

        • #
          the Griss

          Badge of honour!.. Means you have annoyed a troll! 🙂

          I actually have my own troll followers that nearly always give me a red thumb. 🙂 🙂

          And we know the likes of the WC, and a couple of others, ONLY come here for the red thumbs they get…

          …after all, they have no other purpose here.

          20

        • #
          PhilJourdan

          The onsies and twosies like that mean you are irritating the alarmist trolls. Take it with a grain of pride that you actually are interested in learning, and not repeating their tiresome bleat.

          10

    • #

      Dry Liberal,

      I suppose that, in effect, the existing GCMs have always implicitly accepted David and Jo’s notch and proposed that force X is CO2, but since there has been no correlation with CO2 levels for the past 18 years, that didn’t help much.

      Against that, the climate record obtained for thousands of years via direct temperature measurements plus proxy sources does show a tantalising but imperfect correlation between solar activity and temperatures on time scales involving multiple solar cycles.

      To get a more acceptable fit one just needs to throw the effect of a single solar cycle out of the window (the notch) and work out why a single cycle has little or no effect.

      In my view the effect of a single cycle is swamped by internal system variability AND the delay involved in the oceanic response to global cloudiness changes.

      So, force x (IMHO) is the change in the mix of solar wavelengths and/or particles affecting the vertical structure of the atmosphere so as to affect cloudiness in the way I have described elsewhere.

      The delay then occurs in the ocean response.

      I realise that goes a couple of steps beyond David and Jo’s model which identifies the notch but not the cause, hence force ‘x’ but, so far, I think my proposal is the best currently on the table.

      The thing is, you can’t argue against David and Jo’s model by just reinserting the assumed thermal effect of CO2 as force x. That approach has now failed spectacularly.

      42

  • #
    Bulldust

    Apparently the opening quote is from George E. P. Box:

    http://en.wikiquote.org/wiki/George_E._P._Box

    20

  • #
    Peter Kemmis

    A little typo above Figure 3, which I think everyone adjusted for:

    “Over the period of better TSI data from 1610, the TSI was clearly at a maximum from about 1950 to 2000.”

    10

  • #
    • #
      the Griss

      I wonder if Gore is helping Palmer out of some debt issues. !!

      60

      • #
        Bulldust

        Suddenly had a flashback to season one of Game of Thrones (season one spoiler alert)… King Robert gets gored by a boar… is King Palmer about to be bored by a Gore?

        80

    • #
      Yonniestone

      If you lie down with dogs you get up with fleas, thanks for showing your true colors Palmer [Snip! – keep it civil -Fly]!

      70

    • #
      Popeye26

      ROTFLMAO!!

      It’s a marriage made in heaven.

      Ah – the Blimp and the B..ls..t artist together espousing the worldwide need for an ETS.

      I believe they were already feeling the Gore effect in Canberra as the news was going to air. (freezing cold and beginning to snow hahahaha – all the skiers thanking Al for coming down under).

      Cheers,

      40

  • #
    ianl8888

    The “nuclear winter” 1950-1965 or so is definitely a kludge

    But, as Judith Curry pointed out a few weeks ago, the CO2-only GCM models are at long-standing odds about that period too. There is no convincing hypothesis as yet on this period

    20

    • #
      ianl8888

      Dumb of me

      I had meant to add that, according to Evans in the preamble to this post, his soon-to-be-released Excel model allows any ratio of the various “forcings” to be plugged in and run:
      CO2 80%/TSI 20% and so on for the problematic 1950/60’s, then on for predictive capacity

      Not convinced, but playing with the XLSX spreadsheet for prediction could be fun

      Personally, my current thinking is that climate is a non-linear, coupled mix of a large number of elements, producing a chaos system beyond our Navier-Stokes resolvable limits

      10

  • #
    Richard C (NZ)

    >”the finding of Lockwood & Froehlich in 2007, who showed that four solar indicators including TSI peaked in about 1986 then declined slightly.”

    Yes, sort of. This interpretation of L&F07 disputed in detail, Part VI here:

    http://joannenova.com.au/2014/06/big-news-part-vi-building-a-new-solar-climate-model-with-the-notch-filter/#comment-1494921

    There were 2 peaks, the first and higher level in 1958ish, the second and lower level around 1987ish in terms of “the open solar flux FS from geomagnetic activity data” – L&F07.

    1986 was a minimum of SC activity at the end of SC 21. The 1986 peak David refers to was only found using PMOD data from 1976, and by smoothing out the solar cycle variations. There was no actual TSI “peak” in 1986 (see the Part VI link above).

    A similar TSI “peak” at 1986 is found using a line tracing each SC minimum (the bicentennial trend). How that trend relates to 2014 and beyond is graphed here:

    http://nextgrandminimum.files.wordpress.com/2012/11/figure-2-tsi-variations.png

    10

  • #
    Dry Liberal

    Hi Guys,

    I’m still trying to get my head around Dr Evan’s work. Here’s my understanding so far.

    Essentially, he’s compared two datasets – TSI and global temperature – in the frequency domain. TSI demonstrates a clear sinusoid (the well-known 11-year solar cycle), while the global temperature record shows no corresponding sinusoid, meaning that there’s no matching cyclic pattern detectable in the data.

    A transfer function is then derived.

    The transfer function is basically a mathematical construct that describes how one dataset (an input) can be converted to another (an output).
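    As a rough sketch of the idea (synthetic series, illustrative only – not Dr Evans’s actual computation), the empirical transfer function is just the ratio of the output spectrum to the input spectrum:

```python
import numpy as np

def empirical_transfer_function(inp, out, dt=1.0):
    """Ratio of output to input spectra, H(f) = FFT(out) / FFT(in):
    the linear filter that would turn 'inp' into 'out'."""
    freqs = np.fft.rfftfreq(len(inp), d=dt)  # cycles per time unit
    H = np.fft.rfft(out) / np.fft.rfft(inp)
    return freqs, H

# Toy check: if the output is just the input delayed (circularly) by
# 11 samples, |H| = 1 at every frequency and the phase encodes the delay.
x = np.random.default_rng(1).standard_normal(256)
y = np.roll(x, 11)
freqs, H = empirical_transfer_function(x, y)
```

    On real data, the “notch” claim corresponds to |H| dipping at the 11-year frequency when TSI and temperature series are used in place of x and y.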

    It’s then proposed that there is a “Force X” which smoothes or removes the cyclic pattern that should be present in the Global Temperature record.

    What I don’t get is, isn’t it more likely that what you’ve actually shown is that there is no correlation between the two datasets?

    What am I missing here?

    50

    • #
      GabrielHBay

      As I understand it, the point is that by logical deduction there has to be at least a discernible correlation. Thus the search for why ‘the dog does not bark’. A very smart light-bulb moment…

      30

    • #

      Dry Liberal

      It is a graphical representation of the logically obvious.

      We know that a change in TSI should change temperature but on an 11 year time scale it doesn’t.

      So there has to be something offsetting the effect and that something only delays, it doesn’t negate, so the TSI signal then turns up for longer timescales after allowing for the delay. The delay is spread over a period of 3 to 15 years but centres on 11 years which happens to be a single solar cycle.

      That something is then labelled force x until it can be identified.

      The advantage of being able to represent the situation in graphical form is to enable one to play around with the scale and timing of different climate parameters in order to narrow down the nature of force x. We are looking for a physical process that varies in such a way that it produces the necessary pattern. I understand that the method may already cast doubt on cosmic rays as force x.

      Previously, people have just said ‘no correlation’ and left it at that, but that obstructs recognition that there is an underlying relationship between TSI, temperature and another element or elements of the climate system, all operating in a complex interaction.

      The ‘model’ proposed by David allows work to be started on the analysis process in a step by step logical manner. It separates out the ‘signal’ of the missing climate driver (or drivers) in graphical form to give us a start for figuring out what could cause the delayed offset against the known influence of TSI variations.

      Does that make sense?

      32

      • #
        Dry Liberal

        We know that a change in TSI should change temperature

        there has to be at least a discernable correlation

        I don’t see why these two statements are necessarily true – what evidence is there that TSI has the effect that is proposed by Dr Evans? What if the effect of changes in TSI is too small to affect global temperature in the manner suggested (Dr Evans suggests that there should be a “corresponding peak in the temperature”)?

        Also, I think there is a logical flaw in the discussion of the concept.

        In “Big News Part II” it is stated that “The peaks only last for a year or two, so the low pass filter in the climate system would reduce the temperature peak to somewhat below 0.1°C.”

        I don’t understand why a low pass filter is mentioned here – you’re actually talking about the characteristics of the transfer function prior to its presentation in Section 3. Is that right? Or is the LPF mentioned in Section 2 different from the transfer function that “is fairly flat, except for the notch around 11 years, and hints of a fall off at the higher frequencies”? It’s not clear whether these two references are to the same thing or different things. This needs to be clarified.


        31

        • #

          Changing TSI must affect temperature as per the S-B equation. Hence there should be a discernible correlation between TSI and temperature even if very small.
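          For concreteness, a back-of-the-envelope sketch of the bare S-B balance (nominal TSI and albedo values assumed; no feedbacks, no delay – this is only the textbook calculation, not David’s model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temp(tsi, albedo=0.3):
    """Equilibrium effective temperature from the S-B balance:
    (1 - albedo) * tsi / 4 = SIGMA * T**4."""
    absorbed = (1.0 - albedo) * tsi / 4.0  # W/m^2 absorbed per unit area
    return (absorbed / SIGMA) ** 0.25

# No-feedback response to a 1.5 W/m^2 change about a nominal 1361 W/m^2 TSI:
dT = effective_temp(1361.0 + 1.5) - effective_temp(1361.0)
```

          That gives a direct response of only about 0.07 °C, which is why the correlation, though it must exist, is expected to be very small.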

          The issue then is as to how far the presence of the atmosphere and all the internal system variables cause surface temperature to vary from S-B.

          I think the rest of your post arises from a problem you seem to have with following David’s language and concepts so I won’t go further at this stage.

          20

          • #
            Richard C (NZ)

            >”Changing TSI must affect temperature as per the S-B equation. Hence there should be a discernible correlation between TSI and temperature even if very small.”

            Yes, the very faint and very minor “fast” response. The fast temperature response to the approx 11 year solar cycle identified all over the globe by Coughlan and Tung (2004). Subsequently by Zhou and Tung.

            As pointed out previously several times.

            00

            • #
              Greg Goodman

              There is a long-term correlation SSN/SST – not huge, but well into statistical significance.

              http://climategrog.wordpress.com/?attachment_id=958

              The peak is at a lag of about 10y. However, correlation does not prove causation.

              To expect a direct correlation, the oceans would need to equilibrate in under 10y. Not sure that is realistic.

              In one of the earlier posts D. Evans looked at stationarity arguments and decided it should be dT/dt.

              This is partly correct, but neither is d/dt(SST) the full story. If there is a relaxation response, a convolution with an exponential is required. That can then be tested for correlation.
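              Such a relaxation response can be sketched in a few lines (illustrative values only; tau = 5 y is merely the figure discussed downthread, and the step input is a placeholder for a real forcing series):

```python
import numpy as np

def relaxation_response(forcing, tau, dt=1.0):
    """Causal convolution of a forcing series with a normalised
    exponential-decay kernel exp(-t/tau): a single-reservoir relaxation."""
    t = np.arange(len(forcing)) * dt
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                      # unit gain at zero frequency
    # 'full' convolution truncated to the input length stays causal:
    return np.convolve(forcing, kernel)[:len(forcing)]

# A step in forcing relaxes toward the new level with time constant tau:
step = np.concatenate([np.zeros(50), np.ones(100)])
resp = relaxation_response(step, tau=5.0)
```

              The output can then be correlated against SST in the usual way; different tau values change both the smoothing and the apparent lag.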

              00

            • #
              Greg Goodman

              Here’s what such a relaxation response would look like.

              http://climategrog.wordpress.com/?attachment_id=975

              Could account for a lot of the long term variability.

              This kind of weighted integration has the properties of a low-pass filter (like all integration) and also produces a shift. The tau=5y used here is approximately right for the shift; 10y is visibly too long. I did not spend too long optimising 😉

              Maybe this would be a better way to achieve a similar result to notch-delay: it has a simple physical meaning and avoids the non-physical, non-causal notch problems.

              00

        • #
          Richard C (NZ)

          Dry Liberal

          >”I don’t see why these two statements are necessarily true – what evidence is there that TSI has the effect that is proposed by Dr Evans?”

          Yes, I’m disputing that too back in Part II up and down from about here:

          http://joannenova.com.au/2014/06/big-news-part-ii-for-the-first-time-a-mysterious-notch-filter-found-in-the-climate/#comment-1494855

          And also in Part VI, see the link to that in the Part II comment just linked.

          I cite and quote Raspopov et al (2008), that states “climate response to external long-term solar forcing, including the 200-year variation, differs in different regions of the Earth“.

          “Long-term” is >40 years.

          And Waple et al. (2002): “the same solar irradiance variations lead to both positive and negative temperature responses in different regions.”

          If this is the effect at >40 years should we expect anything different at <40 years?

          But discussion in Part II and VI I think.

          00

        • #

          The TSI is detectable as temperature fluctuation

          00

        • #

          Oops! The TSI is detectable as temperature fluctuation in the lower stratosphere, and 20 metres below ground! Only near the surface is it undetectable!

          00

    • #
      kim

      DL, consider Xeno: The likelihoods of ‘We know’ and ‘There has to be’ approach necessity.
      ========================

      00

  • #
    DT

    Today Clive Palmer hosted Al Gore at Parliament House Canberra to announce that the PUPs would support the repeal of the carbon tax in return for an emissions trading scheme. A few weeks ago Palmer was caught out dining with Malcolm Turnbull, who backed the Kevin Rudd emissions trading scheme. The plot thickens; the wealthy want to make money based on the flawed climate change agenda. Meanwhile the UN-EU emissions trading scheme is collapsing, so what are these deceivers plotting?

    Turnbull cannot be trusted, Palmer cannot be trusted, in my opinion.

    60

    • #
      DT

      Months ago Turnbull, Rudd and family members were caught dining together at a restaurant in China.

      50

  • #
    DT

    Palmer wants the Australian Government to build new RAN ships in Australia, despite high costs well exceeding other supplier countries. He is planning to build a replica of the Titanic in China. This guy has problems.

    20

  • #
    Stephen Richards

    Is the temp data the NOAA/NASA/UKMET adjusted data ?

    30

    • #
      David Evans

      Yes. We used the mainstream temperature datasets (HadCrut, GISSTemp, NCDC, UAH, RSS).

      We use their data to show that a solar solution is viable and possible.

      50

      • #
        Greg Goodman

        Hi David,

        you may like to look at this graph I posted above:

        http://climategrog.wordpress.com/?attachment_id=975

        joannenova.com.au/2014/06/big-news-part-vii-hindcasting-with-the-solar-model/#comment-1495144

        It achieves something similar to notch-delay without the problems of the non-causality of a notch filter, and without the need for a separate delay.

        This would probably fit quite closely with what you are doing and be a lot easier to justify physically.

        That was CET, which is not global, so I’ll put up some global SST data and see how it compares.

        00

        • #
          David Evans

          That’s the low pass filter by itself, right (time constant 5 years)?

          That’s what we originally started this project with. Went to find it in the frequency domain, but couldn’t. Eventually realized we were looking at a notch, and got the empirical transfer function in Post I.

          Note that there is a low pass filter at the heart of the model, in Post VI.

          David Stockwell got me interested in the climate as a LPF, because he was finding a lot of signs for it (e.g. “Key Evidence for the Accumulative Model of High Solar Influence on Global Temperature.”).

          10

          • #
            Greg Goodman

            Yes, just a tau=5y relaxation. It’s not a low-pass filter like a running mean; it’s a relaxation response, i.e. convolution with an exponential decay.

            Click on the link therein to go to the SST version, it’s shorter but follows a lot more closely.

            It’s a similar weighted-average calculation to most FIR filters. There’s a simple script to implement it on climategrog:

            http://climategrog.wordpress.com/category/scripts/

            ( IIRC this is 1/(1+sτ) in Laplace terminology, if you’re used to working in those terms. )

            Since it is a weighted integral it does have low-pass properties but due to the asymmetric kernel it also has a variable phase response and lag. Note the lag depends on frequency and is NOT a delay line.
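            For a first-order relaxation, H(f) = 1/(1 + i·2πfτ), that frequency-dependent lag can be written down exactly; a quick numerical sketch using the tau = 5 y figure (illustrative, not Greg’s actual script):

```python
import numpy as np

tau = 5.0                                   # years, the value discussed here
freqs = np.array([1 / 50, 1 / 22, 1 / 11])  # cycles per year
w = 2 * np.pi * freqs                       # angular frequency

# Phase of 1/(1 + i*w*tau) is -arctan(w*tau); dividing that phase by w
# converts it to an apparent lag, in years, at each frequency.
lag_years = np.arctan(w * tau) / w
```

            The 50-year component lags by about 4.5 y but the 11-year component by only about 2.2 y; a true delay line would lag every frequency by the same amount.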

            I’m not suggesting here that there is a 5y period that I’m trying to remove. I’m suggesting that the expected response of a single-reservoir model to a radiative forcing would be of this form. I very quickly tried a few values of tau and found 5y about right. Discussions here have cited published research pointing to similar values having been derived empirically.

            This does not sufficiently attenuate the 11y signal and is obviously too simplistic but it seems like a good physical starting point.

            My reading of the overall system is that there’s a strong -ve f/b in the tropics (a la Eschenbach) that strongly attenuates most surface radiative forcing, both solar and GHG. (Less so outside the tropics, but the tropics are the main energy input.)

            Shorter wavelengths penetrate deeper, bypassing this feedback, and are subject to a longer time constant.

            I think these two explain the relatively small 11y signal despite its dominance of SSN, and thus are in accord with your black-box result.

            There are many indications of a concurrent 9y variability that studies claiming 11y fail to isolate, and that studies totally refuting SSN because of phase drift equally fail to recognise.

            Too much simplistic analysis and hasty conclusions on both sides.

            I estimate solar and lunar influence to be comparable in magnitude, lunar even stronger at times depending on size of SSN peaks.

            Here it can be seen on the SSN/SST plot:
            http://climategrog.wordpress.com/?attachment_id=984

            00

      • #
        Greg Goodman

        Same thing for hadSST3.

        http://climategrog.wordpress.com/?attachment_id=977

        Generally a very good match to underlying trend.

        As I stated above, I think the Hadley post WWII adjustments are being applied about 10y too soon in relation to other physical evidence and this seems to support that conclusion.

        Although it looks a very nice fit, attention needs to be paid to the divergence at the end.

        The best thing to do is probably to crop it off and blend in some other data that fits [ref MBH 1998] 😉

        The ‘divergence problem’ may relate to the added net incoming SW that increased after El Chichon and more notably after Mt Pinatubo.

        http://climategrog.wordpress.com/?attachment_id=955

        00

      • #
        Roy Hogue

        Yes. We used the mainstream temperature datasets (HadCrut, GISSTemp, NCDC, UAH, RSS).

        We use their data to show that a solar solution is viable and possible.

        It will certainly use their own ammunition against them if the solar model stands up to criticism over time. And that is a good position to be in.

        20

  • #
    Nathan

    “Solar TSI appears to be a leading indicator for some other (probably solar) effect, that we are calling “force X” for now. It that factor, quantified by TSI, was fed into current climate models”

    Should that be ‘If that factor’ ?

    00

  • #
    vukcevic

    The required cooling from the tests is about 0.5°C at its peak in 1963, the year that the USA and the USSR agreed to discontinue atmospheric testing. (If the solar model is too sensitive because the warming of the land thermometer records is exaggerated, then less cooling is required.)

    The AMO (NA SST) appears to be the main contributor to (or the cause of) the cooling from the 1960s onwards.
    This is unlikely to have anything to do with the tests, since the Arctic atmospheric pressure (a precursor to the NA SST) fell sharply in the late 1930s, recovering in the 1970-80s, then repeated its sharp fall in the early 1990s. This would imply an imminent fall in NA SST, if history were to repeat itself.

    20

  • #
    vukcevic

    correction:
    the (inverted) Arctic Atmospheric pressure ( a precursor to the NA SST)

    10

  • #
  • #
    Roy Hogue

    It’s quantifiable, with a model that approximately hindcasts the observed temperatures. It is not just a concept with handwaving, or a rough one-off computation.

    It’s got physical interpretations for all the parts. This is a physical model, not just curve fitting or an unexplained correlation.

    These are, to me at least, the major important things so far and they have been lacking in climate science to date. But I’ll leave it open that someone knows more about the models and may dispute this point.

    It’s interesting that the solar TSI based model gets close or right on, with or without CO2 included. But there is more to go and I eagerly await the next chapter.

    20

  • #
    Greg Goodman

    “It’s interesting that the solar TSI based model gets close or right on, with or without CO2 included. ”

    Well it would be if it did, but it seems to need a huge fudge factor in the form of a previously undocumented “nuclear winter” and a physically unreal, non-causal notch filter.

    I think the venture is certainly worth pursuing, since the IPCC claims that natural-forcing-only models do not work are based on models tuned to an amplified CO2 forcing, from which the CO2 is subsequently removed. Then: voila, it does work!

    That it little short of dishonest. If they’d put the same effort in to tweaking their models (and data !!) without CO2 from the beginning they would equally be able to report that adding 3x amplified AGW did not work either.

    If I say dishonest, I’m being kind.

    However, I think the current model proposed here could be better achieved by a relaxation response applied to SSN (the basic response the IPCC uses for radiative forcing).

    Here it is shown with low-pass filtered SST:
    http://climategrog.wordpress.com/?attachment_id=981

    There is a divergence at the end, but accounting for this should be less problematic than the need for a nuclear winter.

    It is more easily justified physically and does not need yet another fix to provide the lag.

    (Neither does it count on abusive use of volcanoes.)

    10
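A minimal sketch of the relaxation response Greg describes — a first-order lag (unit-area exponential kernel) convolved with the forcing. The synthetic SSN series and the time constant below are illustrative assumptions, not fitted values:

```python
import numpy as np

def relaxation_response(forcing, tau, dt=1.0):
    # Convolve the forcing with a unit-area exponential relaxation kernel,
    # i.e. a first-order lag with time constant tau.
    t = np.arange(0, 7 * tau, dt)
    kernel = np.exp(-t / tau) * (dt / tau)
    return np.convolve(forcing, kernel)[: len(forcing)]

# Illustrative stand-in for SSN: a pure 11-year cycle (real SSN is not this smooth)
years = np.arange(1850, 2014)
ssn = 80 + 70 * np.sin(2 * np.pi * (years - 1850) / 11.0)

response = relaxation_response(ssn, tau=10.0)  # tau (years) is a free parameter here
```

The lag attenuates and phase-delays the 11-year cycle while passing slow variations almost unchanged, which is the low-pass character Greg refers to.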

    • #
      Roy Hogue

      Well it would be if it did, but it seems to need a huge fudge factor in the form of a previously undocumented “nuclear winter” and a physically unreal, non-causal notch filter.

      Greg,

      The “nuclear winter” idea isn’t an invention of David Evans, it’s been around a long time and believed by many to be real. What if it is? We’ve been told to believe much more ridiculous things by the climate change worriers.

      The notch filter is also certainly real, because good sound math can find it in the existing temperature data. Being able to find it, of course, doesn’t explain it, but the math behind Fourier analysis has been too well understood for too long to doubt the notch without some very good reason.

      21

      • #
        Greg Goodman

        “….and believed by many to be real.”

        So is CAGW, that is not an argument for it existing.

        The idea of nuclear winter following a nuclear holocaust has been around for a long time. However, suggesting that 0.5 deg C is actually present in the climate from a number of airborne tests is, IMO, both new and fanciful.

        No one is questioning the F.A.; the problem is what is done with it. I and several others have questioned dividing the spectra like that, since you need to sample the whole spectrum. An input which is mainly an 11 y spike will always give you a “notch” with this method.

        That is a function of the input, not the response of the ‘blackbox’.

        Unless I’ve missed it, this issue has not been replied to.

        11
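Greg's objection can be illustrated numerically: take an input whose spectrum is dominated by an 11-year line, and an output whose spectrum is flat and, by construction, unrelated to the input — the spectral ratio still shows a deep "notch" at 1/11 per year. All series below are synthetic, purely to show that the dip is a property of the input:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048                                   # synthetic years of annual data
t = np.arange(n)

# Input with a strong 11-year line (TSI-like); output flat and unrelated
tsi = rng.standard_normal(n) + 5 * np.sin(2 * np.pi * t / 11.0)
temp = rng.standard_normal(n)              # no filter connects tsi and temp

freq = np.fft.rfftfreq(n, 1.0)
ratio = np.abs(np.fft.rfft(temp)) / np.abs(np.fft.rfft(tsi))

i11 = np.argmin(np.abs(freq - 1 / 11.0))   # bin nearest the 11-year line
nearby = (freq > 1 / 20) & (freq < 1 / 5)
print(ratio[i11] / np.median(ratio[nearby]))   # << 1: an apparent "notch"
```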

        • #
          Roy Hogue

          So is CAGW, that is not an argument for it existing.

          Greg,

          You are correct. However, your argument all by itself doesn’t look like sufficient grounds to dismiss the nuclear effect either. If we drop the obviously pejorative term “nuclear winter”, which has been very much overused, and look at the evidence there is for what David incorporated into his model, then hopefully we can avoid condemning this TSI model until we see all of it. That’s my whole point in all the comments I’ve made: we haven’t seen all of it yet.

          I have no idea how trustworthy any of this is, either nuclear effect or old temperature records, especially since there’s more than enough reason to believe that temperature data sets have been doctored up. But I suspect problems with both. Yet here we are, looking for an explanation that does account for the warming we have good evidence for and for which CO2 is a totally unbelievable cause. Let’s see it all before being its critic.

          00

          • #
            Greg Goodman

            OH , I’d like to see it all. Especially maths rather than verbiage.

            I think the exercise that Dr Evans is doing is very worthwhile, otherwise I would not be discussing it.

            I have suggested a relaxation method may be more physically realistic and easily justified.

            But I don’t need to see the details to know that a fallout effect that large is fanciful. I’ve already looked for just such an effect myself and, to my surprise, could not find the slightest trace.

            From nada to something almost as big as the whole of 20th c. warming is, IMO, fanciful.

            Sorry, it’s a fudge factor. I think the authors know that already, and this should incite them to question the model and see if it needs to be done differently.

            There comes a stage, when you have had to bring in too many adjustments and other factors, that you need to ask whether the model is parsimonious.

            00

        • #
          David Evans

          Replied to here, but covered in Post II.

          There were two assumptions: 1. TSI controls temperature. 2. The (TSI in, temp out) system is linear and invariant. Under those conditions, it’s a notch filter at work. Recall the implications of sinusoids as eigenfunctions, from Post II.

          00

      • #

        the math behind Fourier analysis has been too well understood for too long to doubt the notch without some very good reason.

        This^^^^

        10

        • #
          Greg Goodman

          No one is questioning the F.A. the problem is what is done with it.

          This ^^^^

          Actually I was meaning a reply from the authors of the model. I don’t doubt your ability to miss the point and to repeat yourself.

          00

          • #

            You are then left with :

            1. Is the notch spurious or real?

            I have yet to see an argument I consider satisfactory for spurious.

            2. If it is real what is the cause?

            It doesn’t act like any integrator I’m familiar with.

            And just to prejudice the debate: I do EE http://spacetimepro.blogspot.com/

            I would expect all kinds of wiggles between 3 and 20 years (similar to the 3 to 7 year wiggle). A notch is not one of the wiggles I’d expect.

            Looking for the cause of the notch may prove useful. We may even find something unexpected – even if there is no notch (it is an artifact – which is low on my probability scale at this time).

            20

            • #
              Bernie Hutchins

              MSimon asks the two key questions:

              “ 1. Is the notch spurious or real?
              I have yet to see an argument I consider satisfactory for spurious.

              2. If it is real what is the cause?
              It doesn’t act like any integrator I’m familiar with. ”

              **************************************

              First please see my answer to Roy that is below. Also probably Greg will answer for himself but I will give my response now as I am going to be away tomorrow.

              ************************

              (1 – is the notch spurious or real?) The notch has to be considered “non-real” at the moment. (Future installments by David may change that.) As I stated to Roy, since it is non-causal, it is not real in that sense. I don’t know why David made it non-causal and believes he can then fix it with a delay. The delay, which does NOT even solve the causality problem, causes additional complications. Why not START with the much simpler causal filter with no delay needed? Well, he apparently THOUGHT (wrongly) that a notch had to be non-causal.

              But you did actually ask two questions at once (is it spurious or real). It is spurious (at present!) as well as being non-real (non-causal). It is inferred as the ratio of two Fourier transforms, T (temperature) to TSI (solar output). (Need I mention that we don’t know either very well?) Since TSI has a bump up at 1/11-years, David infers a notch between TSI -> T, since T is quite flat. But there would be an inferred notch for any relatively flat spectrum, relative to TSI. So – sorry – spurious until proven otherwise.

              (2. If it is real what is the cause?) Any answer here would be an immense help. The fact that the “notch filter” may not even exist makes speculation on its cause less urgent! Describing (or suggesting) a plausible cause FIRST would be a tremendous boost to suggesting its possible existence. Is the cause “Force X?” I don’t understand what Force-X is supposed to be and/or do, even vaguely, and so far the installments seem to attribute it to the Sun itself (?), or to something on the Earth(?) even biological (?). If you are lost – welcome, as a skeptic should be, to the club.

              Quite frankly, if David has anything, and I sincerely hope he does, it needs to be spectacular! Too many pieces. Too many promises. Summarize first – details later. Science is not a murder novel – you tell who “did it” right in the abstract. Sadly, at the moment at least, it looks like another “Just-So Story”.

              01
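On Bernie's point that a notch need not be non-causal: a standard second-order recursive (biquad) notch uses only present and past samples. This is a textbook sketch, not anything from David's model; the notch frequency and pole radius are illustrative:

```python
import numpy as np

def causal_notch(x, f0, fs=1.0, r=0.95):
    # Causal second-order (biquad) notch at frequency f0:
    # y[n] depends only on x[n], x[n-1], x[n-2], y[n-1], y[n-2].
    w0 = 2.0 * np.pi * f0 / fs
    b = [1.0, -2.0 * np.cos(w0), 1.0]        # zeros ON the unit circle at +/- w0
    a = [1.0, -2.0 * r * np.cos(w0), r * r]  # poles just inside, radius r
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

t = np.arange(600)                           # years, annual sampling
y11 = causal_notch(np.sin(2 * np.pi * t / 11), f0=1 / 11)  # at the notch
y30 = causal_notch(np.sin(2 * np.pi * t / 30), f0=1 / 11)  # away from it
```

After the start-up transient dies away, the 11-year sinusoid is removed almost completely while the 30-year one passes with near-unity gain; no future samples were needed.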

      • #
        Bernie Hutchins

        Roy said in part –

        “The notch filter is also certainly real because good sound math can find it in the existing temperature data. Being able to find it of course, doesn’t explain it but the math behind Fourier analysis has been too well understood for too long to doubt the notch without some very good reason.”

        We need to pay attention to standard terminology or we risk misleading others.

        In signal processing, we should not use the term “filter” unless we have reason to believe (such as an obvious electrical network, mechanical linkages, etc.) that there is an input-to-output relationship in place, and we wish to describe this linkage. This would be the meaning of “real” – as an existential “reality”. Math alone is probably acceptable at this point. Then there is the issue of “realizability”: actually making the thing, or observing it working in Nature. This requires, among other things, causality: an arrow of time. It is perfectly proper to consider a non-causal filter to NOT be real.

        The ratio of the magnitudes of the Fourier transforms of two different signals is NOT automatically a filter. It may suggest that a “nuts-and-bolts” filter of some sort COULD be an explanation – especially if some plausible mechanism is presented – otherwise perhaps not so much. David uses the term “Transfer function” for this ratio, which is perhaps a misuse (it should be Laplace instead of Fourier), as “Transfer Function” suggests a real (existing) filter, or established path. If the actual filter were established, the ratio of the two spectra (output divided by input) would be considered the magnitude of a Transfer Function (generally called a “frequency response”).

        Details should not be ignored.

        21

  • #
    vukcevic

    Hi Greg
    Arctic spectrum shows a very strong 5 year component (ENSO or 2x QBO?)

    It appears when I posted my earlier comment I accidentally clicked on ‘notify me via email’, now my inbox is flooded by links to the article.
    Any idea how to stop it?

    00

    • #
      Greg Goodman

      Hehe, I know what you mean, I thought I was subscribing to follow one comment and got bombed too.

      Go to post a reply then find this at the bottom of the reply box:
      “You are subscribed to this entry. Manage your subscriptions. “

      00

    • #
      Greg Goodman

      Yes, QBO keeps popping up all over the system. If it is taken to be about 2.4 y, then 4 × 2.4 = 9.6, which is the period corresponding to the mean frequency of the 10.4 and 8.85 year components.

      I suspect most of the “quasi-periodic” phenomena are in fact quite periodic mixes of two or three periods.

      Since solar is far from a smooth cosine, it will have fairly strong harmonics.

      Without getting into tenuous interacting oscillators and non-linear effects, this sort of thing must be expected from just the most obvious linear superposition of various forcings.

      00
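Greg's harmonics point is easy to check: any periodic but non-sinusoidal 11-year signal carries power at 5.5 y, 3.7 y, and so on. Below, a rectified 22-year sine stands in for SSN — a crude assumption, chosen only because its Fourier series is known:

```python
import numpy as np

t = np.arange(1408)                        # years; 1408 = 128 * 11, so bins land exactly
ssn = np.abs(np.sin(2 * np.pi * t / 22))   # rectified 22 y "Hale cycle" -> 11 y period

freq = np.fft.rfftfreq(len(t), 1.0)
spec = np.abs(np.fft.rfft(ssn))

amp11 = spec[np.argmin(np.abs(freq - 1 / 11.0))]   # fundamental, 11 y
amp55 = spec[np.argmin(np.abs(freq - 1 / 5.5))]    # first harmonic, 5.5 y
print(amp55 / amp11)                               # about 0.2: a strong harmonic
```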

    • #
      Greg Goodman

      http://climategrog.wordpress.com/?attachment_id=283

      QBO could also arise from the 5.26 y period I found in trade wind data interacting with the annual cycle. That would make it purely solar in origin.

      00

  • #

    Greg, three points:

    i) I see some merit in your idea of a ‘relaxation response’ but am content to go with David unless he thinks your approach could be more accessible to the lay reader.

    ii) You have spotted one of my favoured features of the climate system, namely the way the entire global air circulation reconfigures as necessary to maintain the thermal stability of the system. The QBO and the trade winds amongst other climate phenomena respond directly to solar induced changes in the gradient of tropopause height between equator and poles.

    iii) Don’t worry about David’s reference to the nuclear winter aspect. Just substitute the negative phase of the PDO plus weaker cycle 20 and one doesn’t need it.

    David’s approach is focusing lots of minds on what really matters.

    I share your view of the IPCC claims.

    20

    • #
      ianl8888

      Just substitute the negative phase of the PDO plus weaker cycle 20 and one doesn’t need it

      When Evans releases the XLSX spreadsheet, I would be interested if you could do this

      00

    • #
      Ragnaar

      Just substitute the negative phase of the PDO plus weaker cycle 20 and one doesn’t need it.

      Agreed. In the future, some way to bring in the PDO changes would be nice. It will also be interesting when this model is combined with a climate regimes approach.

      10

  • #
    Greg Goodman

    Well, I’m not sure “lay reader” is a valid means of choosing a model but if you want to look at it that way, warming a pot of water and watching it cool is fairly accessible.

    Notch filters, phase shifts and non-causal responses less so.

    The overall aim is very worthwhile, but I think the graphs I’ve produced show that you can get a lot nearer to the surface record, a lot more easily with a much simpler and physically meaningful model.

    Uncle Occam would like that.

    There are too many things as it stands that just look like an attempt to force a square peg into a round hole.

    I think I’ve provided a way around those problems.

    Hopefully Dr Evans will find it useful. Providing a non-GHG model, even if not a perfect fit, will be a good counter to the false claims of the IPCC that only exaggerated AGW fits the data.

    00

  • #

    I’m still worried about a number of issues as follows.

    (1) The lack of evidence that the OFT upon which the solar model rests has been comprehensively tested against/with (or incorporates) the ensemble of all peer reviewed modern papers regarding the comparison between TSI and global mean surface temperature (from at least 1800 to 2013). I particularly mean at the very least the following:

    Stott et al. (2003) Do models underestimate the solar contribution to recent climate change? J. Clim. 16, 4079-4093

    Benestad and Schmidt. (2009) Solar trends and global warming. J. Geophys. Res. 114

    Shapiro et al. (2011) A new approach to the long-term reconstruction of the solar irradiance leads to large historical solar forcing. Astron. Astrophys. 529, A67

    Steinhilber et al. (2009) Total solar irradiance during the Holocene. Geophys. Res. Lett. 36, L19704

    Wang et al. (2005) Modeling the Sun’s magnetic field and irradiance since 1713. Astrophys. J. 625, 522–538

    (2) I don’t understand why the ‘window’ used was 1850 to 1978. I don’t agree with the assertion (which was possibly not really relevant) that mean global surface temperatures can be reliably inferred from proxies back to 1613. I would put that limit somewhere between 1700 and 1800, i.e. the period in which the expansion of naval and ship-borne use of thermometers (as opposed to land-based ones) really took off. What good are proxies unless they are first compared with calibrating data? Note I have published extensively in isotope geochronology.

    (3) The lack of evidence that the inferred TSI record (from 1700) published by Svalgaard has been taken into account. There are very good reasons why this is critical – (a) because the most recent cooling period which may well be a good analogy of where we are at right now (‘proxy’) is the Dalton Minimum, (b) because there are significant issues of doubt about past sun spot counting (as Svalgaard has identified) and (c) because Svalgaard’s TSI reconstruction is a lot more uniform than all the others.

    10

  • #
    Greg Goodman

    “(c) because Svalgaard’s TSI reconstruction is a lot more uniform than all the others.”

    Svalgaard’s TSI is based simply on SSN. Most of the others, like Lean et al, for some odd reason add in an 11-year running mean of SSN underneath the actual SSN.

    Not only is running mean an awful filter, this just seems like double counting to me. It looks like a crude attempt to coerce the TSI data into resembling the surface record.

    Since the relaxation model I used has the basic low-pass quality of the integration, it ends up having a similar profile, without the seemingly spurious double counting of the Lean type TSI reconstructions.

    I really don’t see the justification for manipulating TSI in that way. I’m not aware of any reason to add anything to SSN when reconstructing TSI. This seems to be Svalgaard’s basic line.

    10
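On why a running mean is an "awful filter": its frequency response is the Dirichlet kernel, whose slowly decaying side lobes go negative, so some periodicities pass through with inverted sign. A quick check for an 11-point (11-year, annual data) running mean:

```python
import numpy as np

N = 11                              # 11-point running mean on annual data
# Frequency response of an N-point moving average (Dirichlet kernel),
# evaluated at the centre of its first side lobe, f = 1.5/N (period ~7.3 y)
f_lobe = 1.5 / N
H_lobe = np.sin(np.pi * f_lobe * N) / (N * np.sin(np.pi * f_lobe))
print(H_lobe)   # about -0.22: 22% of that component passes with INVERTED sign
```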

    • #

      Yes Greg, I am starting to shudder every time I read text references like:

      ‘…. the Lean 2000 TSI reconstruction back to 1610….’

      This is what Prof. Leif Svalgaard now says about Lean et al. 2000 and also Wang et al. 2005:

      ‘In the past 5 years the ‘background’ has slowly disappeared on the radar screen. Even Judith Lean doubts her early work [she was also a co-author of Wang et al 2005]. Slide 15 of http://www.leif.org/research/Does%20The%20Sun%20Vary%20Enough.pdf shows one of Lean’s slide from the SORCE 2008 presentation. Note that she says “longer-term variations not yet detectable – … do they occur? ”

      What has happened is that the Sun has had a very deep minimum comparable to those at the beginning of the 20th century. We would therefore expect that TSI now should also be comparable to TSI around 1900. Reconstructions such as Lean 2000, Wang 2005, and others, that show that TSI in 1900 was significantly lower than today are therefore likely in error.’

      and Svalgaard’s 5/27/10 paper has the following conclusion:

      • Variation in Solar Output is a Factor of Ten too Small to Account for The Little Ice Age,

      • Unless the Climate is Extraordinarily Sensitive to Very Small Changes,

      • But Then the Phase (‘Line-Up of Wiggles’) is Not Right

      • Way Out: Sensitivity and Phases Vary Semi-Randomly on All Time Scales.

      This is what Jeff Glassman says in response:

      Svalgaard is quite right to belittle correlation by the “Line-up of Wiggles”. He could throw in visual comparisons of charts, like Lean’s beautiful map diagrams (Charts 14, 20), or of co-plots of traces (Charts 24, 27). The human eye is easily deceived. Besides, correlation is a mathematical operation leading to a lag-dependent number, hence a function. Correlation needs to be quantified, and neither Svalgaard nor Lean in these references computed the correlation between global average surface temperature and TSI. That is done in my SGW (and in David’s new .

      The key point here is Svalgaard’s second bullet: “Unless the Climate is … Sensitive to … Small Changes”.

      BUT empirical evidence suggests it is

      … dynamic, rather than (or as well as) thermodynamic (Glassman really means non-equilibrium thermodynamic – see below)

      … engages existing circulation patterns (Hadley, Ferrel and Walker cells) and atmosphere–ocean interactions (ENSO)

      … involves both direct (surface heating) and indirect (stratospheric influence) components.

      The first order effects are two. First, TSI is reduced by its reflection from reactive clouds, hence a powerful positive feedback to solar variations. Second is its absorption, transport, and release by the ocean in its surface layer and through the conveyor belt, made significant by the relative heat capacity of the ocean compared to the atmosphere or land surfaces. The hypothesis is that these effects are what make Earth especially sensitive to TSI variations, and shape the total response of Earth to certain waveforms present in TSI. My (Glassman’s) model satisfies Svalgaard’s criterion, despite Svalgaard’s belief that IPCC’s data are in some sense obsolete. It provides additional processes, specifically albedo and ocean absorption and circulation, for e.g. Lean to add as examples of empirical evidence. As shown using proper correlation techniques, Earth’s climate is twice as sensitive to the solar wind as it is to ENSO.’

      The irony here of course is that Dr Jeff Glassman’s solar model preceded David’s efforts by a good 4 years but received no attention, and is not subsequently acknowledged, simply BECAUSE it included a fairly comprehensive and scientifically rigorous exploration of the 1st order effects.

      Another irony is that all this stuff is also being conducted in the absence of the realization that the global climate system contains significant elements governed by non-equilibrium thermodynamics and that, moreover, there is a whole community of scientists who have been studying such effects from the viewpoint of the Maximum Entropy Production (MEP) principle – a principle which has already been used elegantly and rigorously to explain some of what we observe on Earth, on the other planets, and on some moons.

      10
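Glassman's remark that correlation is "a lag-dependent number, hence a function" is the cross-correlation function, and quantifying it takes only a few lines. The series below are synthetic, with an 11-step lag imposed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 500, 11
x = rng.standard_normal(n)
y = np.roll(x, true_lag) + 0.5 * rng.standard_normal(n)  # lagged, noisy copy of x

def lagged_corr(x, y, max_lag):
    # Pearson correlation of x[t] against y[t + lag] for each lag
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for L in lags:
        if L >= 0:
            r.append(np.corrcoef(x[:n - L], y[L:])[0, 1])
        else:
            r.append(np.corrcoef(x[-L:], y[:n + L])[0, 1])
    return lags, np.array(r)

lags, r = lagged_corr(x, y, 20)
print(lags[np.argmax(r)])   # 11: the imposed lag is recovered, and quantified
```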

  • #

    An issue that hasn’t been dealt with adequately.

    Right or wrong this model predicts cooling and is making a splash.

    It is good that people are being confronted with something other than HOT is the ONLY danger.

    20

  • #
    Richard C (NZ)

    ‘Climate reveals periodic nature, thus no influence by CO2’

    Prof. H. Luedecke and C.O. Weiss

    We reported recently about our spectral analysis work of European temperatures [1] which shows that during the last centuries all climate changes were caused by periodic (i.e. natural) processes. Non-periodic processes like a warming through the monotonic increase of CO2 in the atmosphere could cause at most 0.1° to 0.2° warming for a doubling of the CO2 content, as it is expected for 2100.

    Fig. 1 (Fig. 6 of [1] ) shows the measured temperatures (blue) and the temperatures reconstructed using the 6 strongest frequency components (red) of the Fourier spectrum, indicating that the temperature history is determined by periodic processes only.

    One sees from Fig. 1 that two cycles of periods 200+ years and ~65 years dominate the climate changes, the 200+ year cycle causing the largest part of the temperature increase since 1870.

    The ~65 year cycle is the well-known, much studied, and well understood “Atlantic/Pacific oscillation” (AMO/PDO). It can be traced back for 1400 years. The AMO/PDO has no external forcing; it is “intrinsic dynamics”, an “oscillator”.

    Although the spectral analysis of the historical instrumental temperature measurements [1] show a strong 200+ year period, it cannot be inferred with certainty from these measurements, since only 240 years of measurement data are available. However, the temperatures obtained from the Spannagel stalagmite show this periodicity as the strongest climate variation by far since about 1100 AD.

    The existence of this 200+ year periodicity has nonetheless been doubted. Even though temperatures from the Spannagel stalagmite agree well with temperatures derived from North Atlantic sedimentation; and even though the solar “de Vries cycle”, which has this period length, is known for a long time as an essential factor determining the global climate.

    A perfect confirmation for the existence and the dominant influence of the 200+ year cycle as found by us [1] is provided by a recent paper [2] which analyses solar activities for periodic processes.

    The spectrum Fig. 2 (Fig. 1d of [2]) shows clearly a 208-year period as the strongest variation of the solar activity. Fig. 3 (Fig. 4 of [2]) gives us the solar activity of the past until today as well as the prediction for the coming 500 years. This prediction is possible due to the multi-periodic character of the activity.

    The solar activity agrees well with the terrestrial climate. It clearly shows in particular all historic temperature minima. Thus the future temperatures can be predicted from the activities – as far as they are determined by the sun (the AMO/PDO is not determined by the sun).

    The 200+ year period found here [2], as it is found by us [1] is presently at its maximum. Through its influence the temperature will decrease until 2100 to a value like the one of the last “Little Ice Age” 1870.

    The wavelet analysis of the solar activity Fig. 4 (Fig. 1b of [2]) has interesting detail. In spite of its limited resolution it shows (as our analysis of the Spannagel stalagmite did) that the 200+ year period set in about 1000 years ago. This cycle appears, according to Fig. 4, regularly every 2500 years. (The causes for this 2500 year period are probably not understood.)

    [1] Multi-periodic climate dynamics: spectral analysis of long-term instrumental and proxy temperature records. H.Luedecke, A. Hempelmann, C.O.Weiss; Clim. Past. 9 (2013) p 447

    http://www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

    [2] Prediction of solar activity for the next 500 years. F.Steinhilber, J.Beer; Journ. Geophys. Res.: Space Physics 118 (2013) p 1861

    http://www.eawag.ch/forschung/surf/publikationen/2013/2013_steinhilber.pdf

    ‘Claim: Solar, AMO, & PDO cycles combined reproduce the global climate of the past’

    Guest essay by H. Luedecke and C.O.Weiss

    http://wattsupwiththat.com/2013/12/17/solar-amo-pdo-cycles-combined-reproduce-the-global-climate-of-the-past/

    00

    • #
      Richard C (NZ)

      ‘Climate reveals periodic nature, thus no influence by CO2’

      Prof. H. Luedecke and C.O. Weiss

      http://notrickszone.com/2013/12/03/german-scientists-show-climate-driven-by-natural-cycles-global-temperature-to-drop-to-1870-levels-by-2100/

      00

      • #
        Greg Goodman

        Sadly that paper makes one of the most basic errors in Fourier decomposition.

        They derive a DFT Fourier model of a data segment then try to “project” future variations by extrapolation of the Fourier model.

        All that will do is reproduce the same data segment translated forward in time: a rather complicated way to do end-to-end cut-and-paste.

        The dramatic drop they produce is simply the difference between the end and the (repeated) beginning of the series.

        This gets rounded off a bit, since they only use about six Fourier terms, IIRC.

        Most of the fourier terms they retain are more to do with cropping of the data ( the time window available ) than the frequency content of the data itself.

        It seems like sloppy peer review is not just for warmists. 😉

        40
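Greg's point is demonstrable in a few lines: a truncated-DFT model is periodic in the window length n, so evaluating it beyond the window just replays the start of the window. The data are synthetic, and keeping 13 coefficients (DC plus roughly six conjugate pairs) stands in for the paper's six-component fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 240                                  # a 240-year data segment
x = np.cumsum(rng.standard_normal(n))    # synthetic series with drift

X = np.fft.fft(x)
top = np.argsort(np.abs(X))[::-1][:13]   # keep only the largest coefficients
Xk = np.zeros_like(X)
Xk[top] = X[top]

def fourier_model(t):
    # Evaluate the truncated Fourier model at (possibly out-of-window) times t
    k = np.arange(n)
    return (Xk[None, :] * np.exp(2j * np.pi * np.outer(t, k) / n)).sum(axis=1).real / n

# "Projecting" 20 years past the window replays the start of the window exactly
future = fourier_model(np.arange(n, n + 20))
replay = fourier_model(np.arange(0, 20))
print(np.allclose(future, replay))       # True: end-to-end cut-and-paste
```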

        • #
          Greg Goodman

          They should have detrended, applied a taper function, and then done the DFT. That would have given some information about periodicities up to maybe 30–35 years reasonably accurately, which probably would have been interesting and might have allowed some speculation about the next decade or two.

          They have a peak around 34 which other data also indicate.

          That also requires some padding of the window to get around the quantisation of the DFT frequencies, which are harmonics of the length of the data.

          They also used “homogenised” HISTALP temperatures, which are more adjustment bias than real data.

          The raw data are a carefully guarded secret but can be seen to have very little long term rise until they are “corrected”.

          00
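A sketch of the recipe Greg describes — detrend, taper, zero-pad, then DFT — applied to a synthetic 240-year series with a 34-year cycle plus a linear trend (all values illustrative):

```python
import numpy as np

def spectrum(x, dt=1.0, pad_factor=8):
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)   # remove the linear trend
    x = x * np.hanning(len(x))                   # taper to reduce leakage
    nfft = pad_factor * len(x)                   # padding refines the frequency grid
    return np.fft.rfftfreq(nfft, dt), np.abs(np.fft.rfft(x, nfft))

t = np.arange(240)
x = np.sin(2 * np.pi * t / 34) + 0.01 * t        # 34-year cycle plus spurious trend
freq, spec = spectrum(x)
peak = np.argmax(spec[1:]) + 1                   # skip the DC bin
print(1 / freq[peak])                            # close to 34 years
```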

          • #
            Richard C (NZ)

            Greg #70.1.1.1

            >”That would have given some information about periodicities upto may 30-35 years reasonably accurately, which probably would have been interesting….”

            More than interesting. They may have detected 11 yr periodicity. I was disappointed that Figure 3 M6 and SPA ended at 0.04 because there seemed to be enough sensitivity.

            Figure 5 does show SPA periodicity from about 6 yrs though but nothing I can see at 11 except maybe around 1700 AD.

            I’m convinced that David’s search for 11 yr periodicity has not been exhaustive and that he’s been looking in the wrong places. I’m sure more analysis of localized data such as M6 will identify an 11 yr signal eventually (but already found by C&T04 and Z&T).

            00

            • #
              Richard C (NZ)

              Re #70.1.1.1.1

              >”David’s search for 11 yr periodicity has not been exhaustive and that he’s been looking in the wrong places. I’m sure more analysis of localized data such as M6 will identify an 11 yr signal eventually”

              Sure enough:

              ‘Periodicity analysis of NDVI time series and its relationship with climatic factors in the Heihe River Basin in China’

              Huibang Han, Mingguo Ma, Ping Yan, Yi Song (2011)

              Just search the Web for PDF or:

              http://www.researchgate.net/publication/252234442_Periodicity_analysis_of_NDVI_time_series_and_its_relationship_with_climatic_factors_in_the_Heihe_River_Basin_in_China

              3.2 Periodicity analysis of air temperature time series

              The air temperature time series data sets of each pixel of 9 meteorological stations in the Heihe River Basin are analyzed by the EMD. Table 3 shows the periodicity of air temperature from 1982 to 2009.

              Station       IMF     Period (a)
              Yeniugou      IMF3    10-11
              Qilian        IMF2    10-11
              Tuole         IMF3    10-11
              Shandan       –       –
              Zhangye       IMF3    10-11
              Gaotai        –       –
              Jiuquan       IMF2    10-11
              Dingxin       IMF3    10-11
              Ejin Banner   IMF2    10-11

              4 DISCUSSION AND CONCLUSIONS

              It is indicated that the EMD method can be effectively used to analyze the periodic variation of the time series NDVI data. All the time series of SINDVI, air temperature and precipitation have periodic variation from 1982 to 2009 in the Heihe River Basin. The temperature and precipitation are significant driving factor affecting the vegetation cover changes. Furthermore, the periodicity of temperature and precipitation may be affected by air-sea interaction and sunspot activity. The period of 2-3 years is the most elementary cycle of the meteorological element in the world. Period of 5-6, 10-11 and 15-16 years may be concerned with the laws of motions of heaven bodies and the medium-wave cycle of macula, they are all caused by solar activity [29, 30].

              # # #

              7 of 9 stations exhibit 10-11 year periodicity in this set of local data.

              00

              • #
                steven mosher

                Sadly they use GIMMS NDVI.

                00

              • #
                Richard C (NZ)

                >”they use GIMMS NDVI”

                And meteorological data:

                ABSTRACT
                Based on the protensive GIMMS NDVI data set and meteorological data during 1982-2009 in the Heihe River Basin

                00

        • #
          Richard C (NZ)

          Greg #70.1.1

          >”They derive a DFT Fourier model of a data segment then try to “project” future variations by extrapolation of the Fourier model.”

          Not quite. The projection is essentially the 64-yr component, according to the prediction section on page 5 and the Fig 5 caption.

          They haven’t got enough of the 250 yr component to extend. The projection would look different if they could.

          00

          • #
            Greg Goodman

            Unless you have a different copy of that paper, you seem to be referring to fig 6 and its caption, not fig 5.

            Fig 6 is their RM6 Fourier model. Looking at their amplitude coefficients in the table, I don’t see any justification for their comment that this is “mainly due to ~65”. The biggest by far is the 254 y component, and most of the rest are about equal.

            It is clear from fig 6 that it is doing nothing more than reproducing the beginning of the series as their ‘projection’.

            If they continued it, it would faithfully reproduce the dip, and the following peak of 1800 would be at 2050.

            This has no worth at all as a predictive model. You could use a decent 30 y low-pass filter (not a bloody running mean), shift the data forwards by 250 years, and the result would be near identical.

            That would be laughable as a projection of future temps but that’s what they’ve done, in a rather fancy way that seems to have fooled themselves more than anyone.

            With the sea of garbage that is now polluting the peer reviewed literature this probably does not matter in itself but shows that things are not getting any better.

            Perhaps the one ray of hope is that garbage is now getting published on all sides of the debate. Ten years ago that was not happening.
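The point about the “projection” can be sketched numerically: a truncated Fourier (DFT) model fitted to a window of length N is N-periodic by construction, so anything it shows past the window simply replays the start of the record. A minimal illustration in Python (numpy assumed; the synthetic series and the 6-component cut are illustrative, not the paper’s data):

```python
import numpy as np

# Synthetic "temperature" series over a window of N years.
N = 256
t = np.arange(N)
x = np.sin(2 * np.pi * t / 100.0) + 0.5 * np.sin(2 * np.pi * t / 37.0)

# Fit a 6-component Fourier model (analogous to an "RM6" model)
# by keeping the 6 largest-amplitude bins of the DFT.
X = np.fft.rfft(x)
keep = np.argsort(np.abs(X))[::-1][:6]

def fourier_model(tt):
    """Evaluate the truncated Fourier model at (possibly future) times tt."""
    out = np.zeros(len(tt))
    for k in keep:
        amp = np.abs(X[k]) / N
        if k != 0:
            amp *= 2.0  # one-sided spectrum: double the non-DC bins
        out = out + amp * np.cos(2 * np.pi * k * tt / N + np.angle(X[k]))
    return out

# The "projection" one window-length ahead is identical to the
# reconstruction of the start of the record: the model is N-periodic.
projection = fourier_model(t + N)
replay = fourier_model(t)
```

Whatever the data did in its first years, the model will repeat verbatim beyond the fitted window; no new information about the future enters at any point.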

            • #
              Richard C (NZ)

              Yes Figure 6 sorry.

              >”shift the data forwards by 250 years”

              It’s not a 250 yr prediction (i.e. they are not making one, you are trying to turn it into something it isn’t). It’s only about 20 yrs of 65-yr component projected with next to no 250 yr component. Same problem around 1760 as at 2000 going back in time.

              >”If they continued it would faithfully reproduce the dip and the following peak of 1800 would be at 2050″

              But they don’t. And they don’t suggest doing so either (you do). They look instead for other cues, including solar (see end of Discussion).

              The solar paper they look to (outside the paper) is:

              ‘Prediction of solar activity for the next 500 years’

              Steinhilber and Beer (2013)

              http://www.eawag.ch/forschung/surf/publikationen/2013/2013_steinhilber.pdf

              So now a 500 yr prediction develops past 2000:

              “Figure 4. Prediction of solar activity (Φ on the left y axis and total solar irradiance (TSI) on the right y axis) for the next 500 years using the same parameters as for the tests with data of the past.”

              Lagged for temperature 14 yrs say, the trend is down from about now (2014) but with the 65 yr oscillation overlaid on that. I’m inclined to think there is a lag for ocean heat too, so that the temperature fall is not as abrupt or as soon.

              In other words, not simply a repetition of Figure 6 for 2.5 times, but close, except there are 2 path options (dark and bright grey bands) and other factors to consider.

              The important thing is that the 1960 to 2000 deVries peak is a standout.

              • #
                Greg Goodman

                It’s not a 250 yr prediction (i.e. they are not making one, you are trying to turn it into something it isn’t).

                I’m not “turning” it into anything I’m stating what they are doing.

                I don’t see why arbitrarily appending a copy of 1750 onto the end of the available data segment is supposed to have any predictive ability whatsoever. It matters not whether they paste 20, 100 or 250 years; it makes no sense.

                It’s only about 20 yrs of 65-yr component projected with next to no 250 yr component.

                That’s incorrect: it is the full RM6 Fourier model and it will faithfully reproduce the beginning of the sample with all sub-32-year variability removed.

        • #
          • #
            the Griss

            Tamino???… roflmao !!!!

            Next you’ll be linking to the stoat or SkS !!

            Barrel… bottom… !!!

            • #
              Greg Goodman

              Grant “Tamino” Foster’s blog is pretty worthless because he starts laying into people, deletes replies, slams the door and hides behind his control of the blog. Open Mind, not.

              http://climategrog.wordpress.com/2013/03/11/open-mind-or-cowardly-bigot/

              However, I agree with his Ludeckerous post.

              I had some personal communication with the author pointing out some of these flaws; after some polite discussion where he tried to stick to his guns, he decided not to discuss it any more.

              I guess by that stage the paper was published and he did not want to accept its flaws with an unknown correspondent, since this could possibly lead to calls to withdraw it.

              One does wonder who reviewed this and what experience they had of this kind of work.

            • #
              PhilJourdan

              He does constantly. He whines about “hostility” while butchering posts all over Wikipedia. He is a work of something, but I would not want it repeated.

        • #
          David Evans

          Yep.

  • #
    Mikky

    David,

    Projections I’ve seen elsewhere on the web of your model look odd, a very sharp drop in temperature.

    To me that drop looks like it might just be because your filter (which you chose to run backwards in time),
    is “falling-off” the end of the TSI data.

    • #
      Greg Goodman

      Oh dear, I hope he has not run up to the end of the data with a half-empty buffer.

      Sort of thing those who use spreadsheets for d.p. tend to do.

    • #
      Greg Goodman

      Not sure what “chose to run backwards in time” refers to, but you need to reverse the kernel if it’s asymmetric, which it will be in this case. This is not obvious since it is not necessary for symmetric kernels like low-pass filters.

      I recall David Evans making some comment about it needing to “spin up”, which seems odd if it is done by convolution. That may indicate he is running with an incomplete buffer at the beginning and end. That would be a little surprising since he seems fairly familiar with all this.
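The kernel-reversal point is easy to demonstrate: convolution flips the kernel internally, so applying an asymmetric kernel as a plain sliding dot-product (correlation) gives a different, time-reversed result unless you reverse it yourself first. For a symmetric kernel the difference vanishes, which is why it is easy to overlook. A small numpy sketch (all numbers illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 3.0, 1.0])

# An asymmetric kernel: heavier weight on one side, as a causal
# filter's impulse response would be.
h = np.array([0.5, 0.3, 0.2])

# True convolution flips the kernel internally...
conv = np.convolve(x, h, mode="full")

# ...so a sliding dot-product (correlation) only matches if we
# reverse the kernel ourselves first.
corr_flipped = np.correlate(x, h[::-1], mode="full")
corr_unflipped = np.correlate(x, h, mode="full")

# For a symmetric kernel the distinction disappears, which is why it
# is invisible with ordinary symmetric low-pass kernels.
h_sym = np.array([0.25, 0.5, 0.25])
conv_sym = np.convolve(x, h_sym, mode="full")
corr_sym = np.correlate(x, h_sym, mode="full")
```

The "full" outputs also show the half-empty-buffer issue: the first and last len(h)-1 samples are computed from an incomplete overlap and should be discarded, not reported.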

      • #
        Mikky

        Running the filter backwards in time makes sense for going as far back in history as possible,
        but for best possible predictions of the future you have to filter forwards in time.

        I think David is too hung up on the “notch” being physics,
        life would be much simpler if it were regarded just as a way of removing the 11-year oscillations,
        i.e. smoothing-out the “rapid” fluctuations in TSI.

        • #
          Greg Goodman

          Running anything “backwards in time” in a physical system that has to be governed by causal relationships seems nonsensical to me.

          If the filter itself is non-causal I suppose it’s no more (or less) valid than running it forwards. But I’m not sure that makes the situation any better.

        • #
          Greg Goodman

          http://climategrog.wordpress.com/?attachment_id=981

          The relaxation model with a single time constant represents a trivial single-slab ocean model. This is obviously a naive simplification but is already a good start.

          Looking at the graph suggests the system is less sensitive to faster changes of the 11y cycle.

          My guess is that this comes from strong -ve feedbacks to surface warming in the tropics (where most of the heat input to the system is) and a deeper penetration of UV to layers providing a longer time constant (larger thermal mass) that are not attenuated by the surface feedbacks.

          L Svalgaard made the observation over at WUWT that solar activity increased about 300y ago and the earth has been warming since.

          This simple model, even with a relatively short tau of 5 y, provides such long-term warming directly from SSN-based TSI.
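The single-slab relaxation model described above can be sketched directly (all values illustrative, not fitted): a first-order lag dT/dt = (F - T)/tau attenuates a forcing cycle of period P by a factor 1/sqrt(1 + (2*pi*tau/P)^2). With tau = 5 y the 11-year cycle comes through at roughly a third of its amplitude, while a slow multi-century variation passes nearly unattenuated, which is the sense in which even a short tau lets SSN-based TSI produce long-term warming.

```python
import numpy as np

tau = 5.0   # relaxation time constant, years (illustrative)
dt = 0.01   # integration step, years
t = np.arange(0.0, 400.0, dt)

def relax(forcing):
    """Integrate dT/dt = (F - T) / tau with forward Euler."""
    T = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1]) / tau
    return T

def gain(period):
    """Steady-state amplitude ratio of the first-order lag at this period."""
    return 1.0 / np.hypot(1.0, 2.0 * np.pi * tau / period)

# An 11-year "solar cycle" forcing vs a slow 300-year variation,
# both of unit amplitude.
fast = np.sin(2 * np.pi * t / 11.0)
slow = np.sin(2 * np.pi * t / 300.0)

# Measure response amplitudes after the start-up transient dies away.
late = t > 100.0
fast_amp = np.ptp(relax(fast)[late]) / 2.0  # ~0.33 of input amplitude
slow_amp = np.ptp(relax(slow)[late]) / 2.0  # ~0.99 of input amplitude
```

So a single time constant already acts as a low-pass: it is much less sensitive to the fast 11 y cycle than to slow changes, as the graph suggests.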

          • #

            Greg said:

            “Looking at the graph suggests the system is less sensitive to faster changes of the 11y cycle.
            My guess is that this comes from strong -ve feedbacks to surface warming in the tropics ( where most of the heat input to the system is ) and a deeper penetration of UV to layers providing longer time constant ( larger thermal mass ) that are not attenuated by the surface feedbacks. ”

            If cloudiness decreases then bear in mind that most solar input at many wavelengths gets past the evaporating layer to varying depths so it doesn’t have to be just UV.

            The strong negative feedback would be ocean absorption until the additional energy retained circulates around the ocean basins before returning to the air. The oceans smear the thermal response over that period of 3 to 15 years that David mentioned.

            • #
              Greg Goodman

              Yes, I was simplifying into IR and UV to make the point but you are correct, this applies progressively across the spectrum.

              The strong negative feedback would be ocean absorption until the additional energy retained circulates around the ocean basins before returning to the air.

              No, the main -ve f/b is probably nearly instantaneous: hours or days at most.

              Evaporation, convection and tropical storms, which grow very rapidly due to internal +ve feedbacks, provide a very strong negative feedback directly to incident surface warming.

              This is something Eschenbach pointed out a couple of years back; though I’m sure the phenomenon was already known, it does not seem to be within the scope of GCMs.

              There will also be circulation in the main ocean gyres, which allows the stable tropics to have a buffering influence on extra-tropical zones.

              I did a series of cumulative integrals of degree.days that shows this rather well. Click through the links; there’s a series of four graphs: NH and SH, land and sea temps.

              • #

                I agree that changes in evaporation, convection and tropical storms do have a cooling effect as per Willis’s hypothesis (though the concept should be global rather than tropical).

                However, radiative loss to space from condensate, GHGs or particulates higher up is only part of the picture.

                Uplift involves conversion of kinetic energy to gravitational potential energy (GPE) with height which involves cooling. Energy in the form of GPE does not radiate. The higher the radiating molecule the colder it will be and the less it will radiate to space but the more GPE it will carry.

                GPE is then returned to kinetic energy on the descent which is what really keeps the surface warmer than S-B predicts.

                Thus the extra solar energy is not lost as fast as you suggest and it does circulate through all the ocean basins.

                Only a portion is lost to space via radiation from condensate, GHGs or particulates, the system recycles the rest repeatedly through the adiabatic convective cycle for 3 to 15 years until it eventually escapes.

              • #
                Greg Goodman

                Sorry , I forgot link:

                http://climategrog.wordpress.com/?attachment_id=312

                Yes, of course storms are not limited to the tropics (well, tropical storms are by definition), but there is a notable difference.

                Look at the link. Ex-tropics recover to their original temperature, i.e. despite a reduction in energy input for several years they do not end up cooler.

                Tropics are even more impressive; they even manage to recover the number of degree.days.

                This means that they not only restore their temperature but make up for the time they were cooler with an equal period of being warmer.

                I suspect the former is largely helped by the ocean gyres, importing cooler water in the east and exporting warmer water in the west.

                Tropics recover quickly and by this exchange help the ex-tropics to recover too.

              • #

                “Look at the link. Ex-tropics recover to their original temperature, i.e. despite a reduction in energy input for several years they do not end up cooler.
                Tropics are even more impressive, they even manage to recover the number of degree.days.
                This means that they not only restore their temperature but make up for the time they were cooler with an equal period of being warmer.
                I suspect the former is largely helped by the ocean gyres, importing cooler water in the east and exporting warmer water in the west.”

                That supports my point doesn’t it? The ocean basins just swap energy around.

                The system takes a long time to change from the basic equilibrium and volcanic effects are simply not long lasting or widespread enough. Except maybe for a supervolcanic event.

                Even a single solar cycle disappears into the noise and a change in the proportion of TSI getting into the oceans and making a difference to atmospheric temperature takes multiple solar cycles.

              • #
                Greg Goodman

                “That supports my point doesn’t it? ”

                Yes to the degree that tropics aid ex-tropics.

                It does not support your initial statement of what the strong feedback is:

                The strong negative feedback would be ocean absorption until the additional energy retained circulates around the ocean basins before returning to the air.

              • #

                Greg,

                If your point about a fast tropical response being enough to negate changes in the proportion of TSI reaching the oceans were correct then there would be more clouds not less and more energy immediately escaping to space so that the 11 year delay followed by warming of the atmosphere would not be observed.

                In reality, less clouds lead to more energy into the oceans, the tropical response is not enough to negate it, the energy retained then circulates around the global oceans which creates the observed delay.

              • #
                Greg Goodman

                “In reality, less clouds lead to more energy into the oceans, the tropical response is not enough to negate it”

                That is the tropical response. What are you trying to argue here? Is there some undeclared Svensmark effect you are assuming?

            • #
              Richard C (NZ)

              >”The oceans smear the thermal response over that period of 3 to 15 years that David mentioned.”

              Thermal response to what?

              Upper ocean heat (according to NODC that is) lags peak solar (1986 say) by about 28 years:

              http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content55-07.png

              The atmosphere is only just now “seeing” peak ocean heat on a globally averaged basis. On basin-by-basin this is predominantly Indian but there’s minimal decline in the Pacific yet.

              3 to 15 years oceanic lag (9 +/-6) would put the upper ocean heat peak around 1995. There’s no justification for that in the OHC metric.

              • #
                Richard C (NZ)

                >”Upper ocean heat (according to NODC that is)”

                Not discounting the very real possibility, in view of UKMO EN3, that upper ocean heat peaked around 11 years earlier than Josh “too cold” Willis at NODC estimates (i.e. 2003ish, confused by ARGO start).

                That would place the OHC inflexion to peak at about a 3 year lag behind the atmospheric inflexion around 2000.

                So instead of atmospheric temperature trending down from now in 2014 (14 year solar-atmosphere lag in overall peak terms), we will have to wait another 3 years to 2017 (2000 + 14 + 3) on the basis of UKMO EN3 OHC.

                Therefore the next 3 years will test David’s rationale.

              • #
                Richard C (NZ)

                Upper ocean heat: UKMO EN3 vs NODC

                http://bobtisdale.files.wordpress.com/2012/06/figure-2.png?w=640&h=416

                I should put the EN3 peak at 2004, not 2003 i.e 2000 + 14 + 4 = 2018.

  • #
    john robertson

    Very nicely done, I still need repeated reading to get the full thrust, but this is science.
    An idea laid out so all may understand and then put to the test.
    I look forward to the testing.

  • #
    Richard C (NZ)

    I’ve now presented 3 papers demonstrating 11 year periodicity in surface and tropospheric temperature, viz:

    ‘Eleven-year solar cycle signal throughout the lower atmosphere’

    K. Coughlin and K. K. Tung (2004)

    http://onlinelibrary.wiley.com/doi/10.1029/2004JD004873/full

    ‘Periodicity analysis of NDVI time series and its relationship with climatic factors in the Heihe River Basin in China’

    Huibang Han, Mingguo Ma, Ping Yan, Yi Song (2011)

    http://www.researchgate.net/publication/252234442_Periodicity_analysis_of_NDVI_time_series_and_its_relationship_with_climatic_factors_in_the_Heihe_River_Basin_in_China

    ‘Observed Tropospheric Temperature Response to 11-yr Solar Cycle and What It Reveals about Mechanisms’

    JIANSONG ZHOU AND KA-KIT TUNG (2012)

    http://depts.washington.edu/amath/old_website/research/articles/Tung/journals/Zhou_and_Tung_2013_solar.pdf

    All 3 have received a solid ignoring. How many more before tipping point?

    • #

      Cheer up Richard. You clearly make an honest effort to get across a wide cross section of peer-reviewed literature at the details level. What more can be asked? This is more than can be said even for most of the would-be crowd-sourcers – just check out their web sites. Like the warmista blogosphere e.g. Mann, Schmidt, so too does the sceptical blogosphere commonly exhibit crass ‘Club of Dome’ behaviour patterns (from the tediously long lived e.g. Miskolczi to the transient e.g. Salby). Wilful ignorance of good science, generation of outstanding ironies and marginalisation/ignoring of real expertise or simple brilliance e.g. Glassman, Montford is common. Just don’t let it drive you….. errr, wilde.

    • #

      Richard,

      Those papers seem to suggest a meridional shift in circulation patterns to a more zonal form in response to the 11 year solar signal, rather than a change in temperature.

      That accords with my New Climate Model.

      It also accords with David’s model in that the meridional shift occurs instead of a temperature signal and therefore forms part of the delay mechanism.

      “studies point to the existence of a 10- to 12-year oscillation associated with changes in the solar radiation. Because the potential signal associated with the 11-year solar cycle is likely small in amplitude and varies over a relatively long period of time compared with other climate signals with larger variance, it is difficult to detect and even more difficult to prove as being statistically significant”

      from here:

      http://onlinelibrary.wiley.com/doi/10.1029/2004JD004873/full

      So don’t err….. short-change other ideas, such as mine and David’s, that recognise the absence or near absence of a temperature signal in response to the 11 year cycle. It is the meridional shifting of the global atmosphere and the consequent delay that matters.

      Furthermore, the proper overall solution in my view is to combine the bottom up oceanic process mentioned in those papers with the top down ozone process proposed elsewhere.

      My contention is that climate change is a consequence of the interaction between the top-down and bottom-up mechanisms, with the ultimate outcome being a stable surface temperature at the expense of global air circulation changes, which we perceive as climate change.

      Our emissions of CO2 would be compensated for in exactly the same way but the circulation change would be indiscernible compared to that wrought by natural solar (top down) and ocean (bottom up) variability.

      I am offering a synthesis of all the competing theories which is why my New Climate Model differs from all others.

    • #
      Greg Goodman

      “All 3 have received a solid ignoring. How many more before tipping point?”

      This thread is to discuss David Evans’ model. If you want to discuss the presence or not of an 11 y cycle, there were several discussions on just that topic at WUWT this week.

      The bottom line is that most papers finding it cherry pick post-1950 data and ignore earlier period where it does not fit, or were just poorly done. No one came up with any paper showing convincing evidence of an 11y cycle.

      This model is based on the absence of that signal (hence the notch)

    • #
      Richard C (NZ)

      >”I’ve now presented 3 papers demonstrating 11 year periodicity in surface and tropospheric temperature”

      >”All 3 have received a solid ignoring. How many more before tipping point?”

      Here’s #4

      ‘On the relationship between global, hemispheric and latitudinal averaged air surface temperature (GISS time series) and solar activity’

      M.P. Souza Echer, E. Echer, N.R. Rigozo, C.G.M. Brum, D.J.R. Nordemann, W.D. Gonzalez (2012)

      Table 2. Significant periods of the air surface temperature (years), by region:

      Global: 2–2.8; 3.7–6.6; 7.7; 8.3; 9.1; 10.4; 11.5; 20.6; 26.3; 29.6 and 65
      Northern Hemisphere: 2.1–2.8; 3.1–6.6; 7.1; 8.3; 10.2; 11.3; 20.4; 26.4; 54.3 and 70.4
      Southern Hemisphere: 2–2.6; 3.6–5.3; 7.7; 8.3; 9.1; 10; 11.9; 14.2; 17.2; 20.7 and 30.8
      24°N–90°N: 2–2.7; 3.3–5.3; 6.2–7.7; 8.3; 9.9; 11.1; 12.4; 15.2; 20.5; 26.5; 53.1 and 72.2
      44°N–64°N: 2.1–2.8; 3.3–5.6; 6.3–7.4; 9.1–9.9; 11.2; 12.8; 15.4; 26.7; 53.1 and 75.6
      24°N–44°N: 2–2.7; 3–6.4; 7.8; 8.3; 9.1; 12.4; 14.4; 52.7 and 67.1
      Equator–24°N: 2.4–2.8; 3–4.6; 5.1–7.1; 8.2; 9; 10; 11.6; 13.4; 19.6; 25.4; 38.4 and 58.6
      24°N–24°S: 2.6–2.9; 3.2–6.3; 7.1; 9; 11.8; 20; 25.8; 59.9 and 63.4
      Equator–24°S: 2.5–3.6; 4.1–6.3; 7.6; 9; 11.9; 20.2; 58 and 61.4
      24°S–44°S: 2–3.7; 4.2–6.6; 7.5; 8.3; 10.1; 12.2; 32.9 and 59.5
      44°S–64°S: 2.1–3.8; 4.3–6.7; 7.7–8.9; 10.7; 12.8; 15.1; 21.5; 29.4; 41.6 and 98.9
      24°S–90°S: 2–3.6; 4.7–6.7; 11.3; 12.7; 14.5; 17.5; 21.1; 28.7; 34.4 and 108.7

      http://www.sciencedirect.com/science/article/pii/S1364682611002756

      PDF at Google Scholar

  • #
    Richard C (NZ)

    A solar delay model like David’s is thermodynamically sound (a no-brainer). But:

    1) His N-D prediction turns down markedly in 2014 according to Archibald below (I don’t think so – still might though).

    2) He has an 11 year solar-temperature lag (I don’t think so – 14 years based on the start of the “pause”).

    3) He neglects upper OHC because he says a low-pass filter accounts for it (I don’t think so based on upper ocean OHC peak).

    The following is nominal, rough, and simply to make points-of-distinction in approach.

    To determine lag, start with a 14 year solar-atmosphere lag based on the start of the “pause” (2000) lagging solar peak (1986).

    Then add the 14 year lag to the start of the “pause” (2000) because 2000 is roughly the end of the solar peak range.

    Then add another 4 years lag (based on UKMO EN3 upper OHC peak 2004) or another 14 years lag (based on NODC upper OHC peak 2014) to the initial 14 year solar-temperature lag (1986-2000) and you’ve got a competing solar delay prediction to that shown by Archibald:

    https://quadrant.org.au/wp-content/uploads/2014/06/finnish2.jpg

    Using a basis of: solar => ocean => atmosphere system, solar peak 1986, solar peak range 1960 – 2000.

    Lag to when temperature turns down markedly based on UKMO OHC
    1986 + 14 = 2000 + 14 = 2014 + 4 = 2018

    Lag to when temperature turns down markedly based on NODC OHC
    1986 + 14 = 2000 + 14 = 2014 + 14 = 2028

    The approx 65-yr “cycle” (periodicity identified in literature) in temperature must also be overlaid on the curve produced by transfer from solar => ocean => atmospheric temperature. That will alter the dates of downturn above considerably but the future “cycle” changes of phase are unknown.

    We’ll see what’s basically right by 2018.

    • #
      Richard C (NZ)

      >”solar peak 1986, solar peak range 1960 – 2000″

      Solar curve here:

      https://quadrant.org.au/wp-content/uploads/2014/06/FINNISH4.jpg

    • #

      Richard,

      Don’t get so hung up on exact timing.

      The Earth with its oceans constitutes a very complex system with multiple interacting components. The timing of the final system response from any given change is itself highly variable and is arguably never achieved because whist the system is trying to accommodate one change another change occurs.

      • #
        Richard C (NZ)

        >”Don’t get so hung up on exact timing.”

        Yes agree, that’s why I said “rough”, “nominal”.

        Another starting point, rather than 1986, is the start of the highest solar levels around 1960. That would indicate a delay from the leading edge of solar to the leading edge of temperature of 40 years. If the trailing edge of solar is 2000 then downturn could be expected around 2040.

        There’s a number of approaches and this is my second, but just to demonstrate that there are viable alternatives to David’s.

        All will be tested by time of course.

    • #
      Greg Goodman

      “A solar delay model like David’s is thermodynamically sound (a no-brainer).”

      I think you may find you need to use your brain to make that kind of deduction. If he uses a non-causal filter, run backwards, I think there’s very little chance it is “thermodynamically sound”.

      You can’t determine the lag by picking out one point. Do a lag correlation and find the peak correlation.

      If David says he’s using an 11 year lag I suggest you believe him unless you have something more concrete than eye-balling one feature on a graph.
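The lag-correlation approach Greg recommends can be sketched in a few lines: slide one series against the other and take the lag at which the Pearson correlation peaks. A minimal Python illustration with synthetic series that have an 11-year lag built in (all numbers illustrative, not climate data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_lag = 11  # years, built in for illustration

t = np.arange(n)
# A slowly varying "driver" and a noisy "response" lagging it by 11 years.
x = np.sin(2 * np.pi * t / 50.0) + 0.05 * rng.normal(size=n)
y = np.sin(2 * np.pi * (t - true_lag) / 50.0) + 0.05 * rng.normal(size=n)

def lag_correlation(a, b, max_lag):
    """Pearson correlation of b against a for each candidate lag of b behind a."""
    lags = np.arange(max_lag + 1)
    r = np.array([np.corrcoef(a[:len(a) - L], b[L:])[0, 1] for L in lags])
    return lags, r

lags, r = lag_correlation(x, y, max_lag=30)
best_lag = int(lags[np.argmax(r)])  # recovers the built-in lag
```

The peak of the whole correlation curve, not the timing of one eye-balled feature, is what identifies the lag.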

      • #
        Richard C (NZ)

        I’m deducing it from thermodynamic principles, Greg (yes, I’m qualified).

        Energy input (sun) – thermal characteristics of material (ocean, land, air, even ice) – temperature response (lag).

        Go back to Part II and you will find this was David’s system assumption from the outset.

  • #
    Richard C (NZ)

    >”David’s system assumption”

    “We are envisaging some sort of black box, whose input is TSI and whose output is temperature.”

    http://joannenova.com.au/2014/06/big-news-part-ii-for-the-first-time-a-mysterious-notch-filter-found-in-the-climate/

    In reality the black box is materials with diverse thermal properties. Some with more lag than others.

    • #
      Greg Goodman

      That is the basic logical flaw that leads to the erroneous idea of a notch.

      TSI is _one of_ the inputs, not “the” input.

      SST is (hypothetically at least) the output plus noise.

      There is no reason to assume that the LF content in the output is the LF of the input convolved with the transfer function. It’s probably not.

      I see no reason to conclude this is a notch rather than a low-pass, which would be more logical and easily explained physically.

      There is a predominantly 11 y input signal that is not present in the output. The most obvious conclusion from that is that the two are not linked to any discernible degree.

      There may be evidence elsewhere but on the I/O analysis I don’t see it.
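The notch-versus-low-pass distinction above can be made concrete: both shapes remove an 11-year line, but they differ at short periods, which a notch passes and a low-pass suppresses. A toy comparison of amplitude responses (all parameters illustrative, not fitted to any model in the posts):

```python
import numpy as np

periods = np.array([2.0, 5.0, 11.0, 30.0, 100.0])  # years
f = 1.0 / periods  # frequency, cycles per year

# First-order low-pass with a 10-year time constant (illustrative).
tau = 10.0
lowpass = 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

# Idealised notch centred on the 11-year line: unity gain except near f0.
f0 = 1.0 / 11.0
width = 0.02  # notch width in cycles/year, illustrative
notch = 1.0 - np.exp(-(((f - f0) / width) ** 2))

gain_at_11_lp = lowpass[2]     # both shapes suppress the 11-year line...
gain_at_11_notch = notch[2]
```

The discriminating observation is therefore at short periods (a few years): if sub-decadal input variability appears in the output, a notch is implied; if it is absent too, a plain low-pass suffices.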

  • #
    CC Squid

    Please be aware of this post. It indicates that US Temperature records since 1900 have been “gun decked”. Hot temps are hotter in the second half of the 20th century and lower in the first half.

    http://wattsupwiththat.com/2014/06/28/the-scientific-method-is-at-work-on-the-ushcn-temperature-data-set/

  • #
    Don Gaddes

    The BEST report (and other surface temperature modelling) does not take into account the passage of Solar induced orbital ‘Dry’ Cycles, as discovered and predicted by Alex S. Gaddes in his work ‘Tomorrow’s Weather’ (1990). These ‘Dry’ Cycles are longitudinal in scope and move at thirty degrees per Earth Solar month, with the Westward Solar orbit of the Earth’s Magnetic Field. (Note: prevailing weather moves from West to East, with the Earth’s axial spin.) As these Cycles pass over the various surface temperature stations, they create higher temperatures (‘Dry’ Cycle) or lower temperatures (‘Wet’/Normal Period between the ‘Dry’ Cycles); i.e. temperature is governed predominantly by precipitation.

    In the prediction of these ‘Dry’ Cycles it is of great import that the ‘Sunspot Cycle’ number is accurate. It is not ‘around ten years’. It is not ‘11 (or 11.1) years’.
    The number calculated by Alex S. Gaddes is 11.048128 years.

    The current ‘Dry’ Cycle started around 110 degrees East of Prime longitude (circa Beijing) in mid-February 2014. It will reach Australia in early January 2015 and last up to five years (including the influence of a Lunar Metonic Cycle due in 2016).

    The ‘Factor X’ would seem to consist of ‘Solar Particles’ emanating from the 27 Day Rotation Rate latitude of the Sun (the Sunspot Latitude). Alex S. Gaddes suspected that these particles may be neutrinos.

    These particles seem to effect a break up of the Jet Stream Cloud, and as far as Australia is concerned, the vanguard of the ‘Dry’ Cycles also forces the Southern Lows further South into the Southern Ocean.

    It is noted that these ‘Dry’ Cycles (and their ‘Wet/Normal Period counterparts,) are Longitudinal in nature, and thus affect the Arctic and Antarctic simultaneously.

    Fig. 3:

    Earth’s Period (No. 1 Constant)

    Divided by 4 (Obliquity, No. 2 Constant) = Quarter Year

    Multiplied by 27 (Ratio, No. 3 Constant) = 6.75 Years (Regional Drought Cycle)

    Multiplied by 11.028148 Yrs (Sunspot Wave Frequency, No. 4 Constant) = 74.44 Years (Quarterly Sub-cycle of a Full 297.76 Year Sunspot Cycle)

    Divided by 4 (Obliquity, No. 2 Constant) = 18.61 Years (Metonic Cycle of Moon’s Nodes)

    Multiplied by 27 (Ratio, No. 3 Constant) = 502.47 Years (Full Tree-ring Cycle; 3 x 167.49 Year Tree-ring Sub-cycles, each in turn made up of 9 x 18.61 Year Metonic Cycles of the Moon)

    Multiplied by 11.028148 Yrs (Sunspot Wave Frequency, No. 4 Constant) = 5,541.3135 Years (= 2 x 2,770.6567 Year Glacial Cycles, see J. Bray, Ref. No. (?))

    Multiplied by 27 (Ratio, No. 3 Constant) = 37,403.864 Years (The Cycle of Obliquity of the Earth’s Axis)

    Multiplied by 11.028148 Yrs (Sunspot Wave Frequency, No. 4 Constant) = 412,495.34 Years (=?)

    Divided by 4 (Obliquity, No. 2 Constant) = 103,123.83 Years (Precession of *Perihelion and Aphelion)

    *Perihelion: when the Earth is nearest the Sun.

    *Aphelion: when the Earth is furthest from the Sun.

    Revised Solar Rotation Rate = 27 d

    An updated version of the work ‘Tomorrow’s Weather’ (including ‘Dry’ Cycle forecasts to 2055,) is available as a free pdf from [email protected]

    • #
      Greg Goodman

      Some aspects of this sound interesting, other aspects sound like numerology. I wouldn’t comment further without seeing the author’s account rather than your version of it.

      For goodness sake, if there’s something useful there get it on line and get it read.

      “Send me an email and you can have a copy” is hardly going to get it wide diffusion.

      BTW neutrinos are particles that react very little with anything at all, so would be one of the least likely choices as a cause of any interaction.

  • #
    Don Gaddes

    The advantage I have, Greg, is that I know the forecast ‘Dry’ Cycles have arrived in the past exactly on cue – and I have observed these arrivals (and subsequent effects) over some years. I have added these observations as an addendum to the original work.
    The numbers represent derivatives of actual known scientific ‘constants.’
    The table I have quoted previously is directly from the original publication.
    If you wish to obtain a copy of the updated ‘Tomorrow’s Weather’ (including the original publication), send me an email address and I will send you the work. You may then criticise and/or disseminate it as you see fit.
    As for the interaction of Solar particles with the Jet Stream, I direct you to the work of Svensmark (among others).

  • #
    Greg Goodman

    Thanks, I have a copy now. I’ll look at it when I have a bit of time.

    “As for the interaction of Solar particles with the Jet Stream, I direct you to the work of Svensmark (among others).”

    I’m familiar with Svensmark, who hypothesises solar modulation of GCR, not “solar particles” interacting with the Earth’s atmosphere.

  • #
    Don Gaddes

    I do not think Svensmark currently has it right – but his line of approach is encouragingly indicative of possible ‘alteration’ of the Jet Stream. Perhaps the definitive connection to neutrinos/‘Solar Particles’ is still to be made by him, or others at CERN (or elsewhere). At the moment I don’t know the precise ‘mechanism’ either – but the ‘Dry’ Cycle forecast method contained in ‘Tomorrow’s Weather’ does provide exact outcomes in the prediction of drought conditions planet-wide. The ‘W’ (or Weather Factor) emanating from the Solar ‘Sunspot Latitude’ (postulated by Alex S. Gaddes) may indeed correlate in some way with the ‘Factor X’ postulated in David Evans’ paper.

    • #
      Greg Goodman

      I have to admit, by the end I was skim reading. It is a very long article that seems to contain a lot of handwaving and numerology and little concrete science.

      Where was ‘Sunspot Latitude’ and its link to climate explained?

      BTW why does everything get multiplied by 27? Because it’s a magic number, or because of some mathematical relationship? Maybe I missed where that was explained, but it seemed arbitrary.

      I know it may or may not be similar to some solar rotational period, but why ×27? That is not explained.

  • #
    Don Gaddes

    The 27-day Rotation Rate is used because it represents the latitude of the Sun that ‘carries’ the Sunspots (and hence initiates the journey of the ‘Solar Influence’ (W Factor) that is manifest as a ‘Dry’ Cycle on Earth). The Earth’s Rotation Rate (axial spin) is also used. The accepted Rotation Rate of the Sun is 26.75 days at the equator. “According to Strahler (Ref. No. 17), the rotation rate of the Sun differentiates at a slower rate, from lower to higher latitudes.”

    “It seems to me that we ought to be investigating the latitude of the sun which is rotating at the 27 day rate.” (p. 19)

    The link to climate is explained by the fact that the resulting ‘Dry’ Cycle forecasts derived from these numbers have proven to be extremely accurate.

    I realise it is a many-layered and perhaps difficult work. If you wish to read it carefully and make an effort to understand the principles, I am sure you will find it rewarding – otherwise I may not be able to assist you further. In that circumstance, I invite you to await (with the rest of us) the ‘Dry’ Cycle that will arrive over Australia in early January 2015 – and herald a Dry Period lasting up to five years. (See Appendix 2a, pp. 104-106.)

    • #
      Greg

      “The 27 day Rotation Rate is used because..”

      That does not answer my simple question: I know it may or may not be similar to some solar rotational period, but why ×27? That is not explained.

      “I realise it is a many layered and perhaps difficult work.”

      The only thing that is difficult is that there is no explanation of why all these multiplications and divisions are done. That makes it numerology, not science. I read it with the intention of understanding its principles, but it does not seem to have any.

      I asked you to point out what I’ve missed and you have failed to reply to that, which is a shame; I thought there might be more to it.

      Ian Wilson has published on what appear to be standing waves in SH pressure. I thought this might add something.

      In particular, the inclination of the crescent moon and its claimed link to precipitation, if that is accurate. How does this relate to the relative positions of the Earth, Moon and Sun, declination angles, etc.?

      This does not seem to be explained.

  • #
    Don Gaddes

    Read the work.
    In Chapter 1, you will find exposition on the development of a Gravitational Astronomical and Ratios Principle. The Lunar Metonic Cycle, declination angles, etc., are discussed and outlined in Chapter 2 (Fig. 7) – and so on. The multipliers of 27 explain the relationship between Solar and Earth Rotation, a ratio of 1:27 (if the 27-day Sunspot Latitude rotation rate is used; the Earth rotates once each day). Whatever the ‘W’ Factor consists of, and its subsequent effect on Climate, is dependent on both. If you seriously think Rotation Rates are merely ‘numerology’, you have not grasped the basic tenet of the work.
    I can assist you no further until you have fully considered the contents.

  • #
    Greg

    “The multipliers of 27 explain the relationship between Solar and Earth Rotation, a ratio of 1:27”

    This does not “explain” anything. It is simply an observation of different rates of rotation.

    What I want to know is WHY the terrestrial year is divided by four then multiplied by 27.

    Until there is an explanation of WHY this is done and HOW this can be expected to have an effect on climate, it is numerology.

    AFAIKS, there is no “tenet” in the work. It’s just arbitrarily playing with numbers, aka numerology.

    If there were more to it, I guess you would have been able to say what and why, so I suppose that settles it.

    Thanks for your help and replies.

  • #
    Don Gaddes

    According to your definitions Greg, E=MC squared is merely ‘Numerology’. Why did Einstein multiply Mass by Acceleration? Because it fitted his Hypothesis.

    • #
      Greg Goodman

      “According to your definitions Greg, E=MC squared is merely ‘Numerology’. Why did Einstein multiply Mass by Acceleration? Because it fitted his Hypothesis.”

      Well, actually c² isn’t acceleration 😉

      You seem to be confusing Newton’s 2nd law with Einstein. I suggest you quit now before digging yourself a deeper hole.

  • #
    Don Gaddes

    Gee Greg, Sorry if I got it wrong!

  • #

    […] “all those bomb tests must have done something” are JN (Jo Nova) / DE (David Evans) in their BIG NEWS series. They’re currently bogged down fighting off LS over TSI, but when that’s beaten to […]
