JoNova - https://www.joannenova.com.au

Abusing statistics in the name of global warming

UPDATED: Lovejoy has responded (in PDF and in comments; see below).

I could tell from the headline below this was going to be a candidate for the Top-Ten most vacuous papers. It lived up to expectations, and then some.

“Odds that global warming is due to natural factors: Slim to none”

Is there anyone with the lights on at McGill University or “Climate Dynamics”? Surely ScienceDaily ought to have laughed at the press release and sent it back?

Seriously, people wield the magic wand of “statistical significance” without realizing that a/ it isn’t magic, and b/ tiny p-values can still mean nothing. (Depends on the hypothesis and assumptions underneath, hmm?) Lovejoy looked at 500 years of a very squiggly line (the last 5% of this graph) and pronounced his magic tool could tell whether the last wiggle was… ahem, unnatural. If that looks like tea-leaf reading to you, join the club.

Modern climate science can predict virtually none of the spikes and wiggles on this graph. Note the graph doesn’t include the last 100 years, which adds about 1 °C to the rise.
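For anyone who wants to see how a tiny p-value can mean nothing, here is a minimal sketch (ours, not from the paper): fit an ordinary least-squares trend to a pure random walk, a stand-in for natural variability with strong persistence. The test’s null hypothesis assumes independent errors, which the series violates, so it happily reports a “highly significant” trend in what is, by construction, nothing but natural noise.

```python
# Minimal sketch (not from Lovejoy's paper): a "highly significant" trend
# found in pure random-walk noise when the test wrongly assumes independent errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500                               # e.g. 500 "annual" values
walk = np.cumsum(rng.normal(size=n))  # purely "natural" persistence: a random walk
years = np.arange(n)

# Ordinary least-squares trend test; its null assumes independent,
# identically distributed errors, which a random walk violates badly.
result = stats.linregress(years, walk)
print(f"slope = {result.slope:.4f}, p-value = {result.pvalue:.2e}")
# The p-value is usually tiny, yet nothing here is "unnatural":
# the "significance" reflects the mismatched null, not the cause of the wiggles.
```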

Don’t look now, but it’s another Nail In the Deniers Coffin.

ScienceDaily

Odds that global warming is due to natural factors: Slim to none

Date: April 11, 2014
Source: McGill University
Summary: An analysis of temperature data since 1500 all but rules out the possibility that global warming in the industrial era is just a natural fluctuation in the earth’s climate, according to a new study.

An analysis of temperature data since 1500 all but rules out the possibility that global warming in the industrial era is just a natural fluctuation in the earth’s climate, according to a new study by McGill University physics professor Shaun Lovejoy.

Judge the calibre of the man by his, er… name-calling:

“This study will be a blow to any remaining climate-change deniers,” Lovejoy says. “Their two most convincing arguments – that the warming is natural in origin, and that the computer models are wrong – are either directly contradicted by this analysis, or simply do not apply to it.”

And we all know that he conducted his research totally impartially, and if the results had suggested anything else, he’d tell the world “climate-change deniers” were right. Sure.

The actual abstract takes the farce to high art:

Although current global warming may have a large anthropogenic component, its quantification relies primarily on complex General Circulation Models (GCM’s) assumptions and codes; it is desirable to complement this with empirically based methodologies

The one useful thing about this paper is that it admits that GCMs are all the alarmists have and that their case boils down to “assumptions”. But the idea that Lovejoy did anything empirical is mind-boggling.

By using CO2 radiative forcings as a linear surrogate for all anthropogenic effects we estimate the total anthropogenic warming and (effective) climate sensitivity finding: ΔT_anth = 0.87 ± 0.11 K, λ_2xCO2,eff = 3.08 ± 0.58 K.

So I went straight to the Great William Briggs, who has indeed already shed tears over this paper:

Lovejoy Update

To show you how low climatological discourse has sunk, in the new paper in Climate Dynamics Shaun Lovejoy (a name which we are now entitled to doubt) wrote out a trivially simple model of global temperature change, after which he inserted the parenthetical words “skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis”. In published comments he also fixated on the word “deniers.” If there is anybody left who says climate science is no different than politics, raise his hand. Anybody? Anybody?

His model, which is frankly absurd, is to say the change in global temperatures is a straight linear combination of the change in “anthropogenic contributions” to temperature plus the change in “natural variability” of temperature plus the change in “measurement error” of temperature. (Hilariously, he claims measurement error is of the order +/- 0.03 degrees Celsius; yes, three-hundredths of a degree: I despair, I despair.)

His conclusion is to “reject”, at the gosh-oh-gee level of 99.9%, that the change of “anthropogenic contributions” to temperature is 0.

Can you see it? The gross error, I mean. His model assumes the changes in “anthropogenic contributions” to temperature and then he had to supply those changes via the data he used (fossil fuel use was implanted as a proxy for actual temperature change; I weep, I weep). Was there thus any chance of rejecting the data he added as “non-significant”?

Is there any proof that his model is a useful representation of the actual atmosphere? None at all. But, hey, I may be wrong. I therefore challenge Lovejoy to use his model to predict future temperatures. If it’s any good, it will be able to skillfully do so. I’m willing to bet good money it can’t.
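To make Briggs’ point concrete, here is a toy sketch (ours, not Lovejoy’s code or data) of the structure he describes: temperature written as an anthropogenic piece plus natural variability plus error, with a smoothly rising CO2 surrogate supplied as the anthropogenic regressor. Test whether that surrogate’s coefficient is zero and, because both series drift over the record, the test tends to “reject” even when the temperature series contains no CO2 signal at all.

```python
# Rough sketch of the structure Briggs describes (not Lovejoy's code or data):
# temperature = anthropogenic part + natural part + error, with a smoothly
# rising CO2 surrogate supplied as the "anthropogenic" regressor, then a
# "test" of whether its coefficient is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 160                                                  # e.g. 160 "years"
co2_surrogate = np.linspace(0.0, 2.0, n)                 # any smoothly rising forcing proxy
temperature = np.cumsum(rng.normal(scale=0.05, size=n))  # persistent natural wiggles, no CO2 term

# Regress temperature on the rising surrogate and test whether the coefficient is zero.
res = stats.linregress(co2_surrogate, temperature)
print(f"'anthropogenic' coefficient = {res.slope:.3f}, p-value = {res.pvalue:.2e}")
# Because both series drift, the test tends to reject the zero hypothesis even
# though temperature was generated with no CO2 term at all: the "significance"
# comes from supplying a trending regressor, not from physics.
```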

William Briggs writes a lot more on the theme that statistical significance is highly overrated.

Statistical “significance” works by tossing some data at your model and hoping that, via one of a multitude of mathematical incantations, one of these many parameters turns out to be associated with a wee p-value (defined as less than the magic number; only adepts know this figure, so if you don’t already have it, I cannot tell you).

If you don’t get a wee p-value the first time, you keep the model but change the incantation. There are several, which practically guarantees you’ll find joy. Statisticians call this process “hypothesis testing.” But you can think of it as providing “proof” that your hypothesis is true.
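Briggs’ “incantations” line is easy to demonstrate with a toy simulation (ours, not his): run a handful of standard tests on the same pure-noise data and keep whichever gives the smallest p-value; the chance of finding a “wee” one is far above the advertised 5%.

```python
# Toy sketch of "keep the model, change the incantation": several standard
# tests applied to the same pure-noise data, reporting only the smallest
# p-value, which inflates the chance of a "wee" one well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
trials = 1000
hits = 0
for _ in range(trials):
    x = rng.normal(size=40)
    y = rng.normal(size=40)               # genuinely unrelated to x
    pvals = [
        stats.pearsonr(x, y).pvalue,      # linear correlation
        stats.spearmanr(x, y).pvalue,     # rank correlation
        stats.kendalltau(x, y).pvalue,    # another rank correlation
        stats.ttest_ind(x, y).pvalue,     # difference in means
        stats.mannwhitneyu(x, y).pvalue,  # nonparametric difference
    ]
    if min(pvals) < 0.05:
        hits += 1

print(f"At least one 'significant' result in {hits / trials:.0%} of trials")
# Each test alone is calibrated at 5%, but shopping among five of them makes
# a spurious "discovery" several times more likely.
```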

He also explains how “chance” and “random variation” are not actual forces (marvel that this even needs to be said). There’s a bit of a problem with something known as cause and effect. Our expressions of ignorance are getting mistaken for forcings:

The global temperature (as measured operationally) has certainly changed since the 1800s. Something, or some things, caused it to change. It is impossible—as in impossible—that the cause was “natural random variation”, “chance” or anything like that. Chance and randomness are not causes; they are not real, not physical entities, and therefore cannot be causes.

They are instead measures of our ignorance. All physical and probability models (or their combinations) are encapsulations of our knowledge; they quantify the certainty and uncertainty that temperature takes the values it does. Models are uncertainty engines.

The only evidence weaker than hypothesis tests are raw assertions and fallacies of appeal to authority.

Yes, the climate debate is down to the dregs. Things are so utterly preposterous we recognize these papers are a form of satire. We just wonder when the authors will get the joke.

This is the point, ladies and gentlemen. The sensible folk are all gone from global warming now.

Abstract

Although current global warming may have a large anthropogenic component, its quantification relies primarily on complex General Circulation Models (GCM’s) assumptions and codes; it is desirable to complement this with empirically based methodologies. Previous attempts to use the recent climate record have concentrated on “fingerprinting” or otherwise comparing the record with GCM outputs. By using CO2 radiative forcings as a linear surrogate for all anthropogenic effects we estimate the total anthropogenic warming and (effective) climate sensitivity finding: ΔT_anth = 0.87 ± 0.11 K, λ_2xCO2,eff = 3.08 ± 0.58 K. These are close to the IPCC AR5 values ΔT_anth = 0.85 ± 0.20 K and λ_2xCO2 = 1.5–4.5 K (equilibrium climate sensitivity), and are independent of GCM models, radiative transfer calculations and emission histories. We statistically formulate the hypothesis of warming through natural variability by using centennial scale probabilities of natural fluctuations estimated using scaling, fluctuation analysis on multiproxy data. We take into account two nonclassical statistical features – long range statistical dependencies and “fat tailed” probability distributions (both of which greatly amplify the probability of extremes). Even in the most unfavourable cases, we may reject the natural variability hypothesis at confidence levels >99 %.
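For readers curious what “scaling fluctuation analysis” looks like in practice, here is a minimal sketch (ours, not Lovejoy’s code, run on synthetic data rather than the multiproxy record): compute Haar fluctuations, i.e. the difference between the mean of the second half of a window and the mean of the first half, over a range of time scales, and fit a scaling exponent that summarizes the long-range dependence the abstract refers to.

```python
# Minimal sketch (not Lovejoy's code) of a Haar fluctuation analysis on a
# synthetic series: at each time scale, take the difference between the mean
# of the second half of a window and the mean of the first half, then see how
# the typical fluctuation size scales with the window length.
import numpy as np

def mean_haar_fluctuation(series, scale):
    """Mean absolute Haar fluctuation for windows of length `scale` (even)."""
    half = scale // 2
    flucts = []
    for start in range(0, len(series) - scale + 1, half):
        window = series[start:start + scale]
        flucts.append(abs(window[half:].mean() - window[:half].mean()))
    return float(np.mean(flucts))

rng = np.random.default_rng(3)
proxy = np.cumsum(rng.normal(size=512))   # synthetic stand-in for a temperature proxy

scales = [4, 8, 16, 32, 64, 128]
fluct = [mean_haar_fluctuation(proxy, s) for s in scales]

# Fit log S(dt) ~ H * log dt; the exponent H summarizes the long-range
# statistical dependence that the abstract says must be taken into account.
H, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
print(f"estimated fluctuation exponent H ≈ {H:.2f}")
```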

REFERENCE

Lovejoy, S. (2014). Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming. Climate Dynamics. doi:10.1007/s00382-014-2128-2

