Stephan Lewandowsky is rattled, and not surprisingly. His blog has gone from a steady run of posts drawing zero to three comments each up to 200, and the skeptics are armed with cutting questions.
But the more he writes, the worse it gets. Skeptics have picked apart his methods, his data, his transparency, and his conclusions. His latest responses amount to childish taunts and variations on name-calling. What place does an unrelated smear have in a science debate? It’s an effort to distract people.
His paper, in press, has been shown to have a misleading headline, with worthless conclusions based on a statistically insignificant number of responses, using a clumsy one-sided test — the aim of which was obvious to most readers. When asked for data he provided answers for 32 questions, but still withholds the results for a quarter of his original survey, including the basic demographics. He changed the order of questions depending on the blog he sought replies from — effectively putting up different versions of the survey (see below for his explanation). He emailed the alarmist anti-skeptic bloggers himself, or was named in the emails to them, while he used an unnamed assistant to email the skeptical blogs. None of these non-standard methods were described in his paper.
It is an unusual professional demeanor for a professor of science to write in a genre of attempted parody. The dismissive, puerile efforts to mock those who are seriously dissecting his work contribute little to humankind’s knowledge. For a man too busy to answer questions vital to his work, why try his hand at comedy?
Shame about those public funds eh?
The man is supposed to be an expert on the topic of conspiracies, yet he can’t define them scientifically, espouses conspiracies himself, and is blind to that because he thinks his conspiracies are proven facts. (Where are all those cheques from Big Oil that supposedly outweigh the billions documented as vested interests in the case for alarm?) Furthermore, he made the unlikely claim that questions from skeptics about his methodology amounted to proof of “conspiratorial thinking” — despite there being no conspiracy or co-conspirators postulated, just his own incompetent work. If this is how the man defines “conspiracies”, no wonder he has so much trouble writing surveys on the topic. Skeptics asked which blogs he had contacted for his research. He behaved as if it were unreasonable to expect him to back up his statements, or to provide the emails sent for publicly funded work.
UPDATE: A day after we skeptics figured out who four of the five bloggers were, and I updated the page here, Lewandowsky finally gave up the names, still claiming (improbably) that he needed special approval to release emails that were never private, and never under an ethical question in the first place. He admonishes skeptics for “outing” his assistant, and says they should have searched their inboxes instead — except they did: they searched under “Lewandowsky”, “Oberauer” and “Gignac” (as any rational person would). Lewandowsky still hasn’t explained why he personally contacted the anti-skeptic blogs but not the key skeptical ones. Stephan claims skeptics do shoddy record keeping, yet he’s the one who didn’t bother searching the internet to find that his own survey was hosted by JunkScience.
It’s all about the perception
To save face, and keep some semblance of “winning”, he ignores the major flaws in his work, and posts somewhat triumphantly on minor points. The headline result in the paper was, after all, based on only four responses from so-called “skeptics” on the moon landing conspiracy, some or all of which were likely fake. If he had surveyed the audience he wrote the paper about (instead of asking those who virulently dislike the group in question), it goes without saying (or it ought to) that the results “might” be different.
Despite being a professor of psychology, Lewandowsky was baffled that this was an issue. He seems to think that results would be the same no matter what site the survey was hosted on:
To clarify, this means that participants were recruited from those blogs that posted the link—not those that did not. One might therefore presume that attention would focus on those blogs that provided entry points to the survey, not those that did not, because it is entirely unclear how the latter might contribute to the results of the survey. For example, the website of the British RSPCA also did not post a link to the survey, and neither did the Australian Woolworths website, so how might their non-involvement affect the results? I am keen to hear about potential mechanisms, perhaps we have overlooked something.
So armed with a total sample size of four for the headline point, he mocks the scientists who point out the obvious — that not only is the sample too small to be meaningful, but even if it were larger, such a skewed, non-representative cohort is a separate and significant problem. If you wanted to understand what motivates Greens, would you post surveys on One Nation sites?
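The sample-size point can be made concrete with a standard calculation. Here is a minimal sketch (not from the paper; an ordinary Wilson score interval, standard library only) of how wide the uncertainty is when a “headline” subgroup contains only four respondents:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Even if all four "skeptic" responses agreed, the plausible range for the
# underlying proportion still spans roughly half the scale.
lo, hi = wilson_interval(4, 4)
print(f"4 of 4: 95% CI ≈ ({lo:.2f}, {hi:.2f})")  # roughly (0.51, 1.00)
```

With n = 4 the interval covers about half of all possible values — that is the statistical sense in which the headline subgroup is far too small to support any conclusion, quite apart from who it was sampled from.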
Posting the survey on anti-skeptic sites was especially a problem when coupled with the ease of entering anonymous fake responses (as Lucia pointed out). It’s not a question of “did anyone fake the response” — the problem is “how many?”
UPDATE: Steve McIntyre tried to ask about the problem of fake responses on Lewandowsky’s blog and was censored for being inflammatory.
Finally, try to imagine that Lewandowsky has a clinical, dispassionate interest in his research topic
We would assume he’d be happy to help reach the group he wants to study, and would have been disappointed at the time not to have a larger and remotely representative sample. He seems to be making a specialty of focusing on a very specific group of the population, so you would think he would be keen to foster decent relationships with the few people who have instant access to thousands of the very subjects he professes to be dissecting. Instead, Lewandowsky is amused that key skeptic bloggers don’t realize he tried to contact them, that his research request has sat ignored for two years, and that he missed out on getting meaningful results. He taunts skeptics with a hint, but without being helpful enough to give them the essential details they need to search for. He doesn’t provide the subject header, or the name of the assistant who sent the messages:
“Should any others want to continue searching their correspondence, it might be helpful to know that my assistant has just re-read old correspondence from some time ago (e.g., from Thu, 23 Sep 2010 08:38:33 -0400) with considerable amusement in light of the frivolous accusations flying about the internet that we may not have contacted those blogs with a request to post a link.
What they needed to know (and figured out without his help) was the name “Charles Hanich”.
UPDATE: Stephan also bizarrely claims he gave skeptics the search keys:
One of the phrases he calls a “key” was “Thanks. I’ll take a look at that”, which is so generic it’s useless — I searched 33,000 “sent” items and found hundreds of emails with that phrase. He also gave the date Sept 23, but not Sept 6. He didn’t release the keyword that mattered, “Hanich”, even though the assistant’s name was on the official email. The truth is that despite his lack of help, skeptics had it almost completely figured out. Honest help from him would have meant the questions were never asked in the first place.
Why not rearrange the survey questions?
In response to legitimate concerns that he used four different survey URLs instead of one common one, the professor responds with babble:
Finally this new friend from Conspirania is getting some legs.
About time, too, I was getting lonely.
Astute readers will have noted that if the Survey ID’s from above are vertically concatenated and then viewed backwards at 33 rpm, they read “Mitt Romney was born in North Korea.”
To understand the relevance of Mr Romney’s place of birth requires a secret code word. This code word, provided below, ought to be committed to memory before burning this post.
So here it is, the secret code. Read it backwards: gnicnalabretnuoc.
Translating the baby-talk: “gnicnalabretnuoc” reversed is “counterbalancing”, a common technique in studies where surveys are served up with questions in random or alternating order. But there would be nothing random about giving skeptics one order and alarmists another, if that is what occurred. Did the Lewandowsky team split the four arrangements evenly between skeptic and alarmist blogs? It is impossible to say (or to replicate) without the details of which survey version went with which link. And if he attempted to randomize things manually, isn’t that noteworthy in the methods section, and doesn’t it at least deserve a polite explanation on his taxpayer-funded blog?
- The Deltoid, Tamino, Mandia and Hot-Topic blogs were sent surveyID=HKMKNF_991e2415 on about August 29th. That survey is on the archive, and starts with 6 questions about free markets.
- Bickmore and Few Things had the surveyID=HKMKNG_ee191483 also about Aug 29, but this one doesn’t seem to be on the archive.
- Steve McIntyre and Marc Morano were sent surveyID=HKMKNI_9a13984. This survey is on the archive, and it starts with questions about how happy you are with life. Likewise Junk Science was sent surveyID=HKMKNI_9a13984&UID=3313891469 (which I presume is the same?)
- Roger Pielke Jr and Roy Spencer were sent surveyID=HKMKNH_7ea60912.
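For contrast, genuine counterbalancing assigns each respondent a question order at random, independent of which blog referred them, so that order effects wash out across the whole sample. A minimal sketch of that standard procedure (the block names are hypothetical placeholders, not the survey’s real sections):

```python
import random

# Hypothetical question blocks -- placeholders, not Lewandowsky's actual items.
BLOCKS = ["free_markets", "climate_science", "conspiracy_items", "demographics"]

def counterbalanced_order(rng=None):
    """Give one respondent a randomly shuffled block order."""
    rng = rng or random.Random()
    order = list(BLOCKS)
    rng.shuffle(order)
    return order

# Each respondent draws an order independently; the referring blog plays no role.
print(counterbalanced_order(random.Random(0)))
```

Fixing one static order per blog, as the four survey IDs above suggest, is the opposite of this: question order then varies systematically with the audience, which is exactly the confound counterbalancing is meant to remove.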
tlitb1 asks in comments
I hear that there is a similar phenomenon in psychological studies – where the order of questioning is likely to affect the outcome.
If there was a system of randomised surveys used don’t you think it should have been mentioned in the Method section of the paper?
I think we know that Steve McIntyre and the JunkScience site were offered one flavour of survey, so I would assume that the other skeptic sites were at least offered the others to mix this up. Is this what happened? This seems a hard-to-manage randomising technique that can’t be controlled for. Were attempts made to control for balancing the counterbalancing over the cohorts of “pro-science” and “skeptic” targets?
I’m interested in your theory of counterbalancing. It normally refers to presenting half the Likert questions with the highest preference at the top of the scale, and the other half with it at the bottom. You apparently mean something different. What do you think it is?
When will we be able to view the different iterations of the survey?
When will we be able to see how many respondents filled out each version?
Why would you send invitations to bloggers to post the survey without attaching your name to it?
Why would you discuss the objectives of a survey with potential respondents while the survey was still in the field?
Why do you not attach numbers of respondents, as is customary, to your discussion of results in your paper?
I shall have a lot more to say soon about the smears and how preposterous they are in a separate post.
PART I Lewandowsky – Shows “skeptics” are nutters by asking alarmists to fill out survey
PART II 10 conspiracy theorists makes a moon landing paper for Stephan Lewandowsky (Part II) PLUS all 40 questions
PART III Lewandowsky hopes we meant “Conspiracy” but we mean “Incompetence”
PART IV Steve McIntyre finds Lewandowsky’s paper is a “landmark of junk science”
PART V Lewandowsky does science by taunts and attempted parody instead of answering questions
Lewandowsky, S., Oberauer, K., & Gignac, G. E. (in press). NASA faked the moon landing—therefore (climate) science is a hoax: An anatomy of the motivated rejection of science. Psychological Science.