- JoNova - https://www.joannenova.com.au -

Political bias in peer reviewed science

An excellent article in The New Yorker: Is Social Psychology Biased Against Republicans?

It’s an article about the failings of peer review and research design in psychology due to the dominance of one particular political ideology (rather than a spread more representative of the total population). You won’t be shocked to find that liberal, left-leaning views dominate the profession. The paper it discusses is by Jonathan Haidt and co-authored by our friend Jose Duarte, the psychology PhD candidate and blogger who entertainingly and comprehensively dissected Lewandowsky on his blog: Do we hate our participants?

It will be no surprise that controversial psychology papers (those that disagree with a reviewer’s world view) are usually treated more harshly, even when the data are equally strong. So, thinking of another field we know, what does it mean for research design and peer review when 97% of certified climate scientists hold one world view? (They agree not only on the scientific hypothesis but on the political action as well, and they boast about it.) What chance does a “controversial” paper have? Has anyone done a study of the political diversity of official climate scientists? There are plenty of studies claiming that general opposition to climate action splits along political lines.

However much psychology is slowed by its political bias, climate science is surely doubly so. Indeed, the whole field would make a case study. It’s a long but good article in The New Yorker, though I see no solutions to the problem suggested. It needs more than checklists. It needs incentives. Some fields of science are 100% dependent on big-government funds; if there were also a large sector of independent philanthropic research funding competing, it would not be so difficult to find independent thinkers willing to hold big-government world views to account.

The New Yorker: Is Social Psychology Biased Against Republicans?

Most academic social psychologists are in favor of big government

A 2012 survey of social psychologists throughout the country found a fourteen-to-one ratio of Democrats to Republicans. But where were the hard numbers that pointed to bias, be it in the selection of professionals or the publication process, skeptics asked?

… Tilburg University psychologists Yoel Inbar and Joris Lammers published the results of a series of surveys conducted with approximately eight hundred social psychologists—all members of the Society for Personality and Social Psychology. In the first survey, they repeated a more detailed version of Haidt’s query: How did the participants self-identify politically? The question, however, was asked separately regarding social, economic, and foreign-policy issues. Haidt, they found, was both wrong and right. Yes, the vast majority of respondents reported themselves to be liberal in all three areas. But the percentages varied. Regarding economic affairs, approximately nineteen per cent called themselves moderates, and eighteen per cent, conservative. On foreign policy, just over twenty-one per cent were moderate, and ten per cent, conservative. It was only on the social-issues scale that the numbers reflected Haidt’s fears: more than ninety per cent reported themselves to be liberal, and just under four per cent, conservative.

When Inbar and Lammers contacted S.P.S.P. members a second time, six months later, they found that the second element of Haidt’s assertion—that the climate in social psychology was harsh for conservative thinkers—was on point. This time, after revealing their general political leanings, the participants were asked about the environment in the field: How hostile did they think it was? Did they feel free to express their political ideas? As the degree of conservatism rose, so, too, did the hostility that people experienced. Conservatives really were significantly more afraid to speak out. Meanwhile, the liberals thought that tolerance was high for everyone. The more liberal they were, the less they thought discrimination of any sort would take place.

Peer reviewers are human: they prefer studies that support their world view.

Perhaps even more potentially problematic than negative personal experience is the possibility that bias may influence research quality: its design, execution, evaluation, and interpretation. In 1975, Stephen Abramowitz and his colleagues sent a fake manuscript to eight hundred reviewers from the American Psychological Association—four hundred more liberal ones (fellows of the Society for the Psychological Study of Social Issues and editors of the Journal of Social Issues) and four hundred less liberal (social and personality psychologists who didn’t fit either of the other criteria). The paper detailed the psychological well-being of student protesters who had occupied a college administration building and compared them to their non-activist classmates. In one version, the study found that the protesters were more psychologically healthy. In another, it was the more passive group that emerged as mentally healthier. The rest of the paper was identical. And yet, the two papers were not evaluated identically. A strong favorable reaction was three times more likely when the paper echoed one’s political beliefs—that is, when the more liberal reviewers read the version that portrayed the protesters as healthier.

More than twenty years later, the University of Pennsylvania marketing professor J. Scott Armstrong conducted a meta-analysis of all studies of peer review conducted since (and including) Abramowitz’s, to determine whether there was, in fact, a systemic problem. He concluded that the peer-review system was highly unfair and discouraged innovation. Part of the reason stemmed from known bias: papers from more famous institutions, for instance, were judged more favorably than those from unknown ones, and those authored by men were viewed more favorably than those by women if the reviewers were male, and vice versa.

One early study had psychologists review abstracts that were identical except for the result, and found that participants “rated those in which the results were in accord with their own beliefs as better.” Another found that reviewers rejected papers with controversial findings because of “poor methodology” while accepting papers with identical methods if they supported more conventional beliefs in the field. Yet a third, involving both graduate students and practicing scientists, showed that research was rated as significantly higher in quality if it agreed with the rater’s prior beliefs. When Armstrong and the Drake University professor Raymond Hubbard followed publication records at sixteen American Psychological Association journals over a two-year period, comparing rejected to published papers—the journals’ editors had agreed to share submitted materials—they found that those about controversial topics were reviewed far more harshly.

Read the whole article in The New Yorker
