Scientists often insist that they follow the data wherever it leads. A study has suggested, however, that their politics may also be guiding them.
It found that different teams of social science researchers who were asked to analyse identical data ended up with wildly different results. In the experiment, those differences were closely linked to the researchers’ views on immigration.
The study recruited 158 social scientists, who were organised into 71 teams. They were all given the same large collection of data and asked to answer the same question: do levels of immigration affect public support for welfare programmes?
No two teams produced exactly the same answer. Some concluded that higher levels of immigration weakened support for state-funded welfare schemes. Others found precisely the opposite. Many found little effect at all.
The differences were not random. Before beginning, each researcher was asked about their own views on immigration policy — whether laws should be relaxed or tightened.
Teams with pro-immigration views were consistently more likely to conclude that immigration increased public support for welfare spending. Teams with anti-immigration views tended to find the reverse. Teams whose views fell somewhere in the middle found next to no effect.
All had worked with the same material: large, publicly available international surveys from the International Social Survey Programme, which asked people in different countries about their views on government responsibility for welfare provision, including healthcare, unemployment benefits and housing. The surveys spanned 1985 to 2016 and were combined with country-level measures of immigration, such as the share of foreign-born residents and annual migration flows.
The study found no firm evidence of fraud or obvious manipulation. There was no sign of falsified data or crudely cherry-picked results. Instead, the influence of ideology appeared through judgment calls.
Researchers had to decide which countries to include, which years of data to analyse, how to define immigration, how to combine survey questions and which analytical tools to use. Each choice could be defended on its own. But the study, which has been published in the journal Science Advances, suggested that particular combinations of choices reliably nudged results in one direction or the other.
For Nate Breznau of the German Institute for Adult Education, who co-led the study with researchers at Harvard University, the findings reflected broader problems in how research is carried out. Researchers often have only a limited amount of training, he said. They are, like everybody else, prone to confirmation bias, gravitating towards answers that match their existing beliefs.
Add pressure to publish striking results and the risk of skewed conclusions grows, Breznau argued.
Peer review, he added, was an inadequate safeguard. “Peer review is a poor solution,” he said. “If you give different peer reviewers the same paper, they will come to a huge range of judgments about the paper. The publishing system is in some ways a lottery.”
A better way, he said, would be what he calls “adversarial collaboration”, where researchers who disagree work together from the outset.