Researchers have found that nearly two fifths of the journal papers they sampled reported results that could not be replicated.
The report, published in Nature Human Behaviour, looked at 21 social science studies published between 2010 and 2015 in the journals Science and Nature.
Brian Nosek, of the University of Virginia, led researchers who tested the studies by conducting the same experiments, using methods approved by the original authors, to check their conclusions.
They increased the sample sizes by approximately a factor of five in an effort to increase accuracy.
Of the 21 studies, eight could not be replicated - that's 38 per cent. And among the 13 studies whose results were replicated, the effects measured were on average only about half the size initially reported - though Nosek attributes this to the increase in sample size.
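The percentages above follow directly from the reported counts; a quick arithmetic check (the counts are from the article, nothing else is assumed):

```python
# Replication counts reported for the 21 Science/Nature studies.
failed = 8
total = 21
replicated = total - failed

print(f"failed to replicate: {failed}/{total} = {failed / total:.0%}")
print(f"replicated: {replicated}/{total} = {replicated / total:.0%}")
```

Eight of 21 rounds to the 38 per cent quoted, leaving 13 studies (about 62 per cent) that did replicate.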
In a statement about his findings, he said:
Studies that obtain a significant result are likely to be exaggerations of the actual effect size.
It is widely accepted that it is difficult to make big generalisations about social, scientific or medical topics - any topic - with a small sample size.
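The link between small samples and exaggerated effects can be made concrete with a toy simulation. This is a hedged sketch of the general "significance filter" phenomenon, not the Nature Human Behaviour analysis itself: the true effect size, sample sizes, and test are all assumptions chosen for illustration. Among simulated studies that clear a 5% significance threshold, the average measured effect overshoots the true one, and the overshoot shrinks as the sample grows - roughly the pattern Nosek describes.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2  # assumed true mean effect, in standardised units
SIGMA = 1.0        # assumed population standard deviation

def observed_effect(n):
    """Sample mean of n draws from a population whose true mean is TRUE_EFFECT."""
    return statistics.fmean(random.gauss(TRUE_EFFECT, SIGMA) for _ in range(n))

def significant(effect, n):
    # Crude two-sided z-test at the 5% level: |effect| / (sigma / sqrt(n)) > 1.96.
    return abs(effect) / (SIGMA / n ** 0.5) > 1.96

def mean_significant_effect(n, studies=20_000):
    """Average measured effect among only the studies that reach significance."""
    published = [e for e in (observed_effect(n) for _ in range(studies))
                 if significant(e, n)]
    return statistics.fmean(published)

small = mean_significant_effect(n=20)    # small, original-style sample
large = mean_significant_effect(n=100)   # a sample about five times larger

print(f"true effect:                    {TRUE_EFFECT}")
print(f"mean significant effect, n=20:  {small:.2f}")
print(f"mean significant effect, n=100: {large:.2f}")
```

With the small sample, only unusually large flukes cross the significance bar, so the surviving estimates are inflated well above the true value; with the larger sample the bar is lower relative to the noise, and the significant estimates sit much closer to the truth.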
These results feed into a much wider scientific trend called the reproducibility crisis. The field of psychology research has taken the brunt of this scepticism, and a 2015 report detailed that just 36 per cent of 97 studies could be replicated.
Richard Klein, who has previously worked with Nosek, responded to the analysis and told Nature:
The emphasis on novel, surprising findings is great in theory. But in practice it creates publication incentives that don't match the incremental, careful way science usually works.
Some scientists objected to the way Nosek's team reproduced their findings, but one, Will Gervais, conceded that his own 2011 study was "downright silly".
He told Vox that the sample size had been "tiny".
It was a really tiny sample size, and barely significant… I’d like to think it wouldn’t get published today.
Nosek also set up a separate experiment, a 'prediction market', in which researchers bet on which results would replicate and which would not; the bets put the replication rate at approximately 61 per cent.
"If the original result was surprising, participants report having a sense that it is less likely to be true," Nosek said.
Hence the aphorism that extraordinary claims require extraordinary evidence.