How reliable is scientific research? The validity of science has recently been questioned: people worry that scientists’ biases interfere so much that similar experiments don’t produce the same results. However, a new study based on a random sample of scientific studies found that overall bias is actually quite low, and that no field is markedly more biased than another. As a whole, science is reliable. That said, small studies were found to overestimate the importance of their results, and preliminary and highly cited studies published in peer-reviewed journals are also more likely to be biased.
Scientists at Stanford University explored biases across all fields of scientific research by conducting a meta-meta-analysis. Meta-analyses compare multiple studies on a similar topic; these researchers compared multiple meta-analyses. They compiled 3,042 meta-analyses encompassing almost 50,000 studies across 22 fields of science. The goal was to get a picture of the situation across all fields, with random samples of studies from social science to physics. For each study in each meta-analysis, they recorded the effect size, a measure that quantifies the difference between two groups so that results from different studies can be compared. They then searched for biases that explained a study’s likelihood of overestimating effect sizes.
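To make "effect size" concrete, here is a minimal sketch of one common effect-size measure, Cohen's d (the standardized difference between two group means). The paper aggregates effect sizes across meta-analyses; this toy example, with made-up measurements, only illustrates what a single study's effect size is.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Unbiased sample variances, then the pooled standard deviation
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical treatment vs. control measurements (illustrative numbers only)
treatment = [5.1, 5.8, 6.2, 5.5, 6.0]
control = [4.8, 5.0, 5.2, 4.9, 5.1]
print(round(cohens_d(treatment, control), 2))  # a large standardized difference
```

Because the effect size is standardized, a meta-analysis can compare it across studies that measured different things on different scales.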
The major takeaways
You should be more cautious when looking at small, preliminary, and highly cited studies. Small studies were found to have the most exaggerated results. It isn’t clear whether this is due to selective reporting or to genuine differences between small and large studies. A small study with just a few subjects can easily be skewed; you need a large sample size to get an accurate picture of what’s going on. A small study may therefore show a strong effect that doesn’t hold up when it is repeated or when the sample size is increased. That is fine for preliminary work, but other scientists and the press might treat these small studies as if their findings were true in general.
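This small-study inflation can be illustrated with a toy simulation (all numbers here are hypothetical, chosen purely for illustration, not taken from the paper): if only results that cross some "impressive enough" reporting threshold get written up, small studies overshoot the true effect far more than large ones, because their estimates are noisier.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.3   # assumed true difference between group means
SIGMA = 1.0         # within-group standard deviation
THRESHOLD = 0.5     # hypothetical cutoff: only "impressive" effects get reported

def observed_effect(n):
    """One simulated two-group study with n subjects per group."""
    a = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(n)]
    b = [random.gauss(0.0, SIGMA) for _ in range(n)]
    return statistics.mean(a) - statistics.mean(b)

def mean_reported_effect(n, trials=2000):
    """Average effect among the simulated studies that clear the reporting threshold."""
    reported = [e for e in (observed_effect(n) for _ in range(trials)) if e > THRESHOLD]
    return statistics.mean(reported)

small = mean_reported_effect(n=10)
large = mean_reported_effect(n=200)
print(f"small studies report ~{small:.2f}, large studies ~{large:.2f}, "
      f"true effect is {TRUE_EFFECT}")
```

In this sketch both study sizes overestimate the true effect once a reporting filter is applied, but the small studies overestimate it much more, which matches the pattern the meta-meta-analysis describes.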
Preliminary studies often demonstrate strong results, while subsequent ones find weaker effects. Exciting new findings should be put into context with the existing literature and verified by other studies; only then can the result be trusted. Surprising or interesting results are often highly cited, yet the effects found in those studies may be overestimated: random variation can make a finding look extremely important, and follow-up studies then find a less pronounced effect.
Some traits make authors more likely to be biased. Scientists early in their careers are inexperienced and have more to gain from taking risks, so they are more likely to be biased. Researchers who had already had a paper retracted, or who had demonstrated other scientific misconduct, were more likely to exaggerate their results; they have a personal propensity for misconduct. Those who don’t collaborate much with other researchers were also more likely to overestimate the effect of their research; when many collaborators work closely together, bias is weaker because they check each other’s work. US studies report more extreme effects. The reason is not clear; it is probably due to multiple sociological factors. Small, preliminary studies showed an even stronger bias in the social sciences, and this effect has increased in recent decades.
Although peer review is constructive in that it imposes standards on scientific research, it also puts emphasis on the more interesting results. Non-significant results don’t usually make it into journals and are found more often in books, theses, and personal communications. It is hard to publish in a peer-reviewed journal without an interesting effect, though some journals specifically publish negative results. It is important to have results with small effects out there too; they provide a more realistic picture.
Authors who publish a lot and are cited a lot overall were not very biased. This result is rather reassuring, because it means that those who publish more are better scientists, in the sense that they record their results accurately and objectively. It had been thought that men might take more risks and be more likely to report extreme effects, but men’s and women’s biases did not differ. Some countries or institutes put more pressure on their researchers to publish, for example by making funding or a higher position dependent on publications. Yet the pressure to publish turned out not to matter: in fact, researchers in countries with more publication pressure were less likely to report overestimated results.
Well, our trust in science is still intact, thank goodness. On a personal scale, every researcher can do their part by recording their results accurately, even when a finding isn’t so interesting. Everyone can take small, preliminary, or highly cited studies with a grain of salt. Journals should also publish more studies with negative results so that comprehensive knowledge is available. Science isn’t just about exciting results!
Journal reference: Daniele Fanelli et al., 2016. Meta-assessment of bias in science. PNAS.