Sunday, December 18, 2016

"Scientific Utopia: II. Restructuring incentives and practices to promote truth over publishability"

"The persistence of false findings can be meliorated with strategies that make the fundamental but abstract accuracy motive - getting it right - competitive with the more tangible and concrete incentive - getting it published. We develop strategies for improving scientific practices and knowledge accumulation that account for ordinary human motivations and self-serving biases...
High demand for limited space means that authors must strive to meet all publishing criteria so that an editor will do the unusual act of accepting the manuscript. As such, success in publishing is partly a function of social savvy of knowing what is publishable, and empirical savvy in obtaining publishable results...
The research must be published to have impact. And yet, publishing is also the basis of a conflict of interest between personal interests and the objective of knowledge accumulation. The reason? Published and true are not synonyms. To the extent that publishing itself is rewarded, then it is in scientists’ personal interests to publish, regardless of whether the published findings are true (Hackett, 2005; Martin, 1992; Sovacool, 2008)...
We have enough faith in our values to believe that we would rather fail than fake our way to success. Less simple to put aside are ordinary practices that can increase the likelihood of publishing false results, particularly those practices that are common, accepted, and even appropriate in some circumstances. Because we have directional goals for success, we are likely to bring to bear motivated reasoning to justify research decisions in the name of accuracy, when they are actually in service of career advancement (Fanelli, 2010a)...
Once we obtain an unexpected result, we are likely to reconstruct our histories and perceive the outcome as something that we could have, even did, anticipate all along – converting a discovery into a confirmatory result (Fischhoff, 1977; Fischhoff & Beyth, 1975). And, even if we resist those reasoning biases in the moment, after a few months, we might simply forget the details...
Science is self-correcting (Merton, 1942, 1973). If a claim is wrong, eventually new evidence will accumulate to show that it is wrong and scientific understanding of the phenomenon will change. This is part of the promise of science – following the evidence where it leads, even if it is counter to present beliefs (see opening quotation of this article). We do believe that self-correction occurs. Our problem is with the word “eventually.” The myth of self-correction is recognition that once published there is no systemic ethic of confirming or disconfirming the validity of an effect. False effects can remain for decades, slowly fading or continuing to inspire and influence new research (Prinz et al., 2011). Further, even when it becomes known that an effect is false, retraction of the original result is very rare (Budd, Sievert, & Schultz, 1998; Redman, Yarandi, & Merz, 2008). Researchers that do not discover the corrective knowledge may continue to be influenced by the original, false result. We can agree that the truth will win eventually, but we are not content to wait...
The problem is not that false results get into the literature. The problem is that they stay in the literature. The best solutions would encourage innovation and risk-taking, but simultaneously reward confirmation of existing claims...
Early-career scientists would get useful information from a systematic review of the degree to which publication numbers and journal prestige predict hiring and promotion."
http://arxiv.org/abs/1205.4251


As a scientist in training, I found this a useful and important read. The paper is well structured, and its suggestions for restructuring incentives left me genuinely encouraged; they are well worth thinking through.
