Stanford Law professor Robert MacCoun is endorsing a new scientific approach to minimizing bias in studies: blind analysis, in which scientists don't learn the results of their research until the analysis is complete. This ensures researchers make their decisions without knowing whether those decisions will help or thwart a hypothesis, reducing confirmation bias.
In an interview with Stanford Law School, MacCoun notes that a "statistically significant" number of published results in the biological, psychological and social sciences are "too good to be true" and don't hold up under attempts at replication. He attributes this to confirmation bias, whereby researchers may conduct their research in ways that favor a preferred hypothesis.
In an essay recently published in Nature, MacCoun, alongside Saul Perlmutter, a professor of physics at the University of California, Berkeley, writes that blind analysis can minimize this bias. In a blind analysis, a computer "knows" the actual results but presents the researchers with deliberately altered data, ensuring that researchers cannot adjust their methods based on whether the emerging results conform to their hypothesis.
“Blind analysis ensures that all analytical decisions have been completed, and all programs and procedures debugged, before relevant results are revealed to the experimenter,” the essay reads. “Before unblinding, investigators should agree that they are sufficiently confident of their analysis to publish whatever the result turns out to be, without further rounds of debugging or rethinking.”
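The basic idea can be illustrated with a minimal sketch, assuming a simple additive-offset scheme; the function names and the particular blinding method below are illustrative assumptions, not the procedure described in the Nature essay. The software holds a sealed offset and hands the analyst only shifted values, which are restored once the analysis plan is frozen.

```python
import numpy as np

# Hypothetical sketch of data blinding: the analyst works only with a
# perturbed copy of the outcome variable, while the true values stay
# sealed until every analytical decision has been finalized.

rng = np.random.default_rng(seed=2015)

def blind(true_outcomes):
    """Return a blinded copy of the data plus a sealed key for unblinding."""
    offset = rng.normal(loc=0.0, scale=true_outcomes.std())  # hidden shift
    blinded = true_outcomes + offset
    key = {"offset": offset}  # held by the computer, never shown to the analyst
    return blinded, key

def unblind(blinded_outcomes, key):
    """Reveal the true data only after the analysis plan is frozen."""
    return blinded_outcomes - key["offset"]

# The analyst debugs code and chooses methods on the blinded values...
true_data = rng.normal(loc=5.0, scale=1.0, size=100)
blinded_data, key = blind(true_data)

# ...and only at the end does the sealed key restore the real results.
recovered = unblind(blinded_data, key)
assert np.allclose(recovered, true_data)
```

Because every value is shifted by the same hidden amount in this sketch, the analyst can still debug code and compare methods on the blinded data without learning whether the real result supports the hypothesis.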
Medical researchers have argued that, in the case of clinical trials, blind analysis might endanger patients, because the research team wouldn't be able to intervene if problems arose. MacCoun notes that this can be solved by having a monitoring team that knows the actual results while the data analysts remain blinded. And if researchers find that the costs outweigh the benefits, they can simply choose not to use blind analysis. While MacCoun urges funding agencies to offer grants encouraging researchers to incorporate blind analysis methods, he doesn't believe the practice should be mandated, only recommended.
“Blind analysis should be subjected to scientific testing like any other ‘treatment,’ so that costs and benefits of new approaches are measured,” MacCoun said. “But we aren’t calling for mandated blind analysis, by any means. We predict that researchers who choose blind analysis will find that their work is seen as more credible.”
Contact Jeremy Quach at jquach ‘at’ stanford.edu.