6 June 2011

Bullshit Fields (11)

I've read this paper, Ioannidis's "Why Most Published Research Findings Are False," before and have gone back to it recently. It makes a plausible case for the seemingly counterintuitive argument that "most research findings are false" in science. A combination of factors, including researcher bias, small sample sizes, the role of pure chance, and the fact that the effects measured are often very small, makes the majority of scientific findings very questionable.
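
If I understand it correctly, the heart of the argument is simple arithmetic: how often a "significant" finding is actually true depends on the pre-study odds that the hypothesis was right, on the study's power, and on the significance threshold. Here is a minimal sketch of that calculation in Python; the function name and the numbers are my own illustrative choices, not Ioannidis's.

```python
# Post-study probability that a nominally "positive" finding is true,
# given pre-study odds R of a true relationship, significance level alpha,
# and statistical power (1 - beta). Illustrative numbers only.

def ppv(prior_odds, alpha, power):
    true_positives = power * prior_odds   # true relationships correctly detected
    false_positives = alpha               # null relationships that pass the test anyway
    return true_positives / (true_positives + false_positives)

# Speculative, underpowered field: 1 true hypothesis for every 10 false ones.
print(ppv(prior_odds=0.1, alpha=0.05, power=0.2))   # about 0.29 -- most "findings" are false
# Well-powered test of a fairly plausible hypothesis.
print(ppv(prior_odds=0.5, alpha=0.05, power=0.8))   # about 0.89
```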

Let's take a weaker argument out of this: not that all or even most science is bullshit, but that if the media reports a story about scientific findings, you have to take it cum grano salis. Part of the argument is that the hotter the field, in terms of scientific and media interest, the greater the likelihood of falsity. Ioannidis makes the startling claim that "Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias":
For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.
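
To convince myself I understood the "net bias" point, here is a toy simulation of exactly that kind of null field. It is my own construction, not something from the paper: all 60 nutrients truly do nothing, but give each research team a little analytic flexibility and let only the nominally significant elevated risks get written up, and the "literature" still reports relative risks comfortably above 1.

```python
# Toy version of the "null field" in the quoted passage. Every nutrient has
# a true relative risk of exactly 1.0; the sample sizes, baseline risk, and
# amount of analytic flexibility below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def analyze_once(n=2000, risk=0.05):
    """One analysis comparing upper vs. lower intake tertile of a useless nutrient."""
    hi = rng.binomial(n, risk)              # tumor cases in the upper tertile
    lo = rng.binomial(n, risk)              # tumor cases in the lower tertile
    rr = (hi / n) / (lo / n)                # estimated relative risk
    p_pool = (hi + lo) / (2 * n)
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
    z = (hi - lo) / (n * se)                # crude two-proportion z statistic
    return rr, z

published = []
for nutrient in range(60):                  # 60 nutrients, as in the quote
    # "Bias": each team tries ten adjustments/subgroups and keeps the most
    # impressive relative risk it finds.
    rr, z = max(analyze_once() for _ in range(10))
    if z > 1.96:                            # only nominally significant risks get published
        published.append(rr)

print(f"{len(published)} of 60 do-nothing nutrients 'linked' to the tumor")
print(f"mean claimed relative risk: {np.mean(published):.2f}")
```

Nothing in the setup distinguishes the "linked" nutrients from the rest; the claimed effect sizes are just the noise that survived the filter, which is the net bias Ioannidis is talking about.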

If you read about a study saying cell phones cause cancer, you should look into it a bit more before just believing it. I can't follow every technical detail of the paper, nor do I have the statistical chops to say whether it is right or wrong. Of course, if it turns out to be wrong, that would just be another example of a research finding being false!

***

One problem, of course, is that "negative results" are harder to get published. For example, the front page of my university's web site has stories about people at KU finding a new species of lizard, and about school bullying. Apparently there is a correlation between visits to the school nurse and being the victim or perpetrator of bullying. Why are these news stories? If a team of researchers found no correlation between bullying and visits to the nurse, there would be nothing to report. It would be surprising, because we would expect bruised knuckles and bloody noses, but it wouldn't be a research finding in and of itself. Or suppose the team of scientists found that a lizard thought to be a separate and new species was not a new species after all, just the same old lizard we always knew about. That's not too exciting either.

A correlation is not a causation, but a lack of correlation is almost nothing at all, unless it is a definitive reversal of a super well-established correlation. The main bias in science is toward proving positive results, not negative ones.

1 comment:

Andrew Shields said...

This is a point that Ben Goldacre keeps returning to on his Bad Science site. He argues that all pharmaceutical and medical studies should be registered in a database when they begin. This would mean that negative results would still be recorded, even if they did not get published.