
Bibliographic Details
Main Author: Andrew Gelman
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Published: 2013
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.362.8466
http://www.stat.columbia.edu/~gelman/research/published/murtaugh2.pdf
Description
Summary: I agree with Murtaugh (and also with Greenland and Poole 2013, who make similar points from a Bayesian perspective) that with simple inference for linear models, where p-values are mathematically equivalent to confidence intervals and other data reductions, there should be no strong reason to prefer one method to another. In that sense, my problem is not with p-values but with how they are used and interpreted. Based on my own readings and experiences (not in ecology but in a range of social and environmental sciences), I feel that p-values and hypothesis testing have led to much scientific confusion, with researchers treating non-significant results as zero and significant results as real.

In many settings I have found estimation, rather than testing, to be more direct. For example, when modeling home radon levels (Lin et al. 1999), we constructed our inferences by combining direct radon measurements with geographic and geological information. This approach of modeling and estimation worked better than a series of hypothesis tests that would, for example, reject the assumption that radon levels are independent of geologic characteristics. I have, on occasion, successfully used p-values and hypothesis testing in my own work, and in other settings I have reported p-values (or, equivalently, confidence intervals) in ways that I believe have done no harm, as a way to convey uncertainty about an estimate (Gelman 2013).

In many other cases, however, I believe that null hypothesis testing has led to the publication of serious mistakes, perhaps most notoriously in the paper by Bem (2011), who claimed evidence for extra-sensory perception (ESP) based on a series of statistically significant results. The ESP example was widely recognized to indicate a crisis in psychology research, not because of the substance of Bem's implausible and unreplicated claims, but …
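The p-value/confidence-interval equivalence the summary opens with can be made concrete. Below is a minimal sketch (not from the paper; the estimate, standard error, and null values are made up for illustration) showing that, for a standard normal-theory estimate, the two-sided p-value for a null value theta0 drops below alpha exactly when theta0 falls outside the (1 - alpha) confidence interval.

    # Sketch of the p-value / confidence-interval duality for a
    # normal-theory estimate: p < alpha iff theta0 is outside the
    # (1 - alpha) confidence interval. Numbers are hypothetical.
    from scipy import stats

    est, se = 1.8, 1.0                      # hypothetical estimate and standard error
    alpha = 0.05
    z = stats.norm.ppf(1 - alpha / 2)       # critical value, ~1.96 for alpha = 0.05
    ci = (est - z * se, est + z * se)       # 95% confidence interval

    for theta0 in (0.0, 1.0, 4.0):          # candidate null values
        # two-sided p-value for H0: theta = theta0
        p = 2 * stats.norm.sf(abs(est - theta0) / se)
        inside = ci[0] <= theta0 <= ci[1]
        assert (p >= alpha) == inside       # the two summaries always agree
        print(f"theta0={theta0:.1f}  p={p:.3f}  inside 95% CI: {inside}")

Running this prints p = 0.072 and 0.424 for the null values inside the interval and p = 0.028 for the one outside it, illustrating why Gelman treats the two reports as carrying the same information, with the disagreement lying only in how they are used and interpreted.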