Putting P Values in Their Place

Although I am not a statistician, I find something very appealing about mathematics and statistics, and I am pleased when I find a source that helps me understand some of the concepts involved. One of these sources intersects with my obsession with politics: Nate Silver’s website fivethirtyeight.com. Yesterday, during a scan of fivethirtyeight’s recent posts, this one by Christie Aschwanden caught my eye: “Statisticians Found One Thing They Can Agree On: It’s Time to Stop Misusing P-Values.”

P values and data in general are frequently on the minds of manuscript editors at the JAMA Network. Instead of just making sure that statistical significance is defined and P values are provided, we always ask for odds ratios or 95% confidence intervals to go with them. P values are just not enough anymore, and Aschwanden’s article was really useful in helping me understand why these additional data are needed (it also made me feel better about not fully understanding the definition of a P value; it turns out I’m not alone, according to another fivethirtyeight article, “Not Even Scientists Can Easily Explain P-Values”). One of the problems with relying on P values alone is that they are used as a “litmus test” for publication: findings with low P values but no contextual data are published, while important studies with high P values are not, and this has real scientific and medical consequences. These articles explain why P values on their own can be a cause for concern.
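To see why a P value by itself is not enough, here is a minimal sketch of my own (made-up numbers, not drawn from any of the articles): with a large enough sample, even a clinically trivial difference can come out “statistically significant,” and it is the confidence interval, not the P value, that shows how small the effect really is.

```python
# Illustrative sketch only: a tiny, clinically negligible effect can still
# yield a very small P value when the sample is huge. The 95% confidence
# interval makes the small effect size visible; the P value alone does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                             # very large sample per group
control = rng.normal(loc=120.0, scale=15.0, size=n)     # e.g., systolic blood pressure
treated = rng.normal(loc=119.5, scale=15.0, size=n)     # true effect: only -0.5 mm Hg

t_stat, p_value = stats.ttest_ind(treated, control)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"P value: {p_value:.2e}")                         # likely far below .05
print(f"Difference: {diff:.2f} mm Hg (95% CI, {ci_low:.2f} to {ci_high:.2f})")
# The interval sits close to zero, so the "significant" result is an effect
# too small to matter clinically, which the P value alone would never reveal.
```

The same logic runs in reverse: a modest study of a genuinely useful treatment can produce a wide confidence interval and a P value above .05, which is exactly the kind of finding the “litmus test” keeps out of the literature.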

And then there was even more information about statistical significance to think about. A colleague shared a link to a story on vox.com by Julia Belluz: “An Unhealthy Obsession With P-Values Is Ruining Science.” This article discussed a recent report in JAMA by Chavalarias et al “that should make any nerd think twice about p-values.” The recent “epidemic” of statistical significance means that “as p-values have become more popular, they’ve also become more meaningless.” Belluz also provides a useful example of what a P value will and will not tell researchers in, say, a drug study, and wraps up with highlights of the American Statistical Association’s guide to using P values.—Karen Boyd