Abstract

There are two distinct definitions of “P‐value” for evaluating a proposed hypothesis or model for the process generating an observed dataset. The original definition starts with a measure of the divergence of the dataset from what was expected under the model, such as a sum of squares or a deviance statistic. A P‐value is then the ordinal location of the measure in a reference distribution computed from the model and the data, and is treated as a unit‐scaled index of compatibility between the data and the model. In the other definition, a P‐value is a random variable on the unit interval whose realizations can be compared to a cutoff α to generate a decision rule with known error rates under the model and specific alternatives. It is commonly assumed that realizations of such decision P‐values always correspond to divergence P‐values. But this need not be so: Decision P‐values can violate intuitive single‐sample coherence criteria where divergence P‐values do not. It is thus argued that divergence and decision P‐values should be carefully distinguished in teaching, and that divergence P‐values are the relevant choice when the analysis goal is to summarize evidence rather than implement a decision rule.
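The divergence definition above can be illustrated with a minimal Monte Carlo sketch. This is not code from the paper; the Normal model, the sum-of-squares statistic, and all names here are illustrative assumptions. The P‐value is computed as the ordinal location of the observed divergence statistic within a reference distribution simulated from the hypothesized model:

```python
import random

random.seed(0)

def divergence_stat(data, mu):
    # Divergence measure: sum of squared deviations from the
    # hypothesized mean (one example of the class described above).
    return sum((x - mu) ** 2 for x in data)

def divergence_p_value(data, mu, sigma, n_sim=10_000):
    # Monte Carlo divergence P-value: the proportion of datasets
    # simulated under the hypothesized model (here Normal(mu, sigma),
    # an assumed example) whose divergence is at least as large as
    # the observed one -- i.e., the ordinal location of the observed
    # statistic in the simulated reference distribution.
    observed = divergence_stat(data, mu)
    n = len(data)
    hits = 0
    for _ in range(n_sim):
        sim = [random.gauss(mu, sigma) for _ in range(n)]
        if divergence_stat(sim, mu) >= observed:
            hits += 1
    return hits / n_sim

# Hypothetical sample; hypothesized model: Normal(mean=0, sd=1)
data = [0.2, -0.5, 1.7, 0.3, -2.1, 0.9]
p = divergence_p_value(data, mu=0.0, sigma=1.0)
print(round(p, 3))
```

Read this way, `p` is a unit‐scaled compatibility index between the data and the model, not a decision: comparing it to a cutoff α would be the separate, decision‐rule usage the abstract distinguishes.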
