Abstract
P values near 1 are sometimes viewed as unimportant. In fact, P values near 1 should raise red flags cautioning data analysts that something may be wrong with their model. This article examines reasons why F statistics might get small in general linear models. One-way and two-way analysis of variance models are used to illustrate the general ideas. The article focuses on the intuitive motivation behind F tests based on second moment arguments. In particular, it argues that when the mean structure of the model being tested is correct, small F statistics can be caused by not accounting for negatively correlated data or heteroscedasticity; alternatively, they can be caused by an unsuspected lack of fit. It is also demonstrated that large F statistics can be generated by not accounting for positively correlated data or heteroscedasticity.
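The abstract's central claim, that unmodeled negative within-group correlation deflates the F statistic while positive correlation inflates it, can be illustrated with a small simulation. The sketch below is not from the article; it is a minimal Monte Carlo illustration under assumed settings (a one-way ANOVA with a true null, compound-symmetry correlation `rho` within each group, and arbitrary group sizes) showing the direction of the effect.

```python
# Minimal Monte Carlo sketch (illustrative, not from the article): how unmodeled
# within-group correlation shifts the one-way ANOVA F statistic even though the
# null hypothesis of equal group means is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def mean_F(rho, n_groups=4, n_per_group=5, n_sims=2000):
    """Average F statistic when observations within each group share correlation rho."""
    # Compound-symmetry covariance for one group (valid for rho >= -1/(n_per_group - 1)).
    cov = (1 - rho) * np.eye(n_per_group) + rho * np.ones((n_per_group, n_per_group))
    fs = []
    for _ in range(n_sims):
        groups = [rng.multivariate_normal(np.zeros(n_per_group), cov)
                  for _ in range(n_groups)]
        fs.append(stats.f_oneway(*groups).statistic)
    return np.mean(fs)

for rho in (-0.2, 0.0, 0.2):
    print(f"rho = {rho:+.1f}   mean F = {mean_F(rho):.2f}")
# Expected pattern: mean F near 1 when rho = 0, well below 1 for negative rho
# (P values pushed toward 1), and well above 1 for positive rho.
```

The direction follows from a second moment argument of the kind the article describes: under compound symmetry the expected mean square between groups is $\sigma^2[1 + (n-1)\rho]$ while the expected mean square within is $\sigma^2(1-\rho)$, so negative $\rho$ shrinks the numerator relative to the denominator and positive $\rho$ does the reverse.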