Abstract

Instead of relying on null-hypothesis significance testing (NHST), researchers are consistently advised to use effect sizes (ESs) and confidence intervals (CIs) to convey research findings. However, typical ES measures for most linear models (e.g., multiple regression) assume data normality, a condition that is often violated in behavioral research. This may lead to inaccurate interpretation of ESs. In multiple regression models, researchers commonly report Cohen's f² and R², but no study has systematically evaluated their robustness in practice. Thus, this Monte Carlo simulation study evaluates the robustness of f², R², and the associated CIs across manipulated levels of sample size, magnitude of ES, number of predictors, and data violations (i.e., heavy-tailed, skewed, contaminated, lognormal, and heteroscedastic error distributions). This study offers guidelines on how robust these ESs are so that researchers can report the most appropriate ES in their studies.
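For illustration only (this sketch is not the study's actual simulation design), the relationship Cohen's f² = R² / (1 − R²) can be estimated by Monte Carlo for a multiple regression with non-normal errors. All parameter values, error-distribution choices, and function names below are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_f2(n=100, p=3, beta=0.3, error="normal", reps=2000):
    """Illustrative Monte Carlo estimate of R^2 and Cohen's f^2
    for a regression with p predictors and a chosen error distribution."""
    r2s = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        if error == "normal":
            e = rng.standard_normal(n)
        elif error == "lognormal":      # skewed errors, centered at zero
            e = rng.lognormal(size=n) - np.exp(0.5)
        elif error == "heavy":          # heavy-tailed errors (t with 3 df)
            e = rng.standard_t(3, size=n)
        else:
            raise ValueError(error)
        y = X @ np.full(p, beta) + e
        # OLS fit with intercept, then sample R^2
        Z = np.column_stack([np.ones(n), X])
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ b
        r2s.append(1 - resid @ resid / np.sum((y - y.mean()) ** 2))
    r2 = np.mean(r2s)
    return r2, r2 / (1 - r2)            # mean R^2 and implied f^2

print(simulate_f2(error="lognormal"))
```

Comparing the output across error settings (e.g., "normal" vs. "lognormal" vs. "heavy") gives a rough sense of how sample estimates of R² and f² shift when normality is violated, which is the kind of question the study addresses systematically.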
