Abstract

Currently, most empirical management, marketing, and psychology articles in the leading journals of these disciplines are examples of bad science practice. Bad science practice includes mismatching case (actor) focused theory with variable-based data analysis and null hypothesis significance tests (NHST) of directional predictions (i.e., symmetric models proposing that increases in each of several independent X's associate with increases in a dependent Y). Good science practice includes matching case-focused theory with case-focused data analytic tools and using somewhat precise outcome tests (SPOT) of asymmetric models. Good science practice achieves the requisite variety necessary for deep explanation, description, and accurate prediction. Based on a thorough review of the relevant literature, Hubbard (2016) concludes that reporting NHST results (e.g., that observed standardized partial regression betas for the X's differ from zero, or that the difference between two means differs from zero) is an example of corrupt research. Hubbard (2017) expresses disappointment over the tepid response to his book. The pervasive teaching and use of NHST is one ingredient explaining the indifference: "I can't change just because it's [NHST] wrong." The fear of submission rejection is another reason for rejecting asymmetric modeling and SPOT. Reporting findings from both bad and good science practices may be necessary until asymmetric modeling and SPOT receive wider acceptance than they hold presently.
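The contrast the abstract draws can be sketched in code. The example below is an illustrative toy, not the authors' method: it generates hypothetical data in which high X is usually sufficient for high Y but not necessary (an asymmetric relation), then computes both a symmetric NHST-style summary (a correlation and its test statistic for H0: r = 0) and a SPOT-style asymmetric check (the share of high-X cases that are also high-Y, akin to a consistency index). All variable names, cutoffs, and the 0.75 quantile threshold are assumptions chosen for illustration.

```python
import math
import random

random.seed(1)

# Hypothetical data: for most cases, Y tracks X (high X tends to be
# sufficient for high Y); for the rest, Y is unrelated to X, so high Y
# can occur without high X (X is not necessary). This asymmetry is what
# a single symmetric correlation blurs together.
n = 200
x = [random.random() for _ in range(n)]
y = [0.9 * xi + 0.4 * random.random() if random.random() < 0.7
     else random.random()
     for xi in x]

# --- Symmetric, NHST-style summary: does the correlation differ from zero?
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / math.sqrt(sxx * syy)
t = r * math.sqrt((n - 2) / (1 - r ** 2))  # test statistic for H0: r == 0

# --- Asymmetric, SPOT-style check: among cases high on X, what share
# are also high on Y? (Here "high" means top quartile, an assumption.)
x_cut = sorted(x)[int(0.75 * n)]
y_cut = sorted(y)[int(0.75 * n)]
high_x = [(a, b) for a, b in zip(x, y) if a >= x_cut]
consistency = sum(1 for _, b in high_x if b >= y_cut) / len(high_x)

print(f"NHST-style: r = {r:.2f}, t = {t:.1f} (only asks: is r nonzero?)")
print(f"SPOT-style: {consistency:.2f} of high-X cases are also high-Y")
```

The point of the sketch is that the directional NHST question ("does r differ from zero?") is answered identically whether X is sufficient, necessary, or both for high Y, whereas the case-focused consistency ratio speaks directly to the asymmetric claim "high X is usually sufficient for high Y."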
