Abstract

This paper presents a brief review of interval-based hypothesis testing, widely used in biostatistics, medical science, and psychology, namely tests for minimum effect, equivalence, and non-inferiority. We present the methods in the contexts of a one-sample t-test and a test for linear restrictions in a regression. We present applications in testing for market efficiency, the validity of asset-pricing models, and the persistence of economic time series. We argue that, from the point of view of economics and finance, interval-based hypothesis testing provides more sensible inferential outcomes than those based on a point null hypothesis. We propose that interval-based tests be routinely employed in empirical research in business as an alternative to point null hypothesis testing, especially in the new era of big data.
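The one-sample equivalence test described above is commonly implemented as two one-sided tests (TOST): the point null and alternative are swapped, so that rejecting the interval null establishes that the mean lies inside an interval of economically negligible values. A minimal sketch, assuming researcher-chosen equivalence bounds and simulated data (the function name and all numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, lower, upper, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of a mean to an interval.

    H0: mu <= lower or mu >= upper  (mean outside the negligibility interval)
    H1: lower < mu < upper          (mean economically equivalent to zero)
    """
    n = len(x)
    xbar = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    # One-sided test 1: H0 mu <= lower vs H1 mu > lower
    t1 = (xbar - lower) / se
    p1 = stats.t.sf(t1, df=n - 1)
    # One-sided test 2: H0 mu >= upper vs H1 mu < upper
    t2 = (xbar - upper) / se
    p2 = stats.t.cdf(t2, df=n - 1)
    # Equivalence is concluded only if BOTH one-sided tests reject,
    # so the overall p-value is the larger of the two.
    p = max(p1, p2)
    return p, p < alpha

# Illustrative data: a mean very close to zero relative to the bounds
rng = np.random.default_rng(0)
x = rng.normal(loc=0.01, scale=0.1, size=200)
p, equivalent = tost_one_sample(x, lower=-0.05, upper=0.05)
```

Note the reversal of roles relative to the usual t-test: failing to reject a point null of zero is weak evidence of negligibility, whereas rejecting the TOST interval null is direct evidence for it.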

Highlights

  • The paradigm of point null hypothesis testing has been almost exclusively adopted in all areas of empirical research in business, including accounting, economics, finance, management, and marketing. The procedure involves forming a sharp null hypothesis and using the “p-value less than α” criterion to reject or fail to reject the null hypothesis or, in the Neyman–Pearson tradition, determining whether the test statistic lies in a rejection region defined by the test size α.

  • Model validation or specification tests are often performed within the paradigm of point null hypothesis testing, in which the null hypothesis is that the model is valid and the alternative hypothesis is that it is not.

  • With this critical value being much larger than the F-statistic of 3.18, the above interval null hypothesis of minimum effect cannot be rejected at the 5% level, providing evidence that the seasonal affective disorder (SAD) economic cycle is economically negligible in the U.S. stock market.
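The minimum-effect test in the last highlight compares the usual F-statistic to a critical value from a noncentral F distribution, whose noncentrality parameter encodes the largest effect deemed economically negligible. A hedged sketch of the mechanics, in which the degrees of freedom and noncentrality bound are illustrative assumptions rather than the paper's actual SAD regression values:

```python
from scipy import stats

# Minimum-effect F test (sketch).
# H0: the effect is economically negligible (noncentrality <= lam0)
# H1: the effect exceeds the negligibility threshold
dfn, dfd = 2, 120   # assumed numerator/denominator degrees of freedom
lam0 = 20.0         # assumed noncentrality implied by the negligibility bound
alpha = 0.05

# Upper-tail critical value of the noncentral F at the boundary of H0.
# It exceeds the central-F critical value, reflecting that small but
# nonzero effects are tolerated under the interval null.
crit = stats.ncf.ppf(1 - alpha, dfn, dfd, lam0)

F_obs = 3.18        # the F-statistic quoted in the text
reject = F_obs > crit
```

Because the noncentral critical value here is far above 3.18, the interval null of a negligible effect is not rejected, matching the conclusion quoted above; a central-F test of the point null could still reject, which is exactly the discrepancy interval-based testing is meant to expose.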



Introduction

The paradigm of point null hypothesis testing has been almost exclusively adopted in all areas of empirical research in business, including accounting, economics, finance, management, and marketing. Against this background, Rao and Lovric (2016) argue that “in the 21st century, statisticians will deal with large data sets and complex questions, it is clear that the current point-null paradigm is inadequate” and that the “generation of statisticians must construct new tools for massive data sets since the current ones are severely limited” (see van der Laan and Rose 2010). They call for a paradigm shift in statistical hypothesis testing and suggest the Hodges and Lehmann (1954) paradigm as a possible alternative, arguing that this will substantially improve the credibility of scientific research based on statistical testing.

Current Paradigm and Its Deficiencies
A Simple t-Test for a Point Null Hypothesis
Shortcomings of the p-Value Criterion
Zero-Probability Paradox
Problems and Consequences
Test for Minimum Effect
Test for Equivalence
Test for Non-Inferiority
Interval Tests in the Linear Regression Model
Bootstrap Implementation
Model Equivalence Test
Equivalence Test for Model Validation
Choosing the Limits of Economic Significance
A SAD Stock Market Cycle
Empirical Validity of an Asset-Pricing Model
GRS Test
LR Test
Testing for Persistence of a Time Series
Findings
Conclusions
