A recent article in this journal by Tabak (2006) highlighted a potentially serious source of bias that can arise when multiple statistical tests of a damages theory are performed and even one test rejecting the null hypothesis is regarded as supporting the damages theory. Repeated testing will eventually produce a "false discovery," that is, a rejection of the null hypothesis in favor of the alternative hypothesis when the null hypothesis is in fact true, which statisticians refer to as type I error. Consequently, performing multiple tests without adjusting the critical value can be problematic because it can lead to accepting statistical evidence as reliable support for rejecting the null hypothesis when it is not. Tabak (2006) recommends making the Sidak (1968, 1971) multiple-comparison adjustment to the standard statistical t-test to correct for the false-discovery bias inherent in multiple-comparison testing. In particular, he recommends making this adjustment when performing 10b-5 securities fraud event studies in which more than one corrective disclosure date is involved. This article clarifies the circumstances in which a multiple-comparison adjustment is appropriate and explains why the correction is normally not needed in securities fraud event-study testing. More generally, I explain why it is not required when each of several tests is performed and its results are reported separately, as, for example, where the objective is simply to test the statistical significance of the abnormal stock return on each day on which a new and distinct curative disclosure occurred. I show that the Sidak multiple-comparison adjustment is nearly as stringent as the classical Bonferroni procedure (Simes, 1986), which can increase the risk of type II error. This article discusses a more powerful alternative to the Sidak adjustment due to Benjamini and Hochberg (1995), which directly corrects for the false-discovery bias in multiple-comparison testing while reducing the risk of type II error; a numerical sketch comparing these procedures appears below.

II. Application of Multiple-Comparison Adjustments to Securities Fraud Event Studies

Multiple-comparison false-discovery bias can arise when (a) several statistical tests are performed on subsets of the same larger data set in an effort to
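To make the comparison among the procedures discussed above concrete, the following minimal sketch contrasts per-comparison testing (each test reported separately, as in the event-study setting the article describes), the Sidak and Bonferroni critical levels, and the Benjamini-Hochberg step-up rule. The p-values are hypothetical and chosen purely for illustration; they are not drawn from any actual event study, and the code is an assumption-laden sketch rather than any author's implementation.

```python
# Hypothetical two-sided p-values for m = 5 event-day t-tests (illustrative only).
alpha = 0.05
p_values = [0.003, 0.012, 0.021, 0.040, 0.260]
m = len(p_values)

# Per-comparison testing: each disclosure date is tested and reported separately,
# so each test is evaluated at the unadjusted level alpha.
per_test = [p <= alpha for p in p_values]

# Sidak adjustment: controls the familywise error rate under independence.
alpha_sidak = 1 - (1 - alpha) ** (1 / m)

# Bonferroni adjustment: slightly more stringent than Sidak for the same m.
alpha_bonf = alpha / m

# Benjamini-Hochberg step-up procedure: controls the false discovery rate.
# Sort p-values ascending; reject H_(1), ..., H_(k) for the largest k
# satisfying p_(k) <= (k / m) * alpha.
ranked = sorted(enumerate(p_values), key=lambda t: t[1])
k_max = 0
for rank, (_, p) in enumerate(ranked, start=1):
    if p <= rank / m * alpha:
        k_max = rank
bh_reject = [False] * m
for rank, (i, _) in enumerate(ranked, start=1):
    if rank <= k_max:
        bh_reject[i] = True

print(f"Sidak critical level:      {alpha_sidak:.5f}")  # 0.01021 for m = 5
print(f"Bonferroni critical level: {alpha_bonf:.5f}")   # 0.01000 for m = 5
print(f"Per-test rejections:   {per_test}")
print(f"Sidak rejections:      {[p <= alpha_sidak for p in p_values]}")
print(f"Bonferroni rejections: {[p <= alpha_bonf for p in p_values]}")
print(f"BH rejections:         {bh_reject}")
```

Under these illustrative numbers, the Sidak level (0.01021) is nearly identical to the Bonferroni level (0.01000), consistent with the claim above that the two are nearly equally stringent: each rejects only the smallest p-value. The Benjamini-Hochberg rule, by contrast, rejects four of the five hypotheses, illustrating how directly controlling the false discovery rate can reduce the risk of type II error relative to familywise-error adjustments.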