Abstract

Motivated by the problem of detecting the number of signals, this paper presents a systematic empirical study of the model selection performance of several classical criteria and recently developed methods, including Akaike's information criterion (AIC), Schwarz's Bayesian information criterion (BIC), Bozdogan's consistent AIC (CAIC), the Hannan-Quinn (HQ) information criterion, Minka's (MK) principal component analysis (PCA) criterion, Kritchman & Nadler's hypothesis tests (KN), Perry & Wolfe's minimax rank estimation thresholding algorithm (MM), and Bayesian Ying-Yang (BYY) harmony learning, under varying signal-to-noise ratio (SNR) and training sample size N. A family of model selection indifference curves is defined by the contour lines of model selection accuracy, which makes it possible to examine the joint effect of N and SNR rather than, as is usual in the literature, the effect of one with the other held fixed. The indifference curves visually reveal that the relative advantages of all methods emerge most clearly within a region of moderate N and SNR, and the importance of studying this region is further confirmed by an alternative reference criterion that maximizes the testing likelihood. Extensive simulations show that AIC and BYY harmony learning, as well as MK, KN, and MM, are more robust than the other methods against decreasing N and SNR, and that BYY is superior for small sample sizes.
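In this signal-number-detection setting, criteria such as AIC and BIC are conventionally evaluated on the eigenvalues of the sample covariance matrix, following the classical Wax-Kailath formulation. The sketch below is a minimal illustration of that formulation under the assumption N > p, not the paper's actual experimental code; the function name, its interface, and the choice of NumPy are our own.

```python
import numpy as np

def detect_num_signals(X, criterion="AIC"):
    """Estimate the number of signals from an N x p data matrix X using
    the Wax-Kailath eigenvalue form of AIC or an MDL/BIC-style penalty.
    Illustrative sketch only (assumes N > p); not the paper's code."""
    N, p = X.shape
    # Eigenvalues of the sample covariance, sorted in decreasing order.
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    scores = []
    for k in range(p):                        # candidate number of signals
        tail = eigvals[k:]                    # the p - k "noise" eigenvalues
        arith = tail.mean()                   # arithmetic mean
        geom = np.exp(np.mean(np.log(tail)))  # geometric mean
        # Log-likelihood-ratio term testing equality of the noise eigenvalues.
        loglik = N * (p - k) * np.log(arith / geom)
        n_params = k * (2 * p - k)            # free parameters at rank k
        if criterion == "AIC":
            scores.append(2 * loglik + 2 * n_params)
        else:                                 # MDL / BIC-style penalty
            scores.append(loglik + 0.5 * n_params * np.log(N))
    return int(np.argmin(scores))
```

Sweeping a grid of (N, SNR) pairs, simulating many datasets at each grid point, and recording the fraction of runs in which a criterion recovers the true number of signals yields the accuracy surface whose contour lines form the indifference curves studied in the paper.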
