Abstract
Model selection is an omnipresent problem in signal processing applications. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are the most commonly used solutions to this problem. These criteria have been found to perform satisfactorily in many cases and have had a dominant role in the model selection literature since their introduction several decades ago, despite numerous attempts to dethrone them. Model selection can be viewed as a multiple hypothesis testing problem. This simple observation makes it possible to use for model selection a number of powerful hypothesis testing procedures that control the false discovery rate (FDR) or the familywise error rate (FER). This is precisely what we do in this paper: following the lead of the proposers of the said procedures, we introduce two general rules for model selection based on FDR and FER, respectively. We show in a numerical performance study that the FDR and FER rules are serious competitors of AIC and BIC, with significant performance gains in the more demanding cases, at essentially the same computational effort.
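The abstract does not spell out the proposed rules, but the underlying idea of casting model selection as multiple hypothesis testing can be illustrated with a standard FDR-controlling procedure. The sketch below is an illustration only, not the authors' rule: it applies the Benjamini-Hochberg step-up procedure to p-values obtained from sequential F-tests on a nested linear regression model. The function names (`benjamini_hochberg`, `select_order`), the F-test construction, and the level `q` are assumptions made for this example.

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: boolean mask of hypotheses
    rejected while controlling the false discovery rate at level q."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k_max = np.max(np.nonzero(below)[0])   # largest sorted index passing its threshold
        rejected[order[: k_max + 1]] = True    # reject all hypotheses up to that index
    return rejected

def select_order(y, X_full, q=0.05):
    """Illustrative FDR-style order selection for a nested linear model
    (assumes n > m). The k-th p-value comes from an F-test of regressor k
    added on top of the first k-1 regressors; the selected order is the
    largest k whose hypothesis is rejected."""
    n, m = X_full.shape
    p_values = []
    rss_prev = np.sum(y ** 2)                  # order 0: no regressors
    for k in range(1, m + 1):
        Xk = X_full[:, :k]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        rss_k = np.sum((y - Xk @ beta) ** 2)
        # F-statistic for one added regressor: F ~ F(1, n - k) under the null
        f_stat = (rss_prev - rss_k) / (rss_k / (n - k))
        p_values.append(stats.f.sf(f_stat, 1, n - k))
        rss_prev = rss_k
    rejected = benjamini_hochberg(p_values, q)
    return int(np.max(np.nonzero(rejected)[0]) + 1) if rejected.any() else 0

# Usage on synthetic data with true order 3
rng = np.random.default_rng(0)
n, m_true, m_max = 200, 3, 10
X = rng.standard_normal((n, m_max))
y = X[:, :m_true] @ np.array([2.0, -1.5, 1.0]) + rng.standard_normal(n)
print(select_order(y, X, q=0.05))  # typically prints 3
```

An FER (familywise error rate) analogue would simply swap the Benjamini-Hochberg step for a procedure such as Holm's step-down, which uses the stricter thresholds q/(m - k + 1); the paper's own rules may differ in how the hypotheses and test statistics are defined.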