Abstract
Model selection is an important problem in mining information from large data bases. For example, in selecting a regression model, there may be J candidate independent variables, giving 2^J possible subsets of variables from which to choose. Information criteria such as Akaike's (1973) Information Criterion (AIC) and Bozdogan's (1988, 1990, 1994, 2000, 2004) Information Measure of Complexity (ICOMP) criterion provide a way to define the 'best' model by estimating the difference between a given model and the true model. In this paper, we introduce a new exact implicit enumeration (IE) algorithm to identify the subset of variables that minimises the information criterion. The IE algorithm uses efficient bounding strategies for the nonlinear objective function of the model selection problem. In computational tests, the IE algorithm outperforms the existing exact algorithms from the literature. The IE algorithm also has the advantage of being the only exact algorithm that can be used with all of the existing information criteria, including ICOMP. ICOMP has the advantage that it explicitly takes into account the effect of the covariance of the variables on parameter estimation in the model selection process, and it also makes no assumption that the parameter estimates are unbiased.
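To make the search space concrete, the following sketch enumerates all 2^J non-empty predictor subsets exhaustively and scores each by the Gaussian AIC for an ordinary least-squares fit. This brute-force loop is only an illustration of the problem the paper's IE algorithm solves via bounding, not the IE algorithm itself; the function names are illustrative and only NumPy is assumed.

```python
from itertools import combinations
import numpy as np

def aic(y, X_sub):
    """Gaussian AIC (up to an additive constant) for an OLS fit on X_sub."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    rss = np.sum((y - X_sub @ beta) ** 2)
    k = X_sub.shape[1]  # number of estimated coefficients
    return n * np.log(rss / n) + 2 * k

def best_subset(y, X):
    """Brute-force search over all 2^J - 1 non-empty column subsets of X."""
    J = X.shape[1]
    best_score, best_cols = np.inf, ()
    for r in range(1, J + 1):
        for cols in combinations(range(J), r):
            score = aic(y, X[:, cols])
            if score < best_score:
                best_score, best_cols = score, cols
    return best_score, best_cols
```

Because the loop visits every subset, its cost grows as 2^J, which is exactly why exact methods with effective bounds, such as the IE algorithm, matter for large J.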