Abstract
Recently, a new model selection criterion called the subspace information criterion (SIC) was proposed. SIC gives an unbiased estimate of the generalization error with finite samples. In this paper, we theoretically and experimentally evaluate the effectiveness of SIC in comparison with existing model selection techniques. The theoretical evaluation covers comparisons of the generalization measures, approximation methods, and restrictions on model candidates and learning methods. Simulations show that SIC outperforms existing techniques, especially when the number of training examples is small and the noise variance is large.