Abstract

In recent years, biomedicine has been faced with difficult high-throughput small-sample classification problems. In such settings, classifier error estimation becomes a critical issue because training and testing must be done on the same data. A recently proposed error estimator places the problem in a signal estimation framework in the presence of uncertainty, permitting a rigorous solution that is optimal in a minimum-mean-square-error sense. The uncertainty in this model is relative to the parameters of the feature-label distributions, resulting in a Bayesian approach to error estimation. Closed-form solutions are available for two important problems: discrete classification with Dirichlet priors and linear classification of Gaussian distributions with normal-inverse-Wishart priors. In this work, Part I of a two-part study, we introduce the theoretical mean-square error (MSE) conditioned on the observed sample of any estimate of the classifier error, including the Bayesian error estimator, for both Bayesian models. Thus, Bayesian error estimation has a unique advantage in that its mathematical framework naturally gives rise to a practical expected measure of performance given an observed sample. In Part II of the study we examine consistency of the error estimator, demonstrate various MSE properties, and apply the conditional MSE to censored sampling.
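To make the discrete case concrete, the sketch below illustrates the idea behind the closed-form Bayesian error estimate for a discrete classifier under Dirichlet priors: because the true error is linear in the per-bin probabilities of each class, its posterior expectation is obtained by plugging in the Dirichlet posterior means. This is a minimal illustration, not the paper's full model; the function name, the assumption of a known class prior `c`, and the symmetric hyperparameters `alpha` and `beta` are all our own illustrative choices.

```python
import numpy as np

def bayesian_error_estimate(counts0, counts1, psi, c=0.5, alpha=1.0, beta=1.0):
    """Posterior-mean (MMSE) error estimate for a discrete classifier.

    counts0, counts1 : per-bin sample counts for class 0 and class 1.
    psi              : label (0 or 1) the classifier assigns to each bin.
    c                : class-0 prior probability, assumed known here.
    alpha, beta      : symmetric Dirichlet hyperparameters per class
                       (illustrative simplification).
    """
    counts0 = np.asarray(counts0, dtype=float)
    counts1 = np.asarray(counts1, dtype=float)
    psi = np.asarray(psi)
    # Dirichlet posterior means of the bin probabilities for each class.
    p_post = (counts0 + alpha) / (counts0.sum() + alpha * len(counts0))
    q_post = (counts1 + beta) / (counts1.sum() + beta * len(counts1))
    # The true error is linear in the bin probabilities, so its posterior
    # mean is the same expression evaluated at the posterior means:
    # class-0 mass sent to label 1, plus class-1 mass sent to label 0.
    return c * p_post[psi == 1].sum() + (1 - c) * q_post[psi == 0].sum()
```

For example, with two bins, class-0 counts `[3, 1]`, class-1 counts `[0, 4]`, and the classifier `psi = [0, 1]`, the posterior means are `(4/6, 2/6)` and `(1/6, 5/6)`, giving an estimate of `0.5 * (2/6) + 0.5 * (1/6) = 0.25`.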
