Abstract
Statistical pattern classification techniques have been successfully applied to many practical classification problems. In real-world applications, the challenge is often to cope with patterns that lead to unreliable classification decisions. Such situations arise either from unexpected patterns, i.e., patterns that lie in regions far from the training data, or from patterns that fall in the overlap region between classes. This paper proposes a method for estimating the reliability of a classifier in these situations. Whereas existing methods for quantifying reliability often rely solely on the class membership probability estimated from global approximations, in this paper the reliability is quantified by a confidence interval on the class membership probability. The size of the confidence interval is computed explicitly from the local density of training data in the neighborhood of a test pattern. A synthetic example illustrates the various aspects of the proposed approach. In addition, an experimental evaluation on real data sets demonstrates the effectiveness of the proposed approach in detecting unexpected patterns, where the lower bound of the confidence interval is used as the detection criterion. A comparison with state-of-the-art methods shows that the proposed approach is well founded.
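To make the idea concrete, the following is a minimal sketch of the scheme described above: a class membership probability is estimated from the training points near a test pattern, and a confidence interval is attached whose width depends on the local density of training data, with the lower bound used to flag unexpected patterns. The specific choices here (a fixed-radius neighborhood as the density measure, a Wilson score interval, and the 0.5 threshold) are illustrative assumptions, not the paper's exact formulation.

```python
"""Sketch: reliability as a confidence interval on the class membership
probability, with interval width driven by local training-data density."""
import numpy as np


def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion; [0, 1] when n == 0."""
    if n == 0:
        return 0.0, 1.0
    p_hat = successes / n
    denom = 1.0 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)


def reliability(x, X_train, y_train, target_class, radius=1.0, z=1.96):
    """Estimate P(target_class | x) from training points within `radius` of x
    and return (p_hat, lower, upper); sparse neighborhoods give wide intervals."""
    dists = np.linalg.norm(X_train - x, axis=1)
    local = dists <= radius                      # neighborhood of the test pattern
    n_local = int(local.sum())                   # local density (sample count)
    n_class = int((y_train[local] == target_class).sum())
    lower, upper = wilson_interval(n_class, n_local, z)
    p_hat = n_class / n_local if n_local else 0.5
    return p_hat, lower, upper


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two Gaussian classes in 2-D.
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)

    for name, x in [("typical", np.array([0.0, 0.0])),
                    ("overlap", np.array([1.5, 1.5])),
                    ("unexpected", np.array([12.0, -9.0]))]:
        p, lo, hi = reliability(x, X, y, target_class=0)
        flagged = lo < 0.5                       # low lower bound -> unreliable decision
        print(f"{name:10s} p={p:.2f} CI=[{lo:.2f}, {hi:.2f}] flagged={flagged}")
```

In this sketch, a pattern far from all training data gets an empty neighborhood and hence the vacuous interval [0, 1], so its lower bound is 0 and it is flagged as unexpected; a pattern in the class-overlap region keeps a probability near 0.5 and is likewise flagged, while a typical pattern yields a narrow interval with a high lower bound.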