Abstract

Novelty detection is a binary task aimed at identifying whether a test sample is novel or unusual compared to a previously observed training set. A typical approach is to use distance as the criterion for detecting such novelties. However, most previous work does not attempt to learn a distance tailored to each particular problem. In this paper, we propose to detect novelties by exploiting non-linear distances learned from multi-class training data. For this purpose, we adopt a kernelization technique jointly with the Large Margin Nearest Neighbor (LMNN) metric learning algorithm. The learned distance keeps each known class's instances close together while pushing instances from different known classes reasonably far apart. We propose a variant of the K-Nearest Neighbors (KNN) classifier that employs the learned distance to detect novelties. We also use the learned distance to perform multi-class classification. We present quantitative and qualitative experiments on synthetic and real data sets, showing that the learned metrics improve novelty detection compared to other metrics. Our method also outperforms approaches commonly used for novelty detection.
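The KNN-based detection idea in the abstract can be sketched as follows: score a test sample by its mean distance to its k nearest training samples, and flag it as novel when the score exceeds a threshold. This is only an illustrative sketch, not the paper's implementation; the plain Euclidean distance below stands in for the kernelized LMNN metric the paper learns, and all function names, the toy data, and the threshold are hypothetical.

```python
# Illustrative KNN-style novelty detector (hypothetical sketch).
# A test point is novel if its mean distance to the k nearest
# training points exceeds a threshold. Euclidean distance is a
# stand-in for the learned (kernelized LMNN) metric in the paper.
import math

def dist(a, b):
    # Placeholder metric; the paper would use the learned LMNN distance.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_novelty_score(x, train, k=3):
    """Mean distance from x to its k nearest training samples."""
    ds = sorted(dist(x, t) for t in train)
    return sum(ds[:k]) / k

def is_novel(x, train, k=3, threshold=1.0):
    return knn_novelty_score(x, train, k) > threshold

# Toy training set: two known classes clustered near (0,0) and (5,5).
train = [(0.0, 0.1), (0.1, -0.1), (-0.1, 0.0),
         (5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]

print(is_novel((0.05, 0.0), train))   # near a known class -> False
print(is_novel((10.0, -3.0), train))  # far from both classes -> True
```

In the paper's setting, the metric itself is optimized so that within-class distances shrink and between-class distances grow, which makes a fixed threshold on this score more reliable than with a generic distance.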
