Abstract
As a state-of-the-art novelty detection method, the Kernel Null Foley–Sammon Transform (KNFST) can identify multiple known classes and detect novelties from unknown classes with a single model. However, KNFST captures only the global information of the training set and neglects the local geometrical structure. In this paper, manifold learning is incorporated into KNFST to address this issue. First, we use manifold graphs to describe the local structure of the within-class scatter and the total scatter. Second, the training samples from each class are mapped to a single point in the null space via null projected directions (NPDs). The proposed method overcomes the weakness of KNFST caused by ignoring the local geometrical structure within each class. Experimental results on several toy and benchmark datasets show that the proposed manifold learning novelty detection (MLND) method is superior to KNFST.
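For intuition, the following is a minimal NumPy sketch of the linear null Foley–Sammon idea that KNFST kernelizes: it computes null projected directions that zero out the within-class scatter while preserving total scatter, so every known class collapses to a single target point and a test sample's distance to the nearest target gives a novelty score. This is an illustrative toy under simplifying assumptions, not the authors' kernelized, manifold-graph-weighted MLND; the function name and tolerance are hypothetical.

```python
import numpy as np

def null_projected_directions(X, y, tol=1e-8):
    """Linear toy version of null projected directions (NPDs):
    directions w with w^T S_w w = 0 and w^T S_t w > 0."""
    X = X - X.mean(axis=0)                      # center the data
    St = X.T @ X                                # total scatter
    Sw = np.zeros_like(St)                      # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c] - X[y == c].mean(axis=0)
        Sw += Xc.T @ Xc
    # Restrict to the range of the total scatter
    eval_t, evec_t = np.linalg.eigh(St)
    B = evec_t[:, eval_t > tol * eval_t.max()]
    # Inside that subspace, keep the null space of the within-class scatter
    eval_w, evec_w = np.linalg.eigh(B.T @ Sw @ B)
    return B @ evec_w[:, eval_w < tol * eval_w.max()]

# Usage: three known classes in a high-dimensional (small-sample) setting;
# each class collapses to one point after projection onto the NPDs.
X = np.vstack([np.random.randn(10, 50) + m for m in (0.0, 5.0, -5.0)])
y = np.repeat([0, 1, 2], 10)
W = null_projected_directions(X, y)
targets = {c: ((X[y == c] - X.mean(axis=0)) @ W).mean(axis=0) for c in np.unique(y)}
```

The sketch works on raw features; KNFST performs the same construction in a kernel-induced feature space, and the proposed MLND additionally weights the scatter matrices with manifold graphs to preserve local geometry.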