Abstract

One-class learning is a classical and challenging computational intelligence task. The literature offers several effective and powerful solutions, notably in the realm of kernel machines: Support Vector Domain Description and the recently proposed Import Vector Domain Description (IVDD), which directly delivers the probability that a sample belongs to the class. Here, we propose and discuss two optimization techniques for IVDD that significantly reduce its memory footprint and consequently allow it to scale to larger datasets than the original formulation. First, we use random features to approximate the Gaussian kernel, combined with a primal optimization algorithm. Second, we use a Nyström-like approximation of the functional, combined with a fast-converging and accurate self-consistent algorithm. In particular, we replace the a posteriori sparsity of the original IVDD optimization method with an a priori random selection of landmark samples from the dataset. We find this second approximation to be superior: compared to the original IVDD with the RBF kernel, it achieves high accuracy, is much faster, and yields substantial memory savings.
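As a rough illustration of the two approximation strategies mentioned above (a minimal sketch, not the authors' implementation), the snippet below builds the standard random Fourier feature map for the Gaussian/RBF kernel and a Nyström feature map from randomly chosen landmark samples. The function names, the `gamma` bandwidth parameter, and the NumPy-based structure are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, rng):
    """Random Fourier feature map: dot products of the returned features
    approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies are drawn from the kernel's spectral density (a Gaussian).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def nystrom_features(X, landmarks, gamma):
    """Nystrom feature map built from a small set of landmark samples,
    so that phi(X) @ phi(X).T approximates K(X, X) without ever forming
    the full n-by-n kernel matrix."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    K_mm = rbf(landmarks, landmarks)      # m x m kernel on the landmarks
    K_nm = rbf(X, landmarks)              # n x m cross kernel
    vals, vecs = np.linalg.eigh(K_mm)
    vals = np.clip(vals, 1e-12, None)     # guard against tiny/negative eigenvalues
    return K_nm @ (vecs / np.sqrt(vals))  # K_nm @ U @ Lambda^{-1/2}

# Example: with m landmarks chosen uniformly at random, memory drops from
# O(n^2) for the full kernel matrix to O(n * m) for the feature map.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
landmarks = X[rng.choice(len(X), size=50, replace=False)]
phi = nystrom_features(X, landmarks, gamma=0.1)   # shape (1000, 50)
```

The a priori choice of landmarks mirrors the abstract's point: sparsity is fixed before optimization by picking a random subset of samples, rather than obtained a posteriori from the optimizer.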
