Abstract

The kernel method in machine learning transforms data from the original data space into a reproducing kernel Hilbert space (RKHS) and then performs learning in the RKHS, while kernel learning selects the best RKHS for a specific application and a given set of learning samples. Since an RKHS is generated by a kernel function, kernel learning amounts to learning kernel functions. The current dilemma of kernel learning is that few families of kernel functions are available for learning. The first contribution of this paper is a new framework of kernel functions in which the given learning samples can be embedded; moreover, the framework contains a learnable part that can be optimized for specific applications. Symmetric positive definite (SPD) matrix data are increasingly common in machine learning. However, the space of SPD matrices is not a linear space, whereas dictionary learning involves many linear operations, so dictionary learning cannot be performed directly on SPD data. The second contribution of this paper is to apply the proposed kernel framework to dictionary learning of SPD data: the SPD data are first mapped into the RKHS generated by the framework, and then the dictionary and the learnable part of the framework are learned simultaneously in that RKHS. Experimental results on four landmark datasets show that the proposed algorithm outperforms six algorithms recently published in top academic journals.
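
To make the overall pipeline concrete, the sketch below implements a generic kernel dictionary learning loop on SPD matrices in Python. It uses a Gaussian kernel over the log-Euclidean distance as a fixed stand-in for the paper's learnable kernel framework, parameterizes dictionary atoms as combinations of mapped training samples (D = Φ(X)A, the standard kernel dictionary learning device), and replaces the sparsity penalty with a ridge penalty so both updates have closed forms. All function names, the kernel choice, and these simplifications are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: a fixed log-Euclidean Gaussian kernel stands in
# for the paper's learnable kernel framework, and a ridge penalty stands in
# for sparsity. This is NOT the paper's algorithm.
import numpy as np
from scipy.linalg import logm

def log_euclidean_gram(spd_list, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||log(S_i) - log(S_j)||_F^2)."""
    logs = [np.real(logm(S)) for S in spd_list]
    n = len(logs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            d2 = np.sum((logs[i] - logs[j]) ** 2)
            K[i, j] = K[j, i] = np.exp(-gamma * d2)
    return K

def kernel_dictionary_learning(K, n_atoms=8, lam=0.1, n_iter=50, seed=0):
    """Minimize ||Phi(X) - Phi(X) A S||_F^2 + lam ||S||_F^2 in the RKHS.

    Atoms are parameterized as D = Phi(X) A, so only the coefficient
    matrix A (n x n_atoms) and the codes S (n_atoms x n) are stored;
    every quantity needed is expressible through the Gram matrix K.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    A = rng.standard_normal((n, n_atoms))
    I_k = np.eye(n_atoms)
    for _ in range(n_iter):
        # Normalize atoms to unit RKHS norm: a_k^T K a_k = 1.
        A = A / np.sqrt(np.maximum(np.diag(A.T @ K @ A), 1e-12))
        # Coding step: closed-form ridge solution via Gram products only.
        G = A.T @ K @ A                      # atom-atom RKHS inner products
        S = np.linalg.solve(G + lam * I_k, A.T @ K)
        # Dictionary step: A = S^T (S S^T)^{-1} minimizes the same loss.
        A = np.linalg.solve(S @ S.T + 1e-8 * I_k, S).T
    # Final normalization and coding pass for the returned dictionary.
    A = A / np.sqrt(np.maximum(np.diag(A.T @ K @ A), 1e-12))
    S = np.linalg.solve(A.T @ K @ A + lam * I_k, A.T @ K)
    residual = np.eye(n) - A @ S
    err = float(np.trace(residual.T @ K @ residual))  # RKHS recon. error
    return A, S, err

# Toy usage on 40 random 5x5 SPD matrices.
rng = np.random.default_rng(1)
spd = []
for _ in range(40):
    M = rng.standard_normal((5, 5))
    spd.append(M @ M.T + 0.5 * np.eye(5))    # guaranteed SPD
K = log_euclidean_gram(spd)
A, S, err = kernel_dictionary_learning(K)
print(f"RKHS reconstruction error: {err:.4f}")
```

In the paper's setting, the Gram matrix K would itself depend on the learnable part of the kernel framework and would be optimized jointly with the dictionary; here it is held fixed purely for brevity.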
