Abstract

This paper studies the problem of learning supervised kernels from a large amount of side information. We propose a new loss function derived from the Laplacian matrix of a special complete graph generated from the side information. We analyze the relationship between the proposed loss function and kernel alignment. Our theoretical analysis shows that the two are closely related: both exploit side information fused into a matrix, and both adopt a similar regularization strategy. Moreover, the proposed loss function has a linear form, so it incorporates side information more efficiently than kernel alignment, which must be computed nonlinearly. The proposed loss function is used to generate new kernels as “low-cost” alternatives to kernels learned by certain state-of-the-art methods. The empirical results demonstrate the superiority of the proposed method over state-of-the-art methods in terms of both classification accuracy and computational cost.
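
The abstract leaves the exact construction to the full paper, but the following minimal sketch illustrates the two ingredients it contrasts: a linear, trace-form loss built from the Laplacian of a complete graph over pairwise side information, and empirical kernel alignment against the ideal kernel, which is nonlinear in the kernel matrix because of its Frobenius-norm normalization. The ±1 edge weights, the label encoding, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def laplacian_from_side_info(y):
    """Build the Laplacian of a complete graph from class labels y.

    Edge weight is +1 for same-class pairs and -1 for different-class
    pairs (an illustrative choice, not necessarily the paper's weights).
    """
    S = np.where(y[:, None] == y[None, :], 1.0, -1.0)  # pairwise side information
    np.fill_diagonal(S, 0.0)                           # no self-loops
    D = np.diag(S.sum(axis=1))                         # degree matrix
    return D - S                                       # graph Laplacian L = D - S

def linear_laplacian_loss(K, L):
    """Linear (trace-form) loss: <K, L>_F = trace(K L)."""
    return np.trace(K @ L)

def kernel_alignment(K, y):
    """Empirical alignment between K and the ideal kernel y y^T;
    nonlinear in K because of the Frobenius-norm normalization."""
    Y = np.outer(y, y).astype(float)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 3))
    y = np.array([1, 1, 1, -1, -1, -1])   # toy labels acting as side information
    K = X @ X.T                           # a simple linear kernel
    L = laplacian_from_side_info(y)
    print("linear Laplacian loss:", linear_laplacian_loss(K, L))
    print("kernel alignment     :", kernel_alignment(K, y))
```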
