Abstract

In this paper, we present dictionary learning methods for sparse signal representations in a high-dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches, such as the method of optimal directions (MOD) and K-SVD, can be made nonlinear. We analyze their kernel constructions and demonstrate their effectiveness through several experiments on classification problems. It is shown that nonlinear dictionary learning approaches can provide significantly better performance than their linear counterparts and kernel principal component analysis, especially when the data is corrupted by different types of degradation.

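To make the kernel construction concrete, the sketch below shows one way a MOD-style dictionary update can be kernelized: the dictionary is restricted to the span of the mapped training data, D = Phi(Y) A, so both the sparse-coding and update steps reduce to operations on the Gram matrix k(Y, Y) and the feature map is never formed explicitly. This is a minimal illustrative sketch under those assumptions; the function names (kernel_omp, kernel_mod), the RBF kernel, and all parameter choices are assumptions for illustration, not the exact algorithm evaluated in the paper.

# Minimal sketch of kernel dictionary learning in the MOD style.
# Assumptions: dictionary atoms lie in span(Phi(Y)), RBF kernel, kernel OMP
# for sparse coding. Names and parameters are hypothetical.
import numpy as np

def rbf_kernel(Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||y_i - y_j||^2), Y is d x n."""
    sq = np.sum(Y**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Y.T @ Y
    return np.exp(-gamma * d2)

def kernel_omp(K, A, i, sparsity):
    """Sparse-code sample i against the feature-space dictionary Phi(Y) @ A.

    Uses only the Gram matrix K = k(Y, Y); Phi is never computed.
    """
    n_atoms = A.shape[1]
    kvec = K[:, i]                       # k(Y, y_i)
    x = np.zeros(n_atoms)
    support = []
    for _ in range(sparsity):
        # Correlation of the residual Phi(y_i) - Phi(Y) A x with each atom.
        corr = A.T @ kvec - A.T @ K @ A @ x
        corr[support] = 0.0
        support.append(int(np.argmax(np.abs(corr))))
        As = A[:, support]
        G = As.T @ K @ As                # Gram matrix of the selected atoms
        x_s = np.linalg.solve(G + 1e-10 * np.eye(len(support)), As.T @ kvec)
        x = np.zeros(n_atoms)
        x[support] = x_s
    return x

def kernel_mod(Y, n_atoms=32, sparsity=3, n_iter=10, gamma=1.0):
    """Alternate kernel OMP and a MOD-style closed-form dictionary update."""
    n_samples = Y.shape[1]
    K = rbf_kernel(Y, gamma)
    # Dictionary constrained to the span of Phi(Y): D = Phi(Y) @ A.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n_samples, n_atoms))
    norms = np.sqrt(np.einsum('ik,ij,jk->k', A, K, A))   # a_k^T K a_k
    A /= np.maximum(norms, 1e-12)                         # unit norm in feature space
    for _ in range(n_iter):
        X = np.column_stack([kernel_omp(K, A, i, sparsity) for i in range(n_samples)])
        # MOD-style step: minimize tr((I - A X)^T K (I - A X)) over A,
        # whose minimizer is the pseudo-inverse of the coefficient matrix X.
        A = np.linalg.pinv(X)
        norms = np.sqrt(np.einsum('ik,ij,jk->k', A, K, A))
        A /= np.maximum(norms, 1e-12)
    return A, K

if __name__ == "__main__":
    Y = np.random.default_rng(1).standard_normal((20, 200))  # 20-dim, 200 samples
    A, K = kernel_mod(Y)

Because the dictionary is parameterized by the coefficient matrix A alone, a kernel K-SVD variant would differ only in replacing the closed-form pseudo-inverse update with atom-by-atom updates, while the sparse-coding step stays the same.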