Abstract

Convolutional sparse representations differ from the standard form in representing the signal to be decomposed as the sum of a set of convolutions with dictionary filters instead of a linear combination of dictionary vectors. The advantage of the convolutional form is that it provides a single-valued representation optimised over the entire signal. The substantial computational cost of the convolutional sparse coding and dictionary learning problems has recently been shown to be greatly reduced by solving in the frequency domain, but the periodic boundary conditions imposed by this approach can introduce boundary artifacts. The present paper compares different approaches to avoiding these effects in both sparse coding and dictionary learning.
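The following is a minimal sketch (not from the paper; all names and sizes are illustrative assumptions) of the convolutional synthesis model the abstract describes, and of why frequency-domain solution implies periodic boundary conditions: reconstructing a signal as the sum of convolutions of dictionary filters with sparse coefficient maps, once with linear convolution and once via the FFT, which performs circular convolution and so wraps filter support around the signal boundary.

```python
import numpy as np

# Hypothetical 1-D example: signal represented as sum_m d_m * x_m,
# where d_m are dictionary filters and x_m are sparse coefficient maps.
rng = np.random.default_rng(0)
N, M, K = 64, 4, 8                   # signal length, number of filters, filter length
D = rng.standard_normal((M, K))      # dictionary filters d_m
X = np.zeros((M, N))                 # coefficient maps x_m (mostly zero)
X[rng.integers(0, M, 6), rng.integers(0, N, 6)] = rng.standard_normal(6)

# Linear (aperiodic) synthesis: sum of convolutions, truncated to length N.
s_linear = sum(np.convolve(D[m], X[m])[:N] for m in range(M))

# Frequency-domain synthesis: pointwise products of DFTs correspond to
# circular convolution, i.e. periodic boundary conditions on the signal.
s_circular = np.real(np.fft.ifft(
    np.sum(np.fft.fft(D, n=N, axis=1) * np.fft.fft(X, axis=1), axis=0)))

# The two reconstructions agree away from the boundary; the wrap-around of
# the circular convolution is the source of the boundary artifacts at issue.
print(np.max(np.abs(s_linear - s_circular)))
```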
