Abstract

Given a set of dictionary filters, the most widely used formulation of the convolutional sparse coding (CSC) problem is Convolutional BPDN (CBPDN), in which an image is represented as a sum of convolutions of the dictionary filters with coefficient maps; the coefficient maps are usually ℓ1-norm penalized in order to enforce a sparse solution. Recent theoretical results have provided meaningful guarantees for the success of popular ℓ1-norm penalized CSC algorithms in the noiseless case. However, experimental results for the ℓ0-norm penalized CSC case have not been addressed. In this paper we propose a two-step ℓ0-norm penalized CSC (ℓ0-CSC) algorithm, which outperforms known solutions to the ℓ0-CSC problem in convergence rate, reconstruction performance, and sparsity. Furthermore, our proposed algorithm, which is a convolutional extension of our previous work [1], originally developed for the ℓ0-regularized optimization problem, includes an escape strategy to avoid being trapped in saddle points or in inferior local solutions, which are common in nonconvex optimization problems, such as those that use the ℓ0-norm as the penalty function.
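The CBPDN objective described above can be sketched numerically. The following is a minimal 1-D illustration, not the authors' implementation: it evaluates 0.5‖Σ_m d_m ∗ x_m − s‖² + λ Σ_m ‖x_m‖₁ for given filters d_m, coefficient maps x_m, and signal s; the function name and the use of "same"-mode convolution are illustrative assumptions.

```python
import numpy as np

def cbpdn_objective(D, X, s, lmbda):
    """Evaluate the CBPDN objective (illustrative sketch):
        0.5 * || sum_m d_m * x_m - s ||_2^2 + lmbda * sum_m ||x_m||_1,
    where * denotes convolution, D is a list of filters, X a list of
    coefficient maps, s the signal, and lmbda the sparsity weight."""
    # Reconstruct the signal as the sum of filter/coefficient-map convolutions.
    recon = sum(np.convolve(d, x, mode="same") for d, x in zip(D, X))
    data_fit = 0.5 * np.sum((recon - s) ** 2)
    # The l1 penalty on the coefficient maps promotes sparse solutions;
    # the l0-CSC variant discussed in the paper replaces it with a count
    # of nonzero coefficients.
    penalty = lmbda * sum(np.abs(x).sum() for x in X)
    return data_fit + penalty
```

An ℓ0-penalized variant would swap the ℓ1 penalty for `lmbda * sum(np.count_nonzero(x) for x in X)`, which makes the problem nonconvex, hence the need for the escape strategy mentioned in the abstract.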
