Abstract

This paper addresses a convolutional dictionary learning method that is robust against outliers. Convolutional dictionary learning approximates a signal by a sum of convolutions of dictionary filters with their corresponding coefficient maps, and its cost function is a weighted sum of two terms: an error term and a regularization term. Many studies employ the ℓ₂ norm for the former and the ℓ₁ norm for the latter, and to increase robustness, the ℓ₁ norm is substituted for the ℓ₂ norm in the error term. For optimization problems given as the sum of two convex terms, the proximal gradient method is a powerful solver; however, it requires one of the terms to be differentiable with a Lipschitz-continuous gradient, which neither of the two ℓ₁ terms satisfies, since the ℓ₁ norm is not differentiable everywhere. This paper applies the Moreau envelope to the ℓ₁ error term, which turns the ℓ₁ error into the Huber function, a smooth approximation whose gradient is Lipschitz continuous. Experimental results show that dictionaries learned with the proposed method are more robust than those learned with the ℓ₂ error term.
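
The key step in the abstract is that the Moreau envelope of the absolute value is exactly the Huber function, which makes the smoothed error term usable in a proximal gradient scheme. The sketch below is a minimal illustration of that idea, not the paper's implementation: the matrix dictionary D, the signal s, and all parameter values are hypothetical placeholders, and a plain matrix stands in for the convolutional dictionary for brevity. It numerically checks the envelope identity and runs proximal gradient (ISTA) with a Huber data term and soft-thresholding as the prox of the ℓ₁ regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber(r, mu):
    # Huber function: the Moreau envelope of the absolute value,
    # quadratic for |r| <= mu and linear in the tails.
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= mu, r ** 2 / (2 * mu), np.abs(r) - mu / 2)

def huber_grad(r, mu):
    # Gradient of the Huber function: clip(r / mu, -1, 1).
    # It is Lipschitz continuous with constant 1 / mu, which is what
    # the proximal gradient method needs from the smooth term.
    return np.clip(np.asarray(r, dtype=float) / mu, -1.0, 1.0)

# Numerical check of the envelope identity:
# min_v |v| + (r - v)^2 / (2 mu) equals huber(r, mu).
mu = 0.5
v = np.linspace(-5.0, 5.0, 100001)
for r in (-2.0, -0.3, 0.4, 3.0):
    envelope = np.min(np.abs(v) + (r - v) ** 2 / (2 * mu))
    assert abs(envelope - huber(r, mu)) < 1e-4

# Toy sparse coding with the smoothed error term (hypothetical sizes,
# matrix dictionary instead of a convolutional one):
#   minimize_z  sum_i huber((D @ z - s)_i, mu) + lam * ||z||_1
D = rng.standard_normal((32, 64))
s = rng.standard_normal(32)
lam = 0.1
step = mu / np.linalg.norm(D, 2) ** 2  # 1/L with L = ||D||_2^2 / mu
z = np.zeros(64)
for _ in range(200):
    g = D.T @ huber_grad(D @ z - s, mu)  # gradient of the smooth term
    z = z - step * g
    # Soft-thresholding: the prox of lam * ||.||_1 at step size `step`.
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
print("nonzero coefficients:", np.count_nonzero(z))
```

The same two-step pattern (gradient step on the Huber term, prox step on the ℓ₁ regularizer) carries over to the convolutional setting, where the matrix-vector products become convolutions with the dictionary filters.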
