Abstract
This paper proposes a convolutional dictionary learning method that is robust against outliers. Convolutional dictionary learning approximates a signal by a sum of convolutions of dictionary filters with their corresponding coefficient maps, and its cost function is a weighted sum of two terms: an error term and a regularization term. Many studies employ the ℓ2 norm for the former and the ℓ1 norm for the latter, and to increase robustness, the ℓ1 norm can be substituted for the ℓ2 norm in the error term. For optimization problems of this form, the sum of two convex terms, the proximal gradient method is a powerful solver; however, it is not directly applicable when both terms are ℓ1 norms, since neither term has a Lipschitz-continuous gradient. This paper applies the Moreau envelope to the ℓ1 error term, which turns the ℓ1 error into the Huber error function, a function that is differentiable and has a Lipschitz-continuous gradient. Experimental results show that dictionaries generated with the proposed method are more robust than those learned with the ℓ2 error term.
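As a minimal sketch of the smoothing idea described above (not the authors' implementation: a generic matrix D stands in for the convolutional dictionary, and gamma, lam, and the iteration count are hypothetical choices), the Python code below evaluates the Moreau envelope of the ℓ1 error, i.e. the Huber function, and uses its Lipschitz-continuous gradient in a proximal gradient iteration, with soft-thresholding serving as the proximal operator of the ℓ1 regularization term:

```python
import numpy as np

def huber(r, gamma):
    """Moreau envelope of |.| with parameter gamma: the Huber function."""
    a = np.abs(r)
    return np.where(a <= gamma, r**2 / (2 * gamma), a - gamma / 2)

def huber_grad(r, gamma):
    """Gradient of the Huber function; Lipschitz continuous with constant 1/gamma."""
    return np.clip(r / gamma, -1.0, 1.0)

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))       # stand-in for the convolutional dictionary
s = rng.standard_normal(64)              # observed signal (possibly with outliers)
gamma, lam = 0.1, 0.05                   # hypothetical smoothing and sparsity weights
eta = gamma / np.linalg.norm(D, 2) ** 2  # step size from the 1/gamma Lipschitz bound

x = np.zeros(128)
for _ in range(200):
    # Proximal gradient step: gradient descent on the smoothed (Huber) error
    # term, then the prox of the l1 regularization term (soft-thresholding).
    grad = D.T @ huber_grad(D @ x - s, gamma)
    x = soft_threshold(x - eta * grad, eta * lam)

print("Huber cost:", huber(D @ x - s, gamma).sum() + lam * np.abs(x).sum())
```

Because the gradient of the Huber error term is (1/gamma)-Lipschitz, the step size eta = gamma / ||D||^2 keeps the iteration stable; a smaller gamma approximates the ℓ1 error more closely but forces proportionally smaller steps.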