Abstract

Convolutional sparse coding (CSC) improves on standard sparse coding by learning a shift-invariant dictionary from the data. However, most existing CSC algorithms operate in batch mode and are computationally expensive. In this paper, we alleviate this problem with online learning. The key is a reformulation of the CSC objective so that convolution can be handled easily in the frequency domain and much smaller history matrices are needed. To solve the resulting optimization problem, we use the alternating direction method of multipliers (ADMM), whose subproblems have efficient closed-form solutions. Theoretical analysis shows that the learned dictionary converges to a stationary point of the optimization problem. Extensive experiments are performed on both standard CSC benchmark data sets and much larger data sets such as ImageNet. Results show that the proposed algorithm outperforms state-of-the-art batch and online CSC methods: it is more scalable, converges faster, and achieves better reconstruction performance.
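As a rough illustration of why the frequency domain helps here (a minimal sketch, not the paper's actual formulation), the reformulation rests on the convolution theorem: circular convolution in the spatial domain becomes elementwise multiplication of DFTs, turning each convolution in the CSC objective into a cheap product. The filter `d`, code map `z`, and sizes below are hypothetical.

```python
import numpy as np

n, k = 64, 8                                     # hypothetical signal length, filter size
rng = np.random.default_rng(0)
d = np.pad(rng.standard_normal(k), (0, n - k))   # one dictionary filter, zero-padded to length n
z = rng.standard_normal(n)                       # its code map (dense here, just for the demo)

# Direct circular convolution in the spatial domain: O(n^2).
x_direct = np.array([np.sum(d * np.roll(z[::-1], i + 1)) for i in range(n)])

# Convolution theorem: elementwise product of DFTs, O(n log n).
x_fft = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(z)))

assert np.allclose(x_direct, x_fft)
```

With K filters over N samples, each data-fitting term then decouples across frequencies, which is what makes the small per-frequency history matrices and closed-form ADMM subproblem solutions described above possible.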
