Abstract

The current leading algorithms for both convolutional sparse coding and convolutional dictionary learning are based on variable splitting and Augmented Lagrangian methods. The dictionary learning algorithms alternate between a sparse coding subproblem and a dictionary subproblem, typically interleaving the updates of the two. Due to the variable splitting, each subproblem involves both a primal variable and an auxiliary (split) variable, and one of the two must be chosen as the quantity passed to the other subproblem. We perform a careful comparison of the convergence resulting from these different choices, in conjunction with a number of different algorithms for the dictionary subproblem, and show that one of the choices consistently provides the best convergence.
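The alternation described above can be sketched in a minimal, non-convolutional form. The following Python sketch is an illustration under assumed simplifications, not the paper's actual algorithm: it uses an ADMM solver for an ordinary (patch-based, not convolutional) sparse coding step, which yields both a primal variable `X` and an auxiliary split variable `Y`, and a plain least-squares dictionary update with column normalization. The point of interest is the final line, where either `X` or `Y` can be passed to the dictionary update; the paper's comparison concerns exactly this kind of coupling choice.

```python
import numpy as np

def soft_thresh(v, t):
    # Elementwise soft-thresholding, the proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_admm(D, S, lam, rho=1.0, iters=50):
    """ADMM for min_X 0.5||D X - S||_F^2 + lam||Y||_1  s.t.  X = Y.
    Returns both the primal variable X and the split variable Y."""
    n = D.shape[1]
    k = S.shape[1]
    X = np.zeros((n, k)); Y = np.zeros((n, k)); U = np.zeros((n, k))
    A = D.T @ D + rho * np.eye(n)   # system matrix for the X update
    DtS = D.T @ S
    for _ in range(iters):
        X = np.linalg.solve(A, DtS + rho * (Y - U))  # quadratic subproblem
        Y = soft_thresh(X + U, lam / rho)            # l1 proximal step
        U += X - Y                                   # dual (scaled) update
    return X, Y

def dict_update(S, C, eps=1e-8):
    """Least-squares dictionary update with unit-norm columns,
    given fixed coefficients C (either X or Y from the coding step)."""
    D = S @ C.T @ np.linalg.pinv(C @ C.T + eps * np.eye(C.shape[0]))
    norms = np.maximum(np.linalg.norm(D, axis=0), eps)
    return D / norms

# Toy data: 20 signals of dimension 8, dictionary of 5 atoms (hypothetical sizes)
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 20))
D = rng.standard_normal((8, 5))
D /= np.linalg.norm(D, axis=0)

for _ in range(10):
    X, Y = sparse_code_admm(D, S, lam=0.1)
    D = dict_update(S, Y)   # coupling choice: pass Y (split variable) rather than X
```

Swapping `Y` for `X` in the final line gives the alternative coupling; the abstract's claim is that such choices, though often treated as interchangeable, have a consistent effect on convergence.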
