Abstract

Inspired by the recent success of the proximal gradient method (PGM) and recent efforts to develop inertial algorithms, we propose an inertial PGM (IPGM) for convolutional dictionary learning (CDL) that jointly optimizes an ℓ2-norm data fidelity term and a sparsity term enforcing an ℓ1 penalty. In contrast to other CDL methods, the proposed approach updates both the dictionary and the needles with an inertial force via the PGM. We derive novel gradient formulas for the needles and the dictionary with respect to the data fidelity term, and design a gradient-descent step augmented with an inertial term. The proximal step applies a thresholding operation to the needles and projects the dictionary atoms onto the unit-norm sphere. We prove convergence of the proposed IPGM algorithm in the backtracking case. Simulation results show that the proposed IPGM outperforms the PGM and slice-based methods that share the same structure but are optimized using the alternating direction method of multipliers (ADMM).
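To make the update concrete, the following is a minimal sketch of one inertial proximal-gradient step for the needles, assuming the structure described above: an inertial extrapolation, a gradient step on the data fidelity term, and the two proximal operations (soft-thresholding for the needles, unit-norm projection for dictionary atoms). It is not the authors' exact implementation; the function names, the inertia weight beta, and the step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def project_unit_norm(D):
    """Proximal step for the dictionary: project each atom (column) onto the unit-norm sphere."""
    norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D / norms

def ipgm_step(x, x_prev, grad_f, step, beta, lam):
    """One inertial proximal-gradient update for the needles:
    extrapolate with inertia, take a gradient step on the data
    fidelity term f, then apply the l1 proximal operator."""
    z = x + beta * (x - x_prev)                      # inertial extrapolation
    return soft_threshold(z - step * grad_f(z), step * lam)
```

The dictionary update follows the same pattern, with project_unit_norm replacing soft_threshold as the proximal operation.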

Highlights

  • Sparse representation is a popular a priori mathematical modeling approach in various signal and image processing applications

  • We compare the performance of the proposed inertial PGM (IPGM) with that of various existing methods for solving convolutional dictionary learning (CDL) problems

  • The IPGM algorithm achieves the highest peak signal-to-noise ratio (PSNR) of the four compared algorithms, at 29.438 dB


Summary

Introduction

Sparse representation is a popular a priori mathematical modeling approach in various signal and image processing applications. In the convolutional sparse representation model, the convolutional dictionary learning (CDL) process plays an important role, but the associated algorithm and its convergence proof are difficult problems [1]. The convolutional sparse representation model has achieved impressive results in various applications, and research in this field has important theoretical and practical value for a variety of signal and image processing tasks. A typical assumption in this model is that a signal y ∈ ℝᴺ can be written as y = DΓ, a linear combination of the columns of a dictionary D, known as atoms, weighted by a sparse vector Γ. Given y and D, the task of finding the sparsest representation is equivalent to solving the following problem: min_Γ ‖Γ‖₀ subject to y = DΓ.
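Because the ℓ0 problem above is combinatorial, it is commonly relaxed by replacing ‖Γ‖₀ with the ℓ1 norm, which matches the data fidelity plus ℓ1 penalty formulation in the abstract. Below is a minimal sketch of the classical iterative soft-thresholding algorithm (ISTA, i.e., the non-inertial PGM) for this relaxation; the function name, parameters, and toy data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def ista(D, y, lam, n_iters=200):
    """Solve min_G 0.5*||y - D G||_2^2 + lam*||G||_1 via ISTA,
    the l1 relaxation of the l0 sparse-coding problem above."""
    L = np.linalg.norm(D, 2) ** 2                    # Lipschitz constant of the gradient
    G = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ G - y)                     # gradient of the data fidelity term
        Z = G - grad / L                             # gradient step
        G = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-thresholding (l1 prox)
    return G

# Toy usage with random data (illustrative only)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
y = 3.0 * D[:, 0]                                    # signal built from a single atom
G = ista(D, y, lam=0.1)
```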


