Abstract

Recently, deep neural networks (DNNs) have shown advantages in accelerating optimization algorithms. One approach is to unfold a finite number of iterations of a conventional optimization algorithm and to learn the parameters within those iterations. However, such methods are feed-forward and are, strictly speaking, neither iterative nor convergent. Here, we present a novel DNN-based convergent iterative algorithm that accelerates conventional optimization algorithms. We train a DNN to yield the parameters of the scaled gradient projection method. Until now, these parameters have been chosen heuristically, yet they have been shown to be crucial for good empirical performance. In simulations, the proposed method significantly improves the empirical convergence rate over conventional optimization methods on various large-scale inverse problems in image processing.
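For intuition, below is a minimal sketch of the scaled gradient projection iteration x_{k+1} = P(x_k - alpha_k * D_k * grad f(x_k)), in which the per-iteration step size alpha_k and diagonal scaling D_k would be supplied by a trained DNN rather than a heuristic rule. The function names (predict_parameters, project_nonneg) and the toy quadratic objective are illustrative assumptions, not details from the paper.

```python
import numpy as np

def project_nonneg(x):
    """Projection onto the nonnegative orthant (a common feasible set)."""
    return np.maximum(x, 0.0)

def scaled_gradient_projection(grad_f, x0, predict_parameters, n_iter=100):
    """Run SGP: x_{k+1} = P(x_k - alpha_k * D_k * grad f(x_k)).

    predict_parameters(x, k) returns (alpha_k, d_k): a positive step size
    and a positive diagonal scaling vector. In the paper's setting, a DNN
    would produce these values; here a placeholder rule stands in.
    """
    x = x0.copy()
    for k in range(n_iter):
        alpha_k, d_k = predict_parameters(x, k)
        # Scaled gradient step followed by projection onto the feasible set.
        x = project_nonneg(x - alpha_k * d_k * grad_f(x))
    return x

# Usage with a toy quadratic objective f(x) = 0.5 * ||A x - b||^2.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)

# Heuristic placeholder for the learned parameters: fixed step, identity scaling.
params = lambda x, k: (0.1, np.ones_like(x))

x_star = scaled_gradient_projection(grad_f, np.zeros(2), params, n_iter=500)
print(x_star)
```

The sketch illustrates why the parameters matter: the choice of alpha_k and D_k at each iteration governs the empirical convergence rate, which is exactly what the trained DNN is meant to optimize.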


