Abstract

In this work, we consider methods for solving large-scale optimization problems with a possibly nonsmooth objective function. The key idea is to first parametrize a class of optimization methods using a generic iterative scheme involving only linear operations and applications of proximal operators. This scheme contains some modern primal-dual first-order algorithms like the Douglas–Rachford and primal-dual hybrid gradient methods as special cases. Moreover, we show weak convergence of the iterates to an optimal point for a new method which also belongs to this class. Next, we interpret the generic scheme as a neural network and use unsupervised training to learn the best set of parameters for a specific class of objective functions while imposing a fixed number of iterations. In contrast to other approaches of "learning to optimize," we present an approach which learns parameters only in the set of convergent schemes. Finally, we illustrate the approach on optimization problems arising in tomographic reconstruction and image deconvolution, and train optimization algorithms for optimal performance given a fixed number of iterations.
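To make the idea concrete, the sketch below (not code from the paper) unrolls one member of this class, the primal-dual hybrid gradient (Chambolle–Pock) iteration, for a fixed number of iterations; the step sizes and over-relaxation parameter are the kind of scheme parameters that a learned variant would tune for a class of objectives. All names (`pdhg`, `prox_g`, `prox_f_conj`) and the small LASSO-style test problem are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: primal-dual hybrid gradient (PDHG) for min_x g(x) + f(Kx),
# run for a fixed number of iterations.  The step sizes (tau, sigma) and the
# over-relaxation parameter (theta) are the tunable scheme parameters.
import numpy as np

def pdhg(K, prox_g, prox_f_conj, x0, n_iter=10, tau=0.1, sigma=0.1, theta=1.0):
    """Run n_iter PDHG iterations.

    K            : linear operator (NumPy array)
    prox_g       : (x, tau)   -> prox of tau * g at x
    prox_f_conj  : (y, sigma) -> prox of sigma * f^* at y
    """
    x, x_bar = x0.copy(), x0.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        # dual step through the proximal operator of the conjugate f^*
        y = prox_f_conj(y + sigma * (K @ x_bar), sigma)
        # primal step through the proximal operator of g
        x_new = prox_g(x - tau * (K.T @ y), tau)
        # over-relaxation (extrapolation) step
        x_bar = x_new + theta * (x_new - x)
        x = x_new
    return x

# Example problem (hypothetical): min_x 0.5*||Kx - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))
b = K @ (rng.standard_normal(50) * (rng.random(50) < 0.1))
lam = 0.1

# prox of tau * lam * ||.||_1 is soft-thresholding
prox_g = lambda x, tau: np.sign(x) * np.maximum(np.abs(x) - tau * lam, 0.0)
# prox of sigma * f^* for f(y) = 0.5*||y - b||^2
prox_f_conj = lambda y, sigma: (y - sigma * b) / (1.0 + sigma)

step = 0.9 / np.linalg.norm(K, 2)   # tau * sigma * ||K||^2 < 1
x_hat = pdhg(K, prox_g, prox_f_conj, x0=np.zeros(50),
             n_iter=100, tau=step, sigma=step)
```

In a learned variant along the lines described above, the loop would be unrolled into a fixed-depth network and parameters such as `tau`, `sigma`, and `theta` would be fit by unsupervised training over a class of problem instances, subject to constraints that keep the scheme within the convergent regime.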
