Abstract

We consider proximal gradient methods for minimizing a composite objective that is the sum of a differentiable function and a convex function. To accelerate general proximal gradient methods, we focus on proximal quasi-Newton methods based on proximal mappings scaled by quasi-Newton matrices. Although scaled proximal mappings are usually difficult to compute, applying the memoryless symmetric rank-one (SR1) formula makes the computation tractable. Since the scaling (quasi-Newton) matrices must be positive definite, we develop an algorithm that uses the memoryless SR1 formula based on a modified spectral scaling secant condition. We establish subsequential convergence of the proposed method for general objective functions. In addition, we show R-linear convergence of the method under a strong convexity assumption. Finally, some numerical results are reported.
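As a point of reference, the following is a minimal sketch of the unscaled proximal gradient iteration that the abstract takes as its starting point, instantiated for an l1-regularized least-squares problem. The problem data and parameter values are illustrative assumptions, not taken from the paper; the paper's contribution is to replace the scalar step size (an identity scaling) with a proximal mapping scaled by a memoryless SR1 quasi-Newton matrix, which is not implemented here.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal mapping of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(A, b, lam, step, n_iter=500):
    """Unscaled proximal gradient method for
       min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    The paper's method would replace the scalar `step` (identity
    scaling) with a memoryless-SR1-scaled proximal mapping."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Usage on a small random instance (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
b = rng.standard_normal(40)
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

Each iteration takes a gradient step on the smooth term and then applies the proximal mapping of the nonsmooth term; with a quasi-Newton scaling the soft-thresholding step would instead be a scaled proximal mapping, whose tractability for memoryless SR1 matrices is the point of the paper.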
