Abstract

The conjugate gradient projection method is one of the most effective methods for solving large-scale monotone nonlinear equations with convex constraints. In this paper, a new conjugate parameter is designed to generate the search direction, an adaptive line search strategy is improved to yield the step size, and a new conjugate gradient projection method is then proposed for large-scale monotone nonlinear equations with convex constraints. Under mild conditions, the proposed method is proved to be globally convergent. Extensive numerical experiments comparing the proposed method with existing ones indicate that it is very promising. Finally, the proposed method is applied to the recovery of sparse signals.

Highlights

  • Many algorithms have been proposed to deal with (1) during the past few decades, such as the projected Newton method [9], the projected quasi-Newton method [10,11,12,13], the Levenberg–Marquardt method [14], the trust region method [15], and the Lagrangian global method [16]. These methods converge rapidly if sufficiently good initial points are chosen, but they are not well suited to large-scale constrained nonlinear equations because they compute the Jacobian matrix or an approximation of it at each iteration. Therefore, in the past few years, the projected derivative-free method (PDFM) has become increasingly popular, e.g., the spectral gradient projection method [17,18,19], the multivariate spectral gradient-type projection method [20, 21], and the conjugate gradient projection method (CGPM) [22,23,24,25,26,27,28,29,30,31]; further PDFMs can be found in [32, 33]

  • The original signal x0 randomly contains 24 nonzero elements. The random matrix A is generated by the Matlab command rand(n, k), and the observed data b is obtained by b = Ax0 + e, where e is Gaussian noise. The merit function is f(x) = τ‖x‖₁ + (1/2)‖Ax − b‖₂², and the value of τ is obtained by the same continuation technique for the two algorithms mentioned above
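The experimental setup above can be sketched in a few lines of Python. The problem dimensions, noise scale, and the value of τ below are illustrative assumptions, not the paper's actual settings; NumPy's `default_rng` plays the role of Matlab's `rand`, and only the merit function f(x) = τ‖x‖₁ + (1/2)‖Ax − b‖₂² is evaluated, not the proposed projection method itself.

```python
import numpy as np

def make_problem(n=256, k=64, nnz=24, noise=1e-3, seed=0):
    """Build a k-by-n random sensing matrix A, a sparse signal x0 with
    nnz nonzero entries, and noisy observations b = A @ x0 + e."""
    rng = np.random.default_rng(seed)
    A = rng.random((k, n))                         # uniform random matrix, as with rand in Matlab
    x0 = np.zeros(n)
    support = rng.choice(n, size=nnz, replace=False)
    x0[support] = rng.standard_normal(nnz)         # random values on a random support
    e = noise * rng.standard_normal(k)             # Gaussian noise
    b = A @ x0 + e
    return A, x0, b

def merit(x, A, b, tau):
    """Merit function f(x) = tau * ||x||_1 + 0.5 * ||A x - b||_2^2."""
    r = A @ x - b
    return tau * np.abs(x).sum() + 0.5 * r @ r
```

In the paper's experiments τ is driven down by a continuation technique; here it would simply be passed to `merit` as a fixed parameter.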

Introduction

Many algorithms have been proposed to deal with (1) during the past few decades (see, e.g., [9,10,11,12,13,14,15,16]), such as the projected Newton method [9], the projected quasi-Newton method [10,11,12,13], the Levenberg–Marquardt method [14], the trust region method [15], and the Lagrangian global method [16]. As we know, these methods converge rapidly if sufficiently good initial points are chosen. However, they are not well suited to large-scale constrained nonlinear equations because they compute the Jacobian matrix or an approximation of it at each iteration. Therefore, in the past few years, the projected derivative-free method (PDFM) has become increasingly popular, e.g., the spectral gradient projection method [17,18,19], the multivariate spectral gradient-type projection method [20, 21], and the conjugate gradient projection method (CGPM) [22,23,24,25,26,27,28,29,30,31]; further PDFMs can be found in [32, 33]
