Abstract

In this paper, the problem of minimizing a function f(x) subject to a constraint ϕ(x) = 0 is considered, where f is a scalar, x an n-vector, and ϕ a q-vector, with q < n. Several conjugate gradient-restoration algorithms are analyzed; these algorithms are composed of an alternating succession of conjugate gradient phases and restoration phases. In the conjugate gradient phase, one tries to improve the value of the function while avoiding excessive constraint violation. In the restoration phase, one tries to reduce the constraint error while avoiding excessive change in the value of the function. Concerning the conjugate gradient phase, two classes of algorithms are considered: for algorithms of Class I, the multiplier λ is determined so that the error in the optimum condition is minimized for given x; for algorithms of Class II, the multiplier λ is determined so that the constraint is satisfied to first order. Concerning the restoration phase, two topics are investigated: (a) restoration type, that is, complete restoration vs incomplete restoration; and (b) restoration frequency, that is, frequent restoration vs infrequent restoration. Depending on the combination of type and frequency of restoration, four algorithms are generated within Class I and within Class II, respectively: Algorithm (α) is characterized by complete and frequent restoration; Algorithm (β) by incomplete and frequent restoration; Algorithm (γ) by complete and infrequent restoration; and Algorithm (δ) by incomplete and infrequent restoration. If the function f(x) is quadratic and the constraint ϕ(x) is linear, all of the previous algorithms are identical; that is, they produce the same sequence of points and converge to the solution in the same number of iterations. This number of iterations is at most N* = n − q if the starting point x_s is such that ϕ(x_s) = 0, and at most N* = 1 + n − q if the starting point x_s is such that ϕ(x_s) ≠ 0.
In order to illustrate the theory, five numerical examples are developed. The first example refers to a quadratic function and a linear constraint. The remaining examples refer to a nonquadratic function and a nonlinear constraint. For the linear-quadratic example, all the algorithms behave identically, as predicted by the theory. For the nonlinear-nonquadratic examples, Algorithm (II-δ), which is characterized by incomplete and infrequent restoration, exhibits superior convergence characteristics. It is of interest to compare Algorithm (II-δ) with Algorithm (I-α), which is the sequential conjugate gradient-restoration algorithm of Ref. 1 and is characterized by complete and frequent restoration. For the nonlinear-nonquadratic examples, Algorithm (II-δ) converges to the solution in a number of iterations which is about one-half to two-thirds that of Algorithm (I-α).
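The linear-quadratic case above can be made concrete with a minimal sketch. This is not the paper's algorithm verbatim, but an illustration under stated assumptions: one complete restoration step projects the starting point onto the linear constraint manifold, after which conjugate gradient steps are taken in the null space of the constraint matrix, so that at most n − q iterations are needed (1 + n − q counting the restoration step). The function and constraint names below are hypothetical.

```python
import numpy as np

def cg_restoration_quadratic(Q, c, A, b, x0):
    """Minimize 0.5 x^T Q x + c^T x subject to A x = b (Q symmetric
    positive definite, A of full row rank q < n).

    Sketch of the linear-quadratic case: a single restoration step
    projects x0 onto the constraint manifold {x : A x = b}; conjugate
    gradient steps are then taken in the null space of A, converging
    in at most n - q iterations (exact arithmetic).
    """
    q, n = A.shape
    # Restoration: least-change correction bringing x0 onto A x = b.
    x = x0 - A.T @ np.linalg.solve(A @ A.T, A @ x0 - b)
    # Orthonormal basis Z (n x (n - q)) for the null space of A,
    # taken from the trailing right-singular vectors of A.
    Z = np.linalg.svd(A)[2][q:].T
    # Conjugate gradient on the reduced system Z^T Q Z y = -Z^T (Q x + c).
    g = Z.T @ (Q @ x + c)          # reduced gradient
    d = -g                          # initial search direction
    for _ in range(n - q):
        Qd = Z.T @ (Q @ (Z @ d))
        alpha = (g @ g) / (d @ Qd)  # exact line-search step
        x = x + alpha * (Z @ d)
        g_new = g + alpha * Qd
        if np.linalg.norm(g_new) < 1e-12:
            break
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x
```

At termination the reduced gradient Zᵀ(Qx + c) vanishes and Ax = b holds, which is equivalent to the first-order optimality conditions; the recovered point can be checked against the solution of the KKT system.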
