Abstract

The maximization of one-dimensional piecewise linear concave (OPLC) functions arises in the line search associated with the maximization of piecewise linear concave functions (e.g., the Kelley cutting plane method). The OPLC line search is usually done by the next-break-point method, in which one moves from break point to break point until the optimum is reached. If the number of break points is large, this method becomes computationally expensive. One can also use a classical derivative-free line search method, such as the golden section method, but such methods do not take advantage of the OPLC geometry. As an alternative, we propose an improved version of the so-called radar method, which maximizes an OPLC function by maximizing successive outer approximations. We prove superlinear and finite convergence of the radar method. Furthermore, our computational test shows that the radar method is highly effective independently of the number of break points.
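
The abstract does not spell out the radar iteration, but the idea of maximizing successive outer approximations of a concave piecewise linear function can be illustrated with a short sketch. The Python code below is an assumption-based illustration, not the authors' implementation: it represents the OPLC function as f(x) = min_i (a_i x + b_i), builds an outer approximation from the tangent lines at the two current bracketing points, and evaluates f at the maximizer of that approximation. All names (radar_line_search, pieces, tol) are hypothetical.

def radar_line_search(pieces, lo, hi, tol=1e-12, max_iter=100):
    """Sketch: maximize f(x) = min(a*x + b for (a, b) in pieces) on [lo, hi]
    by maximizing successive two-line outer approximations of f."""
    def f_and_slope(x):
        # Value of f at x and the slope of one active (minimizing) piece.
        return min((a * x + b, a) for (a, b) in pieces)

    fl, gl = f_and_slope(lo)   # value and subgradient at the left bracket
    fr, gr = f_and_slope(hi)   # value and subgradient at the right bracket
    if gl <= 0:                # f is non-increasing at lo: lo is optimal
        return lo, fl
    if gr >= 0:                # f is non-decreasing at hi: hi is optimal
        return hi, fr

    x, fx = lo, fl
    for _ in range(max_iter):
        # The two tangent lines form an outer approximation of f; their
        # intersection maximizes it, and its value bounds max f from above.
        x = (fr - fl + gl * lo - gr * hi) / (gl - gr)
        upper_bound = fl + gl * (x - lo)
        fx, gx = f_and_slope(x)
        if upper_bound - fx <= tol:   # model value matches f: x is optimal
            return x, fx
        if gx > 0:                    # optimum lies to the right of x
            lo, fl, gl = x, fx, gx
        elif gx < 0:                  # optimum lies to the left of x
            hi, fr, gr = x, fx, gx
        else:                         # zero subgradient: x maximizes f
            return x, fx
    return x, fx

# Hypothetical example: f(x) = min(2x + 1, -0.5x + 4, -3x + 10) on [0, 5];
# the maximizer is x = 1.2 with f(x) = 3.4.
print(radar_line_search([(2.0, 1.0), (-0.5, 4.0), (-3.0, 10.0)], 0.0, 5.0))

Each iteration evaluates f once and tightens the outer approximation; since an OPLC function has finitely many pieces, only finitely many distinct tangent lines can appear, which is consistent with (though not a substitute for) the finite-convergence result stated in the abstract.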
