Abstract

The aggregate function is a useful smooth approximation to the max-function of finitely many smooth functions and has been used to solve minimax problems, linear and nonlinear programming problems, generalized complementarity problems, etc. Although the aggregate function is a single smooth function, it is complex, and its gradient and Hessian are time-consuming to compute. In this paper, a truncated aggregate smoothing stabilized Newton method for solving minimax problems is presented. At each iteration, only a small subset of the components in the max-function is aggregated, so the number of gradient and Hessian evaluations is reduced dramatically. The subset is adaptively updated by truncation criteria that involve only function values, not gradients or Hessians, to guarantee global convergence and, for the inner iteration, locally quadratic convergence at as low a computational cost as possible. Numerical results show the efficiency of the proposed algorithm.
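A minimal Python sketch (not the paper's implementation) may help fix ideas. The aggregate function of a finite max is the standard log-sum-exp smoothing F_p(x) = (1/p) ln Σ_i exp(p f_i(x)), and truncation keeps only components whose values lie within a tolerance of the current maximum. The names aggregate and truncated_aggregate and the single-tolerance selection rule are illustrative assumptions; the paper's actual truncation criteria are more elaborate.

import numpy as np

def aggregate(f_vals, p):
    # Smooth approximation of max(f_vals): (1/p) * log(sum(exp(p * f_i))).
    m = np.max(f_vals)  # shift by the max for numerical stability
    return m + np.log(np.sum(np.exp(p * (f_vals - m)))) / p

def truncated_aggregate(f_vals, p, tol):
    # Aggregate only components within `tol` of the current max (an
    # assumed, simplified truncation rule). A dropped component with
    # f_i < max - tol contributes at most exp(-p*tol) to the sum, so the
    # value changes only slightly while its gradient and Hessian need
    # never be evaluated. The rule uses function values only.
    m = np.max(f_vals)
    active = f_vals >= m - tol
    return active, m + np.log(np.sum(np.exp(p * (f_vals[active] - m)))) / p

# Example: only the near-maximal components enter the aggregate.
f = np.array([3.0, 2.95, 1.0, -2.0])
active, val = truncated_aggregate(f, p=100.0, tol=0.1)
print(active, val)  # [ True  True False False] approx. 3.000067

In a Newton method built on this smoothing, the payoff is that gradients and Hessians are assembled only over the active subset, which is typically much smaller than the full set of components.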
