Abstract

Optimization algorithms now play an important role in many fields, and the question of how to design high-efficiency algorithms has attracted increasing attention; advanced control theory has been shown to be helpful in this regard. In this paper, fixed-time and reset schemes are introduced to design high-efficiency gradient descent methods for unconstrained convex optimization problems. First, a general reset framework for existing accelerated gradient descent methods is given based on their system representation, with which both convergence speed and stability are significantly improved. Then, a novel adaptive fixed-time gradient descent is designed, which has fewer tuning parameters and is more robust to initial conditions. Its discrete form, however, introduces undesirable overshoot and easily leads to instability; the reset scheme is therefore applied to overcome these drawbacks. Linear convergence and improved stability of the proposed algorithms are proven theoretically, and several dedicated simulation examples are given to validate their effectiveness.
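To illustrate the general idea of a reset scheme for accelerated gradient descent, the sketch below applies a standard function-value restart (in the spirit of adaptive restart for Nesterov's method) to a toy convex quadratic. This is a hedged illustration of the reset principle only, not the paper's specific algorithms: the function `accelerated_gd_with_reset`, the step size, and the test problem are all assumptions made for the example.

```python
import numpy as np

def accelerated_gd_with_reset(grad, f, x0, step, iters=500):
    """Nesterov-style accelerated gradient descent with a simple reset:
    whenever the objective increases, the momentum is restarted, which
    suppresses the overshoot/oscillation plain acceleration can exhibit.
    (Illustrative sketch, not the paper's exact method.)"""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0  # momentum parameter
    for _ in range(iters):
        x_new = y - step * grad(y)                      # gradient step at extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        if f(x_new) > f(x):                             # reset condition: objective went up
            y, t_new = x_new, 1.0                       # restart momentum
        x, t = x_new, t_new
    return x

# Toy ill-conditioned convex quadratic f(x) = 0.5 x^T A x, minimizer at the origin.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = accelerated_gd_with_reset(grad, f, np.array([5.0, 5.0]), step=1.0 / 100.0)
print(np.linalg.norm(x_star))  # close to zero after 500 iterations
```

The reset condition here is purely function-value based; other triggers (e.g. a gradient-based condition) follow the same pattern of monitoring the iterates and restarting the momentum state when acceleration becomes counterproductive.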
