Abstract

Continuous-time optimization is currently an active field of research in optimization theory; prior work in this area has yielded useful insights and elegant methods for proving stability and convergence properties of continuous-time optimization algorithms. This article proposes novel gradient-flow schemes that converge to the optimal point of a convex optimization problem within a fixed time from any initial condition, covering unconstrained optimization, constrained optimization, and min-max problems. It is shown that the solution of the modified gradient-flow dynamics exists and is unique under certain regularity conditions on the objective function, while fixed-time convergence to the optimal point is established via Lyapunov-based analysis. The application of the modified gradient flow to unconstrained optimization problems is studied under the assumption of gradient dominance, a relaxation of strong convexity. Then, a modified Newton's method is presented that exhibits fixed-time convergence under mild conditions on the objective function. Building upon this method, a novel technique is developed for solving convex optimization problems with linear equality constraints that converges to the optimal point in fixed time. Finally, the general min-max problem is considered, and modified saddle-point dynamics are developed to obtain the optimal solution in fixed time.
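To illustrate the flavor of such schemes, the sketch below simulates one commonly used fixed-time gradient-flow form, ẋ = −c₁∇f(x)/‖∇f(x)‖^(1/2) − c₂∇f(x)‖∇f(x)‖, via forward-Euler discretization on a strongly convex quadratic. This particular form, the gains `c1`, `c2`, and the test objective are illustrative assumptions, not necessarily the exact dynamics proposed in the article; the sub-unit exponent drives finite-time convergence near the optimum, while the super-unit term bounds the convergence time uniformly over initial conditions.

```python
import numpy as np

def fixed_time_flow(grad, x0, c1=1.0, c2=1.0, dt=1e-3, steps=20000, tol=1e-9):
    """Forward-Euler simulation of a fixed-time-style gradient flow
    (hypothetical example form, not the article's exact dynamics):
        xdot = -c1 * g / ||g||^(1/2) - c2 * g * ||g||,  g = grad f(x).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        n = np.linalg.norm(g)
        if n < tol:  # guard: the flow is undefined at the optimum itself
            break
        # sub-unit exponent term dominates near the optimum (finite time),
        # super-unit term dominates far away (uniform bound on time)
        x = x + dt * (-c1 * g / np.sqrt(n) - c2 * g * n)
    return x

# gradient of f(x) = 0.5 * ||x||^2, a strongly convex test objective
grad = lambda x: x
x_star = fixed_time_flow(grad, x0=[5.0, -3.0])
```

Note that a plain Euler step with the sub-unit exponent can chatter very close to the optimum; in practice one would use a smaller step or an adaptive integrator there, but the sketch is enough to observe the rapid decay of `||x||`.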
