Abstract

This paper presents a framework to solve constrained optimization problems in an accelerated manner based on High-Order Tuners (HT). Our approach is based on reformulating the original constrained problem as the unconstrained optimization of a loss function. We start with convex optimization problems and identify the conditions under which the loss function is convex. Building on the insight that the loss function can be convex even if the original optimization problem is not, we extend our approach to a class of nonconvex optimization problems. The use of an HT together with this approach enables us to achieve a convergence rate better than state-of-the-art gradient-based methods. Moreover, for equality-constrained optimization problems, the proposed method ensures that the state remains feasible throughout the evolution, regardless of the convexity of the original problem.
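To make the reformulation idea concrete, the following is a minimal sketch of one standard way an equality-constrained problem can be folded into an unconstrained loss and then minimized with an accelerated (momentum-based) gradient loop. The quadratic penalty, the Nesterov-style update standing in for the High-Order Tuner, and all problem data (`f`, `a`, `b`, `mu`, the step size, and the momentum coefficient) are illustrative assumptions; the abstract does not specify the paper's exact loss construction or HT update.

```python
import numpy as np

# Hypothetical example: minimize f(x) = ||x - c||^2 subject to a'x = b.
# One common reformulation (assumed here, not taken from the paper) is a
# quadratic penalty:  L(x) = f(x) + (mu/2) * (a'x - b)^2,
# minimized with a Nesterov-style accelerated gradient loop as a stand-in
# for the High-Order Tuner update.

c = np.array([2.0, 0.0])   # unconstrained minimizer of f (assumed data)
a = np.array([1.0, 1.0])   # constraint normal (assumed data)
b = 1.0                    # constraint level (assumed data)
mu = 100.0                 # penalty weight (assumed)

def grad_L(x):
    # Gradient of the penalized loss L(x).
    return 2.0 * (x - c) + mu * (a @ x - b) * a

x = np.zeros(2)
x_prev = x.copy()
beta = 0.9     # momentum coefficient (assumed)
step = 1e-3    # step size (assumed; < 2 / lambda_max of the Hessian)

for _ in range(20000):
    y = x + beta * (x - x_prev)   # look-ahead (momentum) point
    x_prev = x
    x = y - step * grad_L(y)

# With a finite penalty weight the constraint is satisfied approximately;
# the violation shrinks as mu grows.
print(x, a @ x - b)
```

Note that the penalty approach only enforces the constraint approximately; the paper's claim of exact feasibility throughout the evolution for equality constraints would require a different loss construction than this sketch.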
