Abstract
Distributed optimization has gained significant attention in recent years, primarily fueled by the availability of large amounts of data and by privacy-preserving requirements. This paper presents a fixed-time convergent optimization algorithm for solving a potentially non-convex optimization problem using a first-order multi-agent system. Each agent in the network can access only its private objective function, while local information exchange is permitted between neighbors. The proposed optimization algorithm combines a fixed-time convergent distributed parameter estimation scheme with a fixed-time distributed consensus scheme as its solution methodology. The results are presented under the assumption that the team objective function is strongly convex, as opposed to the common assumption in the literature that each local objective function is strongly convex. The results extend to the class of possibly non-convex team objective functions satisfying only the Polyak–Łojasiewicz (PL) inequality. It is also shown that the proposed continuous-time scheme, when discretized using Euler's method, leads to consistent discretization.
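To illustrate the general setting (not the paper's fixed-time algorithm itself), the sketch below simulates a standard consensus-plus-gradient flow for first-order agents, discretized with Euler's method. Each agent holds a private quadratic objective f_i(x) = (x - a_i)^2 / 2, exchanges state only with its ring neighbors, and the network drives all states toward the minimizer of the team objective (the mean of the a_i). The quadratic objectives, ring topology, gain k, and step size h are all illustrative assumptions; the flow shown converges exponentially, whereas the paper's scheme achieves fixed-time convergence.

```python
import numpy as np

def distributed_gradient_flow(a, k=50.0, h=0.005, steps=5000):
    """Euler discretization of x' = -k*L*x - grad f(x), where
    L is the graph Laplacian of a ring and f_i(x) = (x - a_i)^2 / 2.
    Each agent i only uses its own a_i and its neighbors' states."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    # Ring-graph Laplacian: each agent talks to its two neighbors.
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] -= 1.0
        L[i, (i + 1) % n] -= 1.0
    x = np.zeros(n)  # arbitrary initial states
    for _ in range(steps):
        grad = x - a                      # local gradients of f_i
        x = x - h * (k * L @ x + grad)    # explicit Euler step
    return x

# Team objective sum_i f_i is strongly convex with minimizer mean(a).
a = [1.0, 2.0, 3.0, 4.0]
x = distributed_gradient_flow(a)
print(x)  # all agents close to 2.5 (exact consensus only as k -> inf)
```

With a finite consensus gain k the equilibrium satisfies (I + kL)x = a, so agents agree only up to O(1/k); this is one motivation for the stronger estimation and consensus schemes combined in the paper.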