Abstract

In this paper, we present two-timescale neurodynamic optimization approaches to distributed minimax optimization. We propose four multilayer recurrent neural networks for solving four different types of generally nonlinear convex–concave minimax problems subject to linear equality and nonlinear inequality constraints. We derive sufficient conditions that guarantee the stability and optimality of the neural networks. We demonstrate the viability and efficiency of the proposed neural networks in two application paradigms: Nash-equilibrium seeking in a zero-sum game and distributed constrained nonlinear optimization.
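To illustrate the general idea of two-timescale saddle-point dynamics (not the paper's specific multilayer recurrent networks or constraint-handling scheme), the following minimal sketch integrates Euler-discretized gradient descent–ascent dynamics on a simple unconstrained convex–concave function. The objective, the timescale ratio `eps`, and the step size `dt` are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: two-timescale gradient descent-ascent dynamics for the
# convex-concave saddle function f(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
# whose unique saddle point is (0, 0). This is NOT the paper's network model.

def grad_x(x, y):
    # Partial derivative of f with respect to x (the descent/minimization variable).
    return x + y

def grad_y(x, y):
    # Partial derivative of f with respect to y (the ascent/maximization variable).
    return x - y

def two_timescale_dynamics(x0, y0, eps=0.05, dt=1e-2, steps=20000):
    """Euler discretization of the two-timescale dynamics
         eps * dx/dt = -grad_x f(x, y)   (fast descent in x)
               dy/dt = +grad_y f(x, y)   (slow ascent in y)
    where eps < 1 makes the x-dynamics evolve on the faster timescale.
    """
    x, y = x0, y0
    for _ in range(steps):
        x = x - (dt / eps) * grad_x(x, y)  # fast state update
        y = y + dt * grad_y(x, y)          # slow state update
    return x, y

if __name__ == "__main__":
    x_star, y_star = two_timescale_dynamics(1.0, -1.0)
    # Both states converge toward the saddle point (0, 0).
    print(f"approximate saddle point: ({x_star:.4f}, {y_star:.4f})")
```

For this quadratic example the continuous-time dynamics are linearly stable, so the trajectory converges to the saddle point from any initial condition; the paper's contribution lies in establishing analogous stability and optimality guarantees for constrained, generally nonlinear convex–concave problems.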
