Abstract

This paper presents a new class of recurrent neural networks based on projection operators for solving variational inequalities and related optimization problems subject to linear equality and bound constraints. In the proposed approach, the neural network structure is derived from the Karush–Kuhn–Tucker (KKT) optimality conditions. Instead of the commonly used activation functions, the KKT multipliers are treated as control inputs and implemented with finite-time stabilizing terms based on unit control. The output variables of the neural network are proven to be stable in the sense of Lyapunov and to converge to optimal solutions in finite time. In addition, a methodology based on the particle swarm optimization (PSO) algorithm is presented for the optimal selection of the network design parameters. The main advantage of the proposed neural network is that its number of parameters is fixed regardless of the problem dimension, so the network scales easily from lower- to higher-dimensional problems. Finally, simulation results on numerical examples illustrate the effectiveness and performance of the proposed neural network.
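To make the idea of projection-based neural dynamics concrete, the following is a minimal sketch, not the paper's exact KKT/unit-control model: it implements the classic projection neural network dx/dt = -x + P_Ω(x - α∇f(x)) for a bound-constrained quadratic program, integrated with forward-Euler steps. The matrix `Q`, vector `c`, bounds, and step sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative bound-constrained QP: min 0.5 x^T Q x + c^T x, 0 <= x <= 1.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # positive definite
c = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)    # box constraints

def project(x):
    """Projection P_Omega onto the box [lo, hi] (componentwise clip)."""
    return np.clip(x, lo, hi)

# Projection neural network dynamics, forward-Euler discretization:
#   x_{k+1} = x_k + dt * ( -x_k + P_Omega(x_k - alpha * (Q x_k + c)) )
alpha, dt = 0.2, 0.05
x = np.array([0.9, 0.1])            # arbitrary initial state
for _ in range(2000):
    x = x + dt * (-x + project(x - alpha * (Q @ x + c)))

# At equilibrium, x satisfies the fixed-point (KKT) condition
# x = P_Omega(x - alpha * grad f(x)).
print(x)
```

For this particular data the unconstrained minimizer already lies inside the box, so the trajectory settles at the solution of Qx = -c. The finite-time unit-control terms and the PSO-based parameter selection described in the abstract are refinements beyond this basic asymptotic model.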
