Abstract

To solve online the linear matrix equation AXB = C with time-varying coefficients, this paper presents a special kind of recurrent neural network based on a design method recently proposed by Zhang et al. Unlike gradient neural networks (abbreviated as GNN, also termed gradient-based neural networks), the resultant Zhang neural network (abbreviated as ZNN hereafter for presentation convenience) is designed on the basis of a matrix-valued error function rather than a scalar-valued one. The ZNN is deliberately developed so that its trajectory is guaranteed to converge globally and exponentially to the time-varying theoretical solution of the given linear matrix equation. In addition, the ZNN is described by implicit dynamics, rather than the explicit dynamics that usually describe recurrent neural networks. Convergence results are presented to characterize the network's performance. For comparison, a gradient neural network is also developed and simulated for the online solution of the same time-varying linear matrix equation. Computer-simulation results substantiate the theoretical efficacy and the superior performance of the ZNN for the online solution of the time-varying linear matrix equation, especially when a power-sigmoid activation function is used.
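For concreteness, the ZNN design described above can be sketched as follows. This is a reconstruction from the standard ZNN design formula in the literature, not necessarily the paper's exact notation: E(t) denotes the matrix-valued error function, gamma > 0 is a design parameter, and \mathcal{F}(\cdot) is a monotonically increasing activation-function array (e.g., linear or power-sigmoid), all applied entrywise.

\[
E(t) := A(t)\,X(t)\,B(t) - C(t), \qquad \dot{E}(t) = -\gamma\,\mathcal{F}\bigl(E(t)\bigr),
\]

Expanding \dot{E}(t) = \dot{A}XB + A\dot{X}B + AX\dot{B} - \dot{C} and substituting into the design formula yields the implicit ZNN dynamics

\[
A(t)\,\dot{X}(t)\,B(t) = -\dot{A}(t)\,X(t)\,B(t) - A(t)\,X(t)\,\dot{B}(t) + \dot{C}(t) - \gamma\,\mathcal{F}\bigl(A(t)\,X(t)\,B(t) - C(t)\bigr).
\]

With a linear activation array, \dot{E} = -\gamma E implies that E(t) decays to zero exponentially; a power-sigmoid activation can yield faster convergence, consistent with the simulation findings summarized above. By contrast, a GNN of the kind compared here would typically minimize the scalar-valued energy function \varepsilon(t) = \|A X B - C\|_F^2 / 2 along its negative gradient, which highlights the matrix-valued versus scalar-valued distinction drawn in the abstract.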
