Abstract

Dynamic neural networks are efficient tools for solving algebraic equations. Among them, the gradient neural network (GNN) has the lowest model complexity. Conventional GNN models converge exponentially when dealing with static Lyapunov equations (LEs), but fail to track the exact solution when it changes over time. To fill this gap, in this paper we propose an improved GNN method that introduces additional nonlinearity into a traditional GNN model for handling time-varying LEs. It is shown that the proposed improved GNN (IGNN) method converges in finite time when applied to time-varying LEs, even in the presence of bounded additive time-varying noises. Simulation results demonstrate the efficacy and advantages of the proposed method over existing GNN methods and other dynamic neural network methods for solving time-varying LEs.
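To make the idea concrete, the following is a minimal sketch of the kind of GNN dynamic the abstract refers to, for the static LE A^T X + X A + Q = 0. It descends the energy E(X) = ½‖R‖_F² with residual R = A^T X + X A + Q, whose gradient is A R + R A^T; an optional elementwise activation `phi` stands in for the paper's added nonlinearity. The function name, parameters, and the specific activation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gnn_lyapunov(A, Q, gamma=10.0, dt=1e-3, steps=5000, phi=None):
    """Solve the Lyapunov equation A^T X + X A + Q = 0 by a GNN-style
    gradient flow (hypothetical sketch, not the paper's exact model).

    Energy: E(X) = 0.5 * ||R||_F^2,  R = A^T X + X A + Q.
    Dynamics (Euler-discretized): X' = -gamma * (A phi(R) + phi(R) A^T).
    phi=None gives the classic (linear) GNN; an elementwise nonlinear
    phi mimics the extra nonlinearity the abstract describes.
    """
    n = A.shape[0]
    X = np.zeros((n, n))
    if phi is None:
        phi = lambda R: R  # identity activation -> conventional GNN
    for _ in range(steps):
        R = A.T @ X + X @ A + Q          # current residual
        X -= gamma * dt * (A @ phi(R) + phi(R) @ A.T)  # gradient step
    return X

# Example with a Hurwitz A and positive-definite Q.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)
X = gnn_lyapunov(A, Q)
residual = np.linalg.norm(A.T @ X + X @ A + Q)
```

For a stable (Hurwitz) A the gradient flow drives the residual norm toward zero; the time-varying case in the paper replaces A(t) and Q(t) and relies on the added nonlinearity for finite-time tracking.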
