Abstract
This paper considers the problems of determining robust exponential stability and estimating the exponential convergence rate for delayed neural networks with parametric uncertainties and time delay. The relationship among the time-varying delay, its upper bound, and their difference is taken into account. Theoretical analysis shows that our result includes a previous result derived in the literature. As illustrations, the results are applied to several concrete models studied in the literature, and a comparison of results is given.
Highlights
Time delays are often encountered in various practical systems such as chemical processes, neural networks, and long transmission lines in pneumatic systems
A considerable number of sufficient conditions on the existence, uniqueness, and global asymptotic stability of the equilibrium point for neural networks with constant or time-varying delays have been reported under various assumptions; see the cited literature and references therein
Note that this paper focuses mainly on the effects of the maximum allowable delay bound (MADB) h and the maximum allowable exponential convergence rate (MAECR) α; a sketch of how such bounds are typically estimated follows below
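As a hypothetical sketch, not taken from the paper: in this literature the MADB is usually estimated by checking a delay-dependent stability criterion (an LMI) for increasing delay bounds until it fails. The function names estimate_madb and lmi_feasible below are illustrative placeholders; the actual LMI of the paper is not reproduced.

from typing import Callable

def estimate_madb(alpha: float,
                  lmi_feasible: Callable[[float, float], bool],
                  h_max: float = 10.0,
                  tol: float = 1e-4) -> float:
    """Largest delay bound h in [0, h_max] for which lmi_feasible(h, alpha)
    holds, found by bisection; lmi_feasible stands in for the paper's LMI test."""
    lo, hi = 0.0, h_max
    if not lmi_feasible(lo, alpha):
        return 0.0  # the criterion fails even for a vanishing delay
    if lmi_feasible(hi, alpha):
        return hi   # feasible on the whole search interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid, alpha):
            lo = mid   # still feasible: push the delay bound up
        else:
            hi = mid   # infeasible: pull the bound down
    return lo

# Toy usage with a stand-in feasibility test (purely illustrative):
# print(estimate_madb(0.5, lambda h, a: h + a <= 2.0))  # about 1.5

Bisection presumes that feasibility is monotone in h, which is the typical behavior of delay-dependent criteria; if that cannot be assumed, a grid search over h is the safer choice.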
Summary
Time delays are often encountered in practical systems such as chemical processes, neural networks, and long transmission lines in pneumatic systems. A considerable number of sufficient conditions on the existence, uniqueness, and global asymptotic stability of the equilibrium point for neural networks with constant or time-varying delays have been reported under various assumptions; see the cited literature and references therein. A free-weighting matrix approach has been employed to study the exponential stability problem for neural networks with a time-varying delay. How to overcome the aforementioned disadvantages of the integral inequality approach (IIA) is an important research topic in delay-dependent stability analysis and motivates the work of this paper on exponential stability. A global robust exponential stability criterion for delayed neural networks with time-varying delays is proposed. From the derived inequality and the Schur complement, it is easy to see that V̇(x_t) < 0 holds if R − X ≥ 0 and R − Y ≥ 0
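For context on the last step, a minimal statement of the standard Schur complement lemma (general background, not the paper's specific LMI): a symmetric block matrix is negative definite exactly when one diagonal block and the corresponding Schur complement are.

\[
S=\begin{bmatrix} A & B\\ B^{\mathsf T} & C \end{bmatrix}\prec 0
\quad\Longleftrightarrow\quad
C\prec 0 \ \text{and}\ A-B\,C^{-1}B^{\mathsf T}\prec 0 .
\]

This is the device that lets a quadratic negativity condition on the Lyapunov functional derivative be recast as a linear matrix inequality that solvers can check directly.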