Abstract

This paper investigates the H∞ performance state estimation problem for static neural networks with time-varying delays. A parameter-dependent reciprocally convex inequality (PDRCI) is presented, which encompasses some existing results as special cases. By dividing the estimation error of the activation function into two parts, an improved Lyapunov-Krasovskii functional (LKF) is constructed, in which the slope information of the activation function (SIAF) can be fully captured. Combining the PDRCI and the improved LKF, a new criterion is obtained that ensures the estimation error system is asymptotically stable with a prescribed H∞ performance. By using a decoupling principle, the estimator gain matrices are solved in terms of linear matrix inequalities (LMIs). Compared with some existing works, the restrictions on slack matrices are removed, which directly improves performance and reduces conservatism in the estimator solution. Two examples are provided to verify the advantages of the developed criterion.
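For context, the H∞ state estimation problem described above is typically posed for the standard delayed static neural network model. The following is a minimal sketch of that setup, not the paper's exact formulation; the symbols (system matrices $A$, $W$, output matrices $C$, $H$, gain $K$, delay $\tau(t)$, disturbance $w(t)$) are generic placeholders.

```latex
% Delayed static neural network with disturbance input:
%   \dot{x}(t) = -A x(t) + f\bigl(W x(t-\tau(t)) + J\bigr) + B_1 w(t),
%   y(t) = C x(t) + D w(t),        % measured output
%   z(t) = H x(t),                 % signal to be estimated
% with time-varying delay 0 \le \tau(t) \le \bar{\tau}.
%
% Luenberger-type state estimator with gain matrix K:
%   \dot{\hat{x}}(t) = -A \hat{x}(t)
%       + f\bigl(W \hat{x}(t-\tau(t)) + J\bigr)
%       + K\bigl(y(t) - C \hat{x}(t)\bigr),
%   \hat{z}(t) = H \hat{x}(t).
%
% With estimation error e(t) = x(t) - \hat{x}(t) and
% \tilde{z}(t) = z(t) - \hat{z}(t), the H-infinity criterion requires
% the error system to be asymptotically stable (w \equiv 0) and, for a
% prescribed level \gamma > 0 under zero initial conditions,
%   \|\tilde{z}\|_2 \le \gamma \|w\|_2 .
% The LMI conditions of the paper then yield the gain K.
```

The role of the LKF and the PDRCI in this framework is to bound the delay-dependent terms that arise when differentiating the functional along the error trajectories, which is where capturing the slope information of the activation function reduces conservatism.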
