Abstract

In this brief, an improved reciprocally convex inequality is presented to address the problem of $H_{\infty }$ performance state estimation for static neural networks. The improved reciprocally convex inequality yields a tight upper bound on the time derivative of the Lyapunov functional, from which a less conservative $H_{\infty }$ performance state estimation criterion is derived. The criterion is then employed to develop a method for designing suitable estimator gain matrices. A numerical example illustrates the effectiveness of the proposed method.
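For context, the classical reciprocally convex combination lemma (Park et al., 2011), which results of this kind refine, can be stated as follows; the exact form of the improved inequality used in this brief is not reproduced here:

```latex
% Classical reciprocally convex combination lemma (Park et al., 2011).
% For a scalar \alpha \in (0,1), vectors x_1, x_2, and symmetric positive
% definite matrices R_1, R_2, if there exists a matrix S such that
% \begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix} \succeq 0,
% then the reciprocally convex combination is bounded from below:
\frac{1}{\alpha} x_1^{T} R_1 x_1 + \frac{1}{1-\alpha} x_2^{T} R_2 x_2
\;\ge\;
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{T}
\begin{bmatrix} R_1 & S \\ S^{T} & R_2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.
```

Bounds of this type arise when splitting a delay interval in the Lyapunov functional's derivative; tightening them reduces the conservatism of the resulting stability or performance criterion.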
