Abstract

This study addresses asynchronous state estimation for generalized Markov jump memristive neural networks subject to a piecewise-homogeneous Markov process and a memory-based dynamic event-triggered protocol. To avoid the common assumption of time-invariant transition probabilities, a piecewise-homogeneous Markov chain is introduced, in which the variations of the transition probabilities are governed by a higher-level Markov process. Additionally, a memory-based dynamic event-triggered protocol is proposed that exploits the useful information contained in previously transmitted packets to achieve better estimation performance. Moreover, a hidden nonhomogeneous Markov model strategy is employed to characterize the mode mismatch between the estimator and the Markov jump memristive neural networks, which greatly reduces the complexity of processing the system's mode information. Finally, a simulation example is carried out to verify the effectiveness of the theoretical results.
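The two-level structure described above can be illustrated with a minimal sketch: a higher-level Markov chain selects, at each step, which transition-probability matrix currently governs the lower-level mode chain, so the mode process is homogeneous only piecewise. All matrix values and names below are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical transition matrix for the higher-level chain sigma over {0, 1}.
PI_HIGH = [[0.9, 0.1],
           [0.2, 0.8]]

# One hypothetical transition matrix for the mode chain r per higher-level state:
# the active matrix changes whenever sigma jumps, so r is piecewise homogeneous.
PI_LOW = {
    0: [[0.7, 0.3],
        [0.4, 0.6]],
    1: [[0.5, 0.5],
        [0.1, 0.9]],
}

def step(state, matrix, rng):
    """Draw the next state using the transition row of `matrix` for `state`."""
    row = matrix[state]
    return rng.choices(range(len(row)), weights=row)[0]

def simulate(steps, seed=0):
    """Simulate the joint evolution of (sigma, r) for `steps` time steps."""
    rng = random.Random(seed)
    sigma, r = 0, 0
    path = []
    for _ in range(steps):
        sigma = step(sigma, PI_HIGH, rng)    # higher-level jump
        r = step(r, PI_LOW[sigma], rng)      # mode jump under the current matrix
        path.append((sigma, r))
    return path

path = simulate(10)
```

The key design point is that `PI_LOW[sigma]` is re-selected at every step, so the statistics of the mode chain change whenever the higher-level process switches state.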
