Abstract In recent decades, neural network methods have been widely applied in many science and engineering fields. The Zhang neural network (ZNN), a special type of recurrent neural network proposed by Zhang et al., has been applied to solving various time-variant problems. ZNNs are usually used to process continuous-time signals in time-variant systems as continuous-time ZNN (CTZNN) models. Since digital devices and computers are widely used in science and engineering, it is necessary to develop a general method for discretizing CTZNN models into discrete-time ZNN (DTZNN) models. In previous work, the Euler forward difference and two types of three-step Zhang et al. discretization (ZeaD) formulas were applied to discretize CTZNN models. In this paper, a three-step general ZeaD formula based on Taylor expansion is designed to approximate the first-order derivative at the target point and to discretize CTZNN models for time-variant matrix inversion. The two existing types of three-step ZeaD formulas are special cases of the proposed general ZeaD formula. For the situation in which the time derivative of the objective matrix is unknown, two formulas for estimating the derivative are provided, and two corresponding three-step general DTZNN models are proposed. Theoretical analyses establish the stability and convergence of the three general DTZNN models for time-variant matrix inversion. Numerical experiments substantiate the efficacy and superiority of the three proposed general DTZNN models, whose steady-state residual errors agree with the theoretical results, in comparison with Newton iteration and the one-step DTZNN model. In addition, comparing the numerical results of the two general DTZNN models for the derivative-unknown situation with those of the derivative-known model shows that the steady-state residual errors of the former are slightly larger than those of the latter.
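For orientation, the sketch below illustrates the one-step (Euler forward difference) DTZNN baseline for time-variant matrix inversion that the abstract compares against, not the paper's proposed three-step general ZeaD models. It discretizes the standard CTZNN design formula A(t)Ẋ(t) = -Ȧ(t)X(t) - γ(A(t)X(t) - I) with step size τ; the 2x2 matrix A(t), the step size, and the design parameter γ are hypothetical choices for illustration only.

```python
import numpy as np

def dtznn_euler_step(X_k, A_k, A_dot_k, tau, gamma):
    """One Euler-forward (one-step) DTZNN update for time-variant matrix inversion.

    Discretizes the CTZNN model A(t) X_dot = -A_dot(t) X - gamma (A(t) X - I)
    with step size tau, then advances X by one Euler step.
    (Illustrative sketch; not the paper's three-step general ZeaD model.)
    """
    n = A_k.shape[0]
    E_k = A_k @ X_k - np.eye(n)                      # residual error A(t)X(t) - I
    X_dot_k = np.linalg.solve(A_k, -A_dot_k @ X_k - gamma * E_k)
    return X_k + tau * X_dot_k                       # Euler forward step

# Hypothetical example: track the inverse of A(t) = [[2 + sin t, cos t], [cos t, 2]],
# which is invertible for all t (its determinant stays >= 1).
tau, gamma = 0.01, 10.0
A = lambda t: np.array([[2 + np.sin(t), np.cos(t)], [np.cos(t), 2.0]])
A_dot = lambda t: np.array([[np.cos(t), -np.sin(t)], [-np.sin(t), 0.0]])

X = np.linalg.inv(A(0.0))                            # start from the true inverse at t = 0
t_grid = np.arange(0.0, 5.0, tau)
for t in t_grid:
    X = dtznn_euler_step(X, A(t), A_dot(t), tau, gamma)

residual = np.linalg.norm(A(t_grid[-1] + tau) @ X - np.eye(2))
print(f"steady-state residual ||A X - I||_F ~ {residual:.2e}")
```

In this baseline the steady-state residual error is of order O(τ); the point of the three-step ZeaD discretizations discussed in the paper is to obtain a higher-order (and hence smaller) steady-state residual for the same sampling period.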