Abstract

The concept presented in this paper builds on previous dynamical methods for realizing a time-varying matrix inversion. It is essentially a set of coupled ordinary differential equations (ODEs) that constitutes a recurrent neural network (RNN) model. These coupled ODEs form a universal modeling framework for matrix inversion: the proposed model converges to the exact inverse if the matrix is invertible, and otherwise to an approximate inverse. Although various methods exist for matrix inversion across science and engineering, most of them either assume that the time-varying matrix inversion is free of noise or require a denoising module before the inversion computation starts. In practice, however, the presence of noise is a serious problem, and the denoising process is computationally expensive and can violate the real-time requirements of the system. Hence, a new matrix-inversion method that inherently integrates noise cancelling is highly desirable. In this paper, such a combined/extended method for time-varying matrix inversion is proposed and investigated; it extends both the gradient neural network (GNN) and the Zhang neural network (ZNN) concepts. The new model is proven exponentially stable according to Lyapunov theory. Furthermore, compared to related previous methods (namely GNN, ZNN, the Chen neural network, and the integration-enhanced Zhang neural network, IEZNN), it has a much better theoretical convergence speed. Finally, all named models (the new one versus the previous ones) are compared through practical examples, and their respective convergence and error rates are measured. The proposed method is shown to have a better practical convergence rate than the other models, and it is proven to yield a very good approximation of the matrix inverse even in the presence of noise.
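The dynamical principle sketched in the abstract can be illustrated with a minimal numerical example. The following is a hedged sketch of the classical gradient-based (GNN) dynamics for inverting a constant matrix, not the paper's exact model: the state X(t) of the ODE dX/dt = -gamma * A^T (A X - I) descends the norm-based energy ||A X - I||^2 / 2 and converges to A^{-1} when A is invertible. The gain, step size, and forward-Euler integration are illustrative choices.

```python
import numpy as np

def gnn_inverse(A, gamma=10.0, dt=1e-3, steps=20000):
    """Integrate the gradient dynamics dX/dt = -gamma * A.T @ (A X - I)
    with a simple forward-Euler scheme (illustrative parameters)."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))                          # arbitrary initial state
    for _ in range(steps):
        X = X - dt * gamma * (A.T @ (A @ X - I))  # one Euler step
    return X

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
X = gnn_inverse(A)
print(np.max(np.abs(X @ A - np.eye(2))))          # residual near zero
```

For a fixed invertible A, the residual decays exponentially at a rate governed by gamma and the smallest singular value of A, which is exactly the "effective factor of convergence" limitation the highlights attribute to gradient-based dynamics.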

Highlights

  • Matrix inversion is extensively used in linear algebra

  • This method is called “gradient-based” dynamics and can be designed from norm-based energy functions [45,49]. The advantage of this model is its ease of implementation; however, due to the convergence factor visible in Equation (9), it takes time to converge to the solution of the problem, and this convergence rate also makes the model more sensitive to noise

  • Figure 7 sums up the results obtained when the different models are executed over time by displaying how the error converges towards zero; the intention is to hereby illustrate the convergence speed of all considered models
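For the time-varying case the highlights contrast with gradient dynamics, the Zhang (ZNN) design offers a useful illustration. The following is a hedged sketch, not the paper's combined model: with the error matrix E(t) = A(t) X(t) - I, ZNN imposes dE/dt = -gamma * E, which yields the implicit ODE A(t) dX/dt = -dA/dt X - gamma (A X - I). The example matrix A(t), its derivative, and the integration parameters are illustrative assumptions.

```python
import numpy as np

gamma, dt, T = 50.0, 1e-4, 2.0

def A_of(t):
    """Example time-varying matrix, invertible for all t (illustrative)."""
    return np.array([[2.0 + np.sin(t), 0.5],
                     [0.5,             2.0 + np.cos(t)]])

def dA_of(t):
    """Analytical time derivative of A(t)."""
    return np.array([[np.cos(t), 0.0],
                     [0.0,      -np.sin(t)]])

I = np.eye(2)
X = np.zeros((2, 2))                              # arbitrary initial state
t = 0.0
while t < T:
    A, dA = A_of(t), dA_of(t)
    rhs = -dA @ X - gamma * (A @ X - I)           # right-hand side of the ZNN ODE
    X = X + dt * np.linalg.solve(A, rhs)          # forward-Euler on the implicit ODE
    t += dt

# After the transient decays, X tracks the moving inverse A(t)^{-1}
print(np.max(np.abs(A_of(T) @ X - I)))
```

Unlike the gradient model, the derivative term -dA/dt X lets the state track the moving inverse with exponentially vanishing error, which is why ZNN is the natural starting point for the time-varying extensions compared in this paper.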



Introduction

Matrix inversion is extensively used in linear algebra (e.g., for solving linear equations). The above-mentioned good features of RNNs (e.g., flexibility, speed-up in the presence of multiple cores, etc.) motivate the use of this promising paradigm for accelerated solving of linear algebraic equations on either single-core or multi-core platforms. This is not a simple conversion, as we face completely different hardware computing frameworks with different parameters to be taken care of. Several published works have tried to solve similar problems with dynamical systems such as recurrent neural networks (RNN) [13,37,38,39,40,41,42,43] or artificial neural networks (ANN) [44,45,46,47]. While most of those works are related to RNNs, our work and the novel concepts presented in this paper have more potential to reliably provide faster convergence towards the solution of Equation (1). In the last section (i.e., Section 7), some concluding remarks summarize the quintessence of the key achievements in this work.

Related Works of Dynamical Neural Networks
The Gradient Method
Zhang Dynamics
Chen Dynamics
Our Concept
Model Implementation in SIMULINK
Illustrative Examples
Illustrative Example 1
Illustrative Example 2
Illustrative Example 3
Comparison of Our Novel Method with Previous Studies
Conclusions