Abstract

Fast extraction of eigenpairs of a real symmetric matrix is important in engineering. Neural networks can perform this computation in parallel and achieve high performance. This paper therefore proposes a very concise functional neural network (FNN) to compute the largest (or smallest) eigenvalue and one of its eigenvectors. When the FNN is converted into a differential equation, a component-wise analytic solution of this equation is obtained. Using this component solution, the convergence properties are fully analyzed. On the basis of this FNN, a method is designed that can compute the largest (or smallest) eigenvalue and one of its eigenvectors whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate the validity of the method. Compared with other neural networks designed for the same purpose, the proposed FNN is very simple and concise, so it is easy to implement.
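
The abstract does not state the FNN's dynamics explicitly, so the sketch below is only an illustration of the general idea rather than the paper's method: it uses a generic Rayleigh-quotient (Oja-type) flow integrated by forward Euler, together with a diagonal shift so the same iteration handles indefinite, positive definite, and negative definite matrices. The function name dominant_eigenpair, the choice of shift c, and the step sizes are hypothetical choices made for the illustration.

import numpy as np

def dominant_eigenpair(A, largest=True, steps=3000, dt=0.01, seed=0):
    # Approximate the largest (or smallest) eigenvalue of a real symmetric A
    # and one associated eigenvector via a continuous-time flow.
    n = A.shape[0]
    # Shift so the target eigenvalue becomes the dominant one of a
    # positive semidefinite matrix; c is a simple norm bound on the spectrum
    # (an assumed choice, not taken from the paper).
    c = np.linalg.norm(A, ord=np.inf)
    B = A + c * np.eye(n) if largest else c * np.eye(n) - A

    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)

    for _ in range(steps):
        # dx/dt = Bx - (x^T B x) x drives x toward the dominant eigenvector
        # of B (Rayleigh-quotient gradient flow on the unit sphere).
        Bx = B @ x
        x = x + dt * (Bx - (x @ Bx) * x)
        x /= np.linalg.norm(x)   # renormalize to control Euler drift

    lam_B = x @ (B @ x)          # Rayleigh quotient of the converged vector
    lam_A = lam_B - c if largest else c - lam_B   # undo the shift
    return lam_A, x

# Usage: compare against numpy's eigendecomposition on a random symmetric matrix.
M = np.random.default_rng(1).standard_normal((5, 5))
A = (M + M.T) / 2
lam, v = dominant_eigenpair(A, largest=True)
print(lam, np.max(np.linalg.eigvalsh(A)))

The shift is what lets one flow cover all three definiteness cases: adding c*I (or subtracting A from c*I) makes every eigenvalue nonnegative, so the flow always converges to the eigenvector of interest, and the shift is removed afterwards.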
