Abstract

The present paper deals with neural algorithms to learn the singular value decomposition (SVD) of data matrices. The neural algorithms utilized in the present research were developed by Helmke and Moore (HM) and take the form of two continuous-time differential equations over the special orthogonal group of matrices. The purpose of the present paper is to develop and compare different numerical schemes, in the form of two alternating learning rules, to learn the singular value decomposition of large matrices on the basis of the HM learning paradigm. The numerical schemes developed here are both first-order (Euler-like) and second-order (Runge-like). Moreover, a reduced Euler scheme is presented that consists of a single learning rule for one of the factors involved in the SVD. Numerical experiments on optical-flow estimation (a component of modern IoT technologies) in real-world video sequences illustrate the features of the novel learning schemes.
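For orientation, an HM-type SVD flow is often written as a pair of coupled gradient equations on the orthogonal factors that drive the transformed data matrix toward diagonal form; the precise signs, scaling, and target matrix below are assumptions of this summary rather than equations taken from the paper:

\dot{U} = U\,\operatorname{skew}\!\left(U^{\top} M V N^{\top}\right), \qquad
\dot{V} = V\,\operatorname{skew}\!\left(V^{\top} M^{\top} U N\right), \qquad
\operatorname{skew}(X) = \tfrac{1}{2}\left(X - X^{\top}\right),

where M denotes the data matrix, N a fixed diagonal matrix with distinct positive entries, and U, V the orthogonal factors being learned. The two alternating learning rules studied in the paper can be read as time discretizations of two such coupled equations.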

Highlights

  • The computation of the singular value decomposition (SVD) of a non-square matrix [1,2,3] plays a key role in a number of applications; among them, it is worth citing applications in automatic control [14], digital circuit design [15], time-series prediction [16], and image processing [17,18].

  • The neural algorithms utilized in the present research work to learn an SVD of a data matrix were developed by Helmke and Moore (HM) in [1].

  • An HM learning rule takes the form of two continuous-time differential equations over the special orthogonal group of matrices.



Introduction

The computation of the singular value decomposition (SVD) of a non-square matrix [1,2,3] plays a key role in a number of applications (see, for instance, [4,5,6,7,8,9,10,11,12,13]); among them, it is worth citing applications in automatic control [14], digital circuit design [15], time-series prediction [16], and image processing [17,18]. Efforts have been devoted to achieving the computation of the SVD of matrices in the artificial neural networks community [19,20,21]. The neural algorithms utilized in the present research work to learn an SVD of a data matrix were developed by Helmke and Moore (HM) in [1]. An HM learning rule takes the form of two continuous-time differential equations over the special orthogonal group of matrices. The purpose of the present research is to develop and compare different numerical schemes, in the form of two alternating learning equations, to learn the SVD of large matrices on the basis of the HM learning paradigm. The numerical schemes developed here are both first-order (Euler-like) and second-order (Runge-like).
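To make the idea of a first-order scheme concrete, the following is a minimal numerical sketch, not the authors' algorithm: it applies an explicit, Euler-like step to a Helmke-Moore-type gradient flow, alternates the updates of the two orthogonal factors, and reprojects each factor onto the orthogonal group through a QR decomposition. The cost function, step size eta, target matrix N, and all function names are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): a first-order, Euler-like
# discretization of a Helmke-Moore-type SVD flow with alternating updates
# of the two orthogonal factors and a QR-based reprojection that keeps
# each factor on the orthogonal group. Step size, target matrix N and the
# iteration count are illustrative assumptions.
import numpy as np

def skew(X):
    """Skew-symmetric part of a square matrix."""
    return 0.5 * (X - X.T)

def retract(Q):
    """Reproject onto the orthogonal group via a QR decomposition."""
    Qf, R = np.linalg.qr(Q)
    # Fix column signs (diag(R) > 0) so the retraction varies continuously.
    return Qf * np.sign(np.diag(R))

def hm_svd_euler(M, eta=1e-2, n_iter=5000):
    m, n = M.shape
    U = np.eye(m)
    V = np.eye(n)
    # Fixed diagonal "target" matrix with distinct entries (assumption).
    k = min(m, n)
    N = np.zeros((m, n))
    N[:k, :k] = np.diag(np.arange(k, 0, -1, dtype=float))
    for _ in range(n_iter):
        # Alternating learning rules: Euler step on U with V frozen, then on V.
        U = retract(U + eta * U @ skew(U.T @ M @ V @ N.T))
        V = retract(V + eta * V @ skew(V.T @ M.T @ U @ N))
    S = U.T @ M @ V  # approximately diagonal at convergence
    return U, S, V

# Example: compare the learned singular values with numpy's reference SVD.
M = np.random.default_rng(0).standard_normal((6, 4))
U, S, V = hm_svd_euler(M)
print(np.sort(np.abs(np.diag(S)))[::-1])
print(np.linalg.svd(M, compute_uv=False))
```

A second-order (Runge-like) variant would replace each explicit Euler step with a two-stage update of the same vector field before the QR reprojection, at the cost of extra matrix products per iteration.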

