Abstract

In previous contributions, the second author of this paper presented a new class of algorithms for orthonormal learning of linear neural networks with p inputs and m outputs, based on the equations describing the dynamics of a massive rigid frame on the Stiefel manifold. These algorithms exhibit good numerical stability, strong adherence to the constraint sub-manifold, and good controllability of the learning dynamics, but are not completely satisfactory from a computational-complexity point of view. In the recent literature, efficient methods of integration on the Stiefel manifold have been proposed by various authors; see, for example, (Phys. D 156 (2001) 219; Numer. Algorithms 32 (2003) 163; J. Numer. Anal. 21 (2001) 463; Numer. Math. 83 (1999) 599). Inspired by these approaches, in this paper we propose a new and efficient representation of the learning equations mentioned above, together with a new way to integrate them. Numerical experiments show that the new formulation leads to significant computational savings, especially when p ≫ m. The effectiveness of the algorithms is substantiated by experiments on principal subspace analysis and independent component analysis, carried out with both synthetic and real-world data.
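To make the constraint set concrete: the Stiefel manifold St(p, m) is the set of p × m matrices W with orthonormal columns, W^T W = I_m, and keeping a learning iterate on this set is what the integration methods above are designed for. The sketch below is not the paper's integrator; it is a minimal, standard illustration (assuming NumPy) of restoring orthonormality via a thin QR retraction, in the tall regime p ≫ m where per-step cost matters.

```python
import numpy as np

def stiefel_retract(W):
    """Retract a tall p-by-m matrix onto the Stiefel manifold
    St(p, m) = {W : W^T W = I_m} using a thin QR decomposition.
    Illustrative only; the paper proposes different, cheaper integrators."""
    Q, R = np.linalg.qr(W)          # thin QR: Q is p-by-m, R is m-by-m
    # Fix column signs so diag(R) >= 0, making the retraction well defined.
    Q = Q * np.sign(np.diag(R))
    return Q

# Example in the p >> m regime discussed in the abstract (hypothetical sizes).
rng = np.random.default_rng(0)
W = stiefel_retract(rng.standard_normal((1000, 5)))
print(np.allclose(W.T @ W, np.eye(5)))  # columns are orthonormal
```

A QR retraction like this costs O(p m^2), which is why, for p ≫ m, methods that work with m × m quantities rather than full p × p rotations give the kind of savings the abstract reports.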
