Abstract

Training neural networks with the Moore–Penrose (MP) inverse has recently gained attention owing to its noniterative nature. However, a significant drawback of MP-inverse-based learning is that memory consumption grows with the size of the dataset. In this article, based on a partitioning of the MP inverse, we propose a blockwise recursive MP inverse (BRMP) formulation for network learning that reduces memory consumption while preserving training effectiveness. BRMP is exactly equivalent to its batchwise counterpart, since no approximation or assumption is made in its derivation. Further exploration of this recursive method leads to a switching structure among three different scenarios, which also reveals that the well-known recursive least squares (RLS) method is a special case of the proposed technique. Subsequently, we apply BRMP to the training of radial basis function networks as well as multilayer perceptrons. The experimental validation covers both regression and classification tasks.
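To illustrate the blockwise-recursive idea, the sketch below implements the classical blockwise RLS recursion, which the abstract identifies as a special case of BRMP; it is not the paper's full BRMP derivation (whose switching structure also covers scenarios such as rank deficiency). The function name, the block interface, and the small ridge seed `reg` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def blockwise_rls(blocks, n_features, n_outputs, reg=1e-6):
    """Blockwise recursive least-squares estimate of W solving H W ≈ T.

    `blocks` yields (H_k, T_k): a chunk of hidden-layer activations and its
    targets. P tracks an approximation of (H^T H)^{-1}; the small ridge term
    `reg` only seeds the recursion. Only block-sized data is held in memory.
    """
    P = np.eye(n_features) / reg
    W = np.zeros((n_features, n_outputs))
    for H_k, T_k in blocks:
        # Woodbury-style gain: only a block-sized matrix is inverted per step.
        S = np.linalg.inv(np.eye(H_k.shape[0]) + H_k @ P @ H_k.T)
        K = P @ H_k.T @ S
        W += K @ (T_k - H_k @ W)   # correct W with the new block's residual
        P -= K @ H_k @ P           # update the running inverse estimate
    return W

# Toy usage: stream 1000 samples in blocks of 100 through a random "hidden layer".
rng = np.random.default_rng(0)
H = rng.standard_normal((1000, 50))
T = H @ rng.standard_normal((50, 3)) + 0.01 * rng.standard_normal((1000, 3))
blocks = ((H[i:i + 100], T[i:i + 100]) for i in range(0, 1000, 100))
W_hat = blockwise_rls(blocks, n_features=50, n_outputs=3)
```

The key design point conveyed by the abstract is that processing the data block by block keeps the per-step cost and memory bounded by the block size, while the recursion remains equivalent to the batch solution.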
