Abstract

In this work, we propose a paradigm for constructing a sparsely connected multi-layer perceptron (MLP). Using the Orthogonal Least Squares (OLS) method during training, the proposed approach prunes hidden units and output weights according to their usefulness, yielding a sparsely connected MLP. We formulate a second-order algorithm that gives a closed-form expression for the hidden-unit learning factors, thereby minimizing the number of hand-tuned parameters. The usefulness of the proposed algorithm is further substantiated by its ability to distinguish two combined datasets. On widely available datasets, the proposed algorithm's 10-fold testing error is shown to be lower than that of several competing algorithms. Inducing sparsity in a fully connected neural network, pruning of hidden units, Newton's method for optimization, and orthogonal least squares are the subject matter of the present work.
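The abstract does not spell out the exact pruning criterion. As a minimal sketch, one common way OLS is used for unit selection is to rank candidate hidden units by their error-reduction ratio (ERR) via classical Gram-Schmidt orthogonalization, then keep only the most useful units; the function name, threshold, and synthetic data below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def ols_rank_hidden_units(H, t):
    """Rank columns of H (hidden-unit activations, shape (N, Nh)) by their
    error-reduction ratio (ERR) for target t, via greedy Gram-Schmidt OLS.
    Returns the selection order and the ERR of each selected unit."""
    N, Nh = H.shape
    remaining = list(range(Nh))
    selected, errs, Q = [], [], []  # Q holds the orthogonal basis built so far
    for _ in range(Nh):
        best_err, best_j, best_q = -1.0, None, None
        for j in remaining:
            q = H[:, j].astype(float).copy()
            for u in Q:  # orthogonalize against already-selected directions
                q -= (u @ q) / (u @ u) * u
            qq = q @ q
            if qq < 1e-12:  # linearly dependent column: contributes nothing new
                continue
            err = (q @ t) ** 2 / (qq * (t @ t))  # fraction of ||t||^2 explained
            if err > best_err:
                best_err, best_j, best_q = err, j, q
        if best_j is None:
            break
        Q.append(best_q)
        selected.append(best_j)
        errs.append(best_err)
        remaining.remove(best_j)
    return selected, errs

# Usage sketch: prune units once 99% of the target energy is explained.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 8))                       # 8 candidate hidden units
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.5, 0.0, 0.0])
t = H @ w_true + 0.01 * rng.standard_normal(200)        # only units 0, 2, 5 matter
order, errs = ols_rank_hidden_units(H, t)
cum = np.cumsum(errs)
keep = [order[i] for i in range(len(order)) if i == 0 or cum[i - 1] < 0.99]
```

On this synthetic example the three genuinely contributing units are selected first and the remaining five are pruned, which is the behavior the greedy ERR ranking is designed to produce.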
