Abstract

A random vector functional-link (RVFL) network is a neural network composed of a randomised hidden layer and an adaptable output layer. Training such a network reduces to a linear least-squares problem, which can be solved efficiently. However, selecting an appropriate number of hidden nodes remains a critical issue, since an improper choice can lead to either overfitting or underfitting on the problem at hand. Additionally, small-sized RVFL networks are favoured in situations where computational considerations are important. For RVFL networks with a single output, unnecessary neurons can be removed adaptively using sparse training algorithms such as the Lasso; these algorithms, however, are suboptimal when the network has multiple outputs. In this paper, we extend prior ideas to devise a group sparse training algorithm that avoids the shortcomings of previous approaches. We validate our proposal on a large set of experimental benchmarks, and we analyse several state-of-the-art optimisation techniques for solving the overall training problem. We show that the proposed approach can obtain an accuracy comparable to standard algorithms, while at the same time producing extremely sparse hidden layers.
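To make the setting concrete, the sketch below illustrates the pipeline the abstract describes: a fixed randomised hidden layer followed by a group-sparse fit of the output layer, so that entire hidden neurons can be pruned across all outputs at once. This is a minimal illustration rather than the paper's algorithm: scikit-learn's MultiTaskLasso (an l2,1-penalised least-squares solver) stands in for the group-sparse optimisers analysed in the paper, and the data, sizes, and the rvfl_features helper are all hypothetical.

```python
# Minimal RVFL-with-group-sparsity sketch (illustrative, not the paper's method).
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)

def rvfl_features(X, W, b):
    """Randomised hidden layer: fixed weights W and biases b, sigmoid activation.
    Following the usual RVFL construction, the original inputs are concatenated
    with the hidden activations (direct input-output links)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.hstack([X, H])

# Toy multi-output regression problem (all sizes are illustrative).
n_samples, n_inputs, n_hidden, n_outputs = 200, 5, 100, 3
X = rng.standard_normal((n_samples, n_inputs))
Y = np.hstack([np.sin(X[:, :1]), X[:, 1:2] ** 2, X[:, 2:3]]) \
    + 0.01 * rng.standard_normal((n_samples, n_outputs))

# Hidden-layer parameters are drawn once at random and never trained.
W = rng.standard_normal((n_inputs, n_hidden))
b = rng.standard_normal(n_hidden)
Phi = rvfl_features(X, W, b)

# Group-sparse output layer: the outgoing weights of each hidden neuron
# (one coefficient column shared across all outputs) form a group, and the
# l2,1 penalty zeroes whole groups rather than individual weights.
model = MultiTaskLasso(alpha=0.05).fit(Phi, Y)

# A neuron can be pruned only if its weights to *all* outputs are zero.
active = np.any(model.coef_ != 0, axis=0)
print(f"active expansion nodes: {active[n_inputs:].sum()} / {n_hidden}")
```

The grouping is the key point: a plain Lasso applied independently per output may zero a neuron's weight for one output but not the others, so the neuron cannot actually be removed; penalising each neuron's outgoing weights jointly ensures that sparsity in the solution translates directly into a smaller hidden layer.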
