Abstract
Over the past decades, the radial basis function network (RBFN) has attracted extensive attention due to its simple network structure and powerful learning ability. Meanwhile, regularization methods have been widely applied to RBFNs to enhance network performance. A common choice is ℓ2 regularization, which improves stability and generalization ability but leads to dense networks. Another common choice is ℓ1 regularization, which improves the sparsity of the RBFN. A better strategy is elastic-net regularization, which combines the ℓ2 and ℓ1 penalties to improve stability and sparsity simultaneously. In multi-classification tasks, however, even elastic-net regularization can only prune redundant weights within nodes and cannot ensure sparsity at the node level. In this paper, we propose a generalized sparse RBFN (GS-RBFN) based on an extended elastic-net regularization to handle multi-classification problems. By integrating the Frobenius norm with the L2,1 norm, the extended elastic-net regularization achieves both stability and node-level sparsity of the RBFN for multi-classification problems, of which binary classification is a special case. To improve training efficiency on large-scale tasks, we further propose a parallel GS-RBFN (PGS-RBFN) that uses the matrix inversion lemma to accelerate the intensive computation. The alternating direction method of multipliers (ADMM) and its consensus variant are applied to train the proposed models, and we demonstrate their convergence for the corresponding optimization problems. Experimental results on multi-classification datasets illustrate the effectiveness and advantages of our algorithms in terms of accuracy, sparsity, and convergence.
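The node-level sparsity mechanism described above can be illustrated with a small sketch. In an RBFN for multi-classification, the output weights form a matrix whose rows correspond to hidden (RBF) nodes and whose columns correspond to classes; an L2,1 penalty shrinks entire rows to zero, pruning whole nodes rather than individual weights. The following is a minimal illustration (not the paper's exact ADMM subproblem) of the standard row-wise soft-thresholding operator, i.e. the proximal operator of the L2,1 norm, which ADMM-style solvers typically use for this penalty:

```python
import numpy as np

def l21_prox(W, lam):
    """Row-wise soft-thresholding: the proximal operator of lam * ||W||_{2,1}.

    Rows whose l2 norm falls below lam are zeroed out entirely, which is how
    the L2,1 penalty removes whole hidden nodes instead of single weights.
    Illustrative sketch only; the paper's exact update rules may differ.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

# Toy output-weight matrix: rows = hidden RBF nodes, columns = classes.
W = np.array([[3.0, 4.0],    # strong node (row norm 5.0)
              [0.1, 0.2],    # weak node   (row norm ~0.22)
              [0.0, 2.0]])   # moderate node (row norm 2.0)
W_sparse = l21_prox(W, lam=1.0)
print(W_sparse)
# The weak second row is zeroed out: the whole node is pruned.
```

A plain Frobenius-norm (ℓ2-type) penalty would instead shrink every entry toward zero without producing exact zeros, which is why combining both terms, as in the extended elastic-net, yields stability and node-level sparsity at the same time.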