Abstract

We propose an embedded (integrated) feature selection method based on neural networks with a Group Lasso penalty. Group Lasso regularization is applied to induce sparsity on the inputs to the network, i.e., to select useful features. Lasso-based feature selection with a multi-layer perceptron usually requires an additional set of weights, whereas our Group Lasso formulation does not. However, the Group Lasso penalty is non-differentiable at the origin, which may cause oscillations in numerical simulations and makes theoretical analysis difficult. To address this issue, four smoothing Group Lasso penalties are introduced. A rigorous convergence proof for the proposed algorithm is given under suitable assumptions. For implementation, a three-step algorithmic architecture is adopted. Experimental results on several datasets validate the theoretical results and demonstrate the competitive performance of the proposed method.
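To make the selection mechanism concrete, the following is a minimal sketch of a Group Lasso penalty over input-layer weight groups, together with one standard smoothing of the Euclidean norm at the origin. The abstract does not specify the paper's four smoothing functions, so the sqrt(||w||² + ε²) smoothing and the `eps` parameter below are illustrative assumptions, not necessarily the authors' choices.

```python
import numpy as np

def group_lasso_penalty(W, lam):
    # W: input-layer weight matrix of shape (n_features, n_hidden).
    # Each row W[i] is the group of weights leaving input feature i;
    # penalizing its Euclidean norm drives whole rows to zero,
    # i.e., removes the corresponding feature from the network.
    return lam * np.sum(np.linalg.norm(W, axis=1))

def smoothed_group_lasso_penalty(W, lam, eps=1e-3):
    # One common smoothing of the norm, which is non-differentiable
    # at the origin: replace ||w|| by sqrt(||w||^2 + eps^2), which is
    # differentiable everywhere and approaches the exact penalty as
    # eps -> 0. (Assumed for illustration; the paper introduces four
    # smoothing functions that are not given in the abstract.)
    sq_norms = np.sum(W**2, axis=1)
    return lam * np.sum(np.sqrt(sq_norms + eps**2))

# Example: input features whose weight rows shrink to (near) zero
# under the penalty can be pruned, i.e., are not selected.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 5))
W[3] = 0.0  # pretend feature 3 was zeroed out during training
print(group_lasso_penalty(W, lam=0.01))
print(smoothed_group_lasso_penalty(W, lam=0.01))
selected = np.linalg.norm(W, axis=1) > 1e-6
print("selected features:", np.flatnonzero(selected))
```

Grouping the penalty by row (one group per input feature) is what distinguishes this from a plain Lasso on individual weights: the whole group of outgoing connections from a feature is switched off together, so no auxiliary gating weights are needed.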
