Abstract

ℓ1 regularization is used in logistic regression to mitigate overfitting, and the resulting sparse coefficient estimates serve for feature selection. However, the challenge of this regularization is that the ℓ1 penalty is not differentiable, so standard gradient-based convex optimization algorithms do not apply directly. This paper presents a simple projection neural network for ℓ1-regularized logistic regression. In contrast to many solvers available in the literature, the proposed neural network requires neither extra auxiliary variables nor a smooth approximation, and, thanks to the projection operator, its per-iteration complexity is almost identical to that of gradient descent for logistic regression without the ℓ1 penalty. We also investigate the convergence of the proposed neural network using Lyapunov theory and show that it converges to a solution of the problem from any initial value. The proposed neural solution significantly outperforms state-of-the-art methods in execution time and is competitive in terms of accuracy and AUROC.
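The abstract does not spell out the network's dynamics, but the standard projection operator for an ℓ1 term is componentwise soft-thresholding. The sketch below is an illustrative proximal-gradient (projection-style) iteration under that assumption; the function names and hyperparameters are hypothetical, and this is not the authors' exact network.

```python
import numpy as np

def soft_threshold(z, tau):
    # Projection/proximal operator for the l1 term: shrinks each
    # coordinate toward zero by tau and zeroes out small entries.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def l1_logistic_projection_iteration(X, y, lam, step=0.1, n_iters=500):
    """Proximal-gradient sketch for min_w logloss(w) + lam * ||w||_1.

    X: (n, d) design matrix; y: (n,) labels in {0, 1}.
    Each step costs one logistic-loss gradient plus a cheap
    componentwise shrinkage (assumed projection form).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / n           # gradient of the logistic loss
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Tiny usage example on synthetic data (hypothetical).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = (1.0 / (1.0 + np.exp(-X @ w_true)) > rng.random(200)).astype(float)
w_hat = l1_logistic_projection_iteration(X, y, lam=0.05)
print(np.round(w_hat, 3))  # most coordinates come out exactly zero
```

Under this reading, the per-iteration cost is one gradient evaluation plus an O(d) shrinkage, which is consistent with the abstract's claim that the complexity nearly matches plain gradient descent.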
