Abstract

Sparse modeling has been highly successful in many real-world applications. While much interest has focused on convex regularization, recent studies show that nonconvex regularizers can outperform their convex counterparts in many situations. However, the resulting nonconvex optimization problems are often challenging, especially for composite regularizers such as the nonconvex overlapping group lasso. In this paper, using a recent mathematical tool known as the proximal average, we propose a novel proximal gradient descent method for optimization with a wide class of nonconvex and composite regularizers. Instead of directly solving the proximal step associated with a composite regularizer, we average the solutions from the proximal problems of the constituent regularizers. This simple strategy has guaranteed convergence and low per-iteration complexity. Experimental results on a number of synthetic and real-world data sets demonstrate the effectiveness and efficiency of the proposed optimization algorithm, as well as the improved classification performance resulting from the nonconvex regularizers.
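The core idea described above can be illustrated with a minimal sketch. Assuming a composite regularizer built from an \(\ell_1\) penalty and a (non-overlapping, for simplicity) group-lasso penalty, each iteration takes a gradient step on a smooth least-squares loss and then, instead of solving the composite proximal step directly, averages the proximal maps of the constituent regularizers. All function names, the equal weighting, and the problem setup here are illustrative assumptions, not the paper's exact algorithm or data:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal map of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_group(v, t, groups):
    # Proximal map of t * sum_g ||v_g||_2 (block soft-thresholding),
    # assuming non-overlapping index groups for simplicity.
    out = v.copy()
    for g in groups:
        nrm = np.linalg.norm(v[g])
        out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * v[g]
    return out

def prox_average(v, t, groups, w=(0.5, 0.5)):
    # Proximal average: a weighted average of the constituent
    # proximal solutions, used in place of the composite prox.
    return w[0] * prox_l1(v, t) + w[1] * prox_group(v, t, groups)

def pa_proximal_gradient(A, b, lam, groups, step, iters=200):
    # Proximal gradient descent on 0.5*||Ax - b||^2 + lam * r(x),
    # where the prox of the composite r is replaced by the
    # proximal average of its constituents.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient step on the smooth loss
        x = prox_average(x - step * grad, step * lam, groups)
    return x
```

Each constituent prox here has a cheap closed form, which is what keeps the per-iteration cost low: the averaging step costs no more than evaluating each simple prox once.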
