Abstract

L1-regularized logistic regression (L1-LR) is popular for classification problems. To accelerate its training on high-dimensional data, safe screening rules have recently been proposed. These rules safely eliminate inactive features from the data, greatly reducing the training cost of L1-LR. The screening power of such rules is determined by their safe regions, which are also their core technique. In this paper, we introduce a new safe feature elimination rule (SFER) for L1-LR. Compared to existing safe rules, the safe region of SFER is improved in two aspects: (1) a smaller sphere region is constructed by exploiting the strong convexity of the dual L1-LR problem twice; (2) multiple half-spaces, corresponding to the potentially active constraints, are added for further contraction. Both improvements enhance the screening ability of SFER. To control the complexity of SFER, an iterative filtering framework is given by decomposing the safe region into multiple "domes". In this way, SFER admits a closed-form solution, and features already identified are not scanned repeatedly. Experiments on ten benchmark data sets demonstrate that SFER outperforms existing methods in training efficiency.
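To illustrate the kind of test a safe region enables, here is a minimal sketch of the classical sphere-based screening check that SFER refines. The dual feasible set of L1-LR constrains each feature j by |x_j^T θ| ≤ 1, so if a sphere (center, radius) is known to contain the dual optimum, a feature whose worst-case correlation over the sphere stays below 1 can be safely discarded. The function name and the generic sphere construction are illustrative assumptions; SFER's tighter sphere and additional half-space constraints are not reproduced here.

```python
import numpy as np

def sphere_screen(X, center, radius):
    """Generic sphere-based safe screening test for L1-regularized models.

    Given a sphere (center, radius) guaranteed to contain the dual
    optimum theta*, feature j is provably inactive (zero weight at the
    optimum) whenever

        |x_j^T center| + radius * ||x_j||_2 < 1,

    since this bounds |x_j^T theta| over the whole sphere.
    Returns a boolean mask: True = feature can be safely eliminated.
    Note: this is only the classical sphere test; SFER builds a smaller
    sphere and intersects it with half-spaces for stronger screening.
    """
    scores = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

# Toy usage: a feature uncorrelated with the dual center is screened out,
# while a strongly correlated one survives.
X = np.array([[0.0, 2.0],
              [0.0, 2.0]])
center = np.array([0.5, 0.5])
mask = sphere_screen(X, center, radius=0.1)
```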
