Abstract
Feature selection and feature reduction are central problems in machine learning and pattern recognition. Many datasets are sparse, that is, most feature values are zero. For instance, in text classification based on the bag-of-words (BoW) or similar representations, there is usually a large number of features, many of which may be irrelevant (or even detrimental) to the classification task. This paper proposes a new unsupervised feature selection method for sparse data, suitable for both standard and binarized representations. Because it does not use class labels, the method is applicable in supervised, semi-supervised, and unsupervised learning settings. Experimental results on standard benchmarks show that the proposed method outperforms existing ones on both numeric floating-point and binary features: it yields efficient feature selection, reducing the number of features while simultaneously improving classification accuracy.
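The abstract does not describe the proposed algorithm itself. For context, a minimal sketch of one standard label-free (unsupervised) feature selection baseline on a binarized BoW matrix is shown below; variance thresholding, the function names, and the toy data are illustrative assumptions, not the paper's method:

```python
# Sketch only: a generic unsupervised feature selection baseline
# (variance thresholding), NOT the method proposed in the paper.
# It uses no class labels, so it fits supervised, semi-supervised,
# and unsupervised pipelines alike.

def feature_variances(X):
    """Per-feature variance of a matrix X given as a list of rows
    (binary 0/1 or numeric counts)."""
    n = len(X)
    d = len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    return [sum((row[j] - means[j]) ** 2 for row in X) / n
            for j in range(d)]

def select_features(X, threshold=0.0):
    """Indices of features whose variance exceeds `threshold`.
    Constant columns (variance 0) carry no information and are dropped."""
    return [j for j, v in enumerate(feature_variances(X)) if v > threshold]

# Toy binarized BoW matrix: 4 documents, 5 terms.
# Column 0 is always 1 and column 4 is always 0, so both are
# uninformative and should be removed.
X = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
]
print(select_features(X))  # -> [1, 2, 3]
```

Because only the feature matrix is inspected, this kind of criterion can be computed cheaply on sparse representations, which is the setting the paper targets.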