Abstract

Incremental feature selection can improve learning on data that accumulates over time. We focus on incremental feature selection based on rough sets, which, along with their generalizations (e.g., fuzzy rough sets), reduce dimensionality without requiring domain knowledge such as data distributions. By analyzing the basic concepts of fuzzy rough sets on incremental datasets, we propose incremental update mechanisms for the information measure. Moreover, we introduce a key instance set containing representative instances, which is used to select supplementary features when new instances arrive. Because the key instance set is much smaller than the whole dataset, the proposed incremental feature selection largely avoids redundant computation. We experimentally compare the proposed method with several non-incremental methods and two state-of-the-art incremental methods on a variety of datasets. The results demonstrate that the proposed method selects compact feature subsets in less computation time, especially on high-dimensional datasets.
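To make the style of method the abstract describes concrete, here is a minimal, illustrative sketch of fuzzy-rough feature selection: a per-feature fuzzy similarity, a lower-approximation dependency measure, greedy forward selection, and an incremental step that re-runs selection on a small key instance set plus the new batch instead of the full accumulated data. All function names, the min t-norm, the linear similarity, and the key-set handling are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative sketch only -- not the paper's algorithm. A generic
# fuzzy-rough dependency measure with greedy forward feature selection,
# plus a toy stand-in for the "key instance set" incremental step.

def feature_similarity(a, b, rng):
    """Fuzzy similarity of two values on one feature: 1 - normalized distance."""
    return max(0.0, 1.0 - abs(a - b) / rng) if rng else 1.0

def instance_similarity(x, y, feats, ranges):
    """Aggregate per-feature similarities with the min t-norm."""
    return min(feature_similarity(x[f], y[f], ranges[f]) for f in feats)

def dependency(data, labels, feats, ranges):
    """Fuzzy-rough dependency degree: mean lower-approximation membership.
    mu(x) = min over differently-labeled y of (1 - sim(x, y))."""
    if not feats:
        return 0.0
    total = 0.0
    for i, x in enumerate(data):
        mu = 1.0
        for j, y in enumerate(data):
            if labels[i] != labels[j]:
                mu = min(mu, 1.0 - instance_similarity(x, y, feats, ranges))
        total += mu
    return total / len(data)

def greedy_select(data, labels, n_feats, ranges, eps=1e-9):
    """Forward selection: repeatedly add the feature that most increases
    the dependency degree; stop when no feature gives a real gain."""
    selected, best = [], 0.0
    remaining = set(range(n_feats))
    while remaining:
        gains = {f: dependency(data, labels, selected + [f], ranges)
                 for f in remaining}
        f_best = max(gains, key=gains.get)
        if gains[f_best] <= best + eps:
            break
        best = gains[f_best]
        selected.append(f_best)
        remaining.discard(f_best)
    return selected

def incremental_update(key_data, key_labels, new_data, new_labels,
                       n_feats, ranges):
    """Toy incremental step: when a new batch arrives, re-select features on
    the small key instance set plus the batch, not the full history."""
    return greedy_select(key_data + new_data, key_labels + new_labels,
                         n_feats, ranges)
```

In this toy setup the cost of each update scales with the size of the key set plus the new batch, which is the intuition behind the speedups the abstract claims; the paper's actual contribution lies in how the information measure is updated incrementally and how the key instances are chosen.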
