Abstract

Recent technological progress has led to a rapid growth of high-dimensional datasets. This growth, together with the presence of irrelevant and redundant features, makes feature selection a challenging task. The main objective of feature selection is to reduce the dimensionality of such datasets by eliminating non-essential and irrelevant features, thereby improving the performance of learning algorithms. A major remaining challenge is that most feature selection methods select a single global feature subset that is applied across the entire sample space. Because each region of the sample space may be best characterized by its own set of features, such global methods can be inefficient. This paper presents a novel localized feature selection scheme in which the data are partitioned and processed in parallel for classification. In the proposed method, each region of the sample space is associated with its own distinct, optimized feature subset, and a separate classifier is learned from each region; test samples are then classified by the classifier of their region. Simulation results on several real-world datasets and multiple classifiers demonstrate that the proposed local feature selection outperforms global feature selection methods.

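The abstract describes the method only at a high level, so the following is a minimal sketch of the general idea of localized feature selection, not the authors' implementation. It assumes k-means clustering for partitioning the sample space, a univariate ANOVA filter (SelectKBest) for the per-region feature subsets, and logistic regression as the base classifier; all of these are illustrative choices rather than details given in the paper.

# Sketch: per-region feature selection and classification.
# Assumptions (not from the paper): k-means regions, ANOVA feature filter,
# logistic-regression base classifier, and that each region contains both classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=50, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_regions, k_features = 4, 10
km = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X_tr)

# Learn a distinct feature subset and a classifier for each region.
selectors, models = {}, {}
for r in range(n_regions):
    mask = km.labels_ == r
    sel = SelectKBest(f_classif, k=k_features).fit(X_tr[mask], y_tr[mask])
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X_tr[mask]), y_tr[mask])
    selectors[r], models[r] = sel, clf

# Classify each test sample with the classifier of its nearest region,
# using that region's own feature subset.
regions = km.predict(X_te)
y_pred = np.array([
    models[r].predict(selectors[r].transform(x.reshape(1, -1)))[0]
    for x, r in zip(X_te, regions)
])
print("localized feature selection accuracy:", (y_pred == y_te).mean())

A global baseline would instead fit one SelectKBest and one classifier on all of X_tr; the point of the local scheme is that the selected feature indices can differ from region to region.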