Abstract
Dynamic selection (DS) of classifiers has been explored by researchers because of its ability to obtain higher accuracy on low-sample data sets than majority voting. Little of the literature, however, has applied DS to high-dimensional data sets with substantially more features than samples. Since several studies have reported the benefits of applying feature selection methods to high-dimensional data sets, the following open research questions arise: 1. How do DS methods perform on such data sets? 2. Do they perform better than majority voting? 3. Does feature selection as a pre-processing step improve their performance? The performance of 21 DS methods was statistically compared against that of majority voting on 10 high-dimensional data sets, with and without a filter feature selection method. We found that majority voting is among the best-ranked classifiers and that none of the DS methods perform statistically better than it, with or without feature selection. Moreover, we demonstrated that feature selection does improve the performance of DS methods.
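The abstract does not include code; the snippet below is a minimal, self-contained sketch (not the authors' implementation) of the kind of comparison described: one dynamic selection method, Overall Local Accuracy (OLA), versus hard majority voting over a pool of classifiers on a high-dimensional data set, with and without a filter feature-selection step. The synthetic data set, the pool of 21 bagged decision trees, the neighbourhood size k, and the number of selected features are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: dynamic selection (OLA) vs. hard majority voting, with and without
# filter feature selection, on a synthetic "wide" data set (features >> samples).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier


def ola_predict(pool, X_dsel, y_dsel, X_query, k=7):
    """Overall Local Accuracy: for each query sample, select the classifier
    from the pool with the highest accuracy on the sample's k nearest
    neighbours in the dynamic-selection (DSEL) set."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_dsel)
    neigh_idx = nn.kneighbors(X_query, return_distance=False)
    preds = np.empty(len(X_query), dtype=y_dsel.dtype)
    for i, idx in enumerate(neigh_idx):
        local_acc = [clf.score(X_dsel[idx], y_dsel[idx]) for clf in pool]
        best = pool[int(np.argmax(local_acc))]
        preds[i] = best.predict(X_query[i:i + 1])[0]
    return preds


# Synthetic high-dimensional data (illustrative assumption).
X, y = make_classification(n_samples=200, n_features=2000, n_informative=30,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
# Hold out part of the training data as the DSEL set used for local accuracy.
X_tr, X_dsel, y_tr, y_dsel = train_test_split(X_train, y_train, test_size=0.5,
                                              random_state=0)

for use_fs in (False, True):
    if use_fs:
        # Filter feature selection (ANOVA F-test), fitted on training data only.
        fs = SelectKBest(f_classif, k=100).fit(X_tr, y_tr)
        tr, dsel, te = fs.transform(X_tr), fs.transform(X_dsel), fs.transform(X_test)
    else:
        tr, dsel, te = X_tr, X_dsel, X_test

    # Pool of 21 bagged decision trees (illustrative pool size).
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=21,
                            random_state=0).fit(tr, y_tr)

    # Hard majority voting over the pool's individual predictions.
    votes = np.stack([clf.predict(te) for clf in bag.estimators_])
    mv_pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    mv_acc = np.mean(mv_pred == y_test)

    # Dynamic selection (OLA) over the same pool.
    ds_acc = np.mean(ola_predict(bag.estimators_, dsel, y_dsel, te) == y_test)
    print(f"feature selection={use_fs}: majority voting={mv_acc:.3f}, OLA={ds_acc:.3f}")
```

Note the DSEL holdout: the pool is trained on one part of the training data and local accuracies are estimated on a separate held-out part, which is the usual arrangement for dynamic selection methods.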