Abstract

A major area of research is biomarker discovery using gene expression data. Such data is high-dimensional and often needs to be classified or clustered with different machine learning techniques for further analysis. An important preprocessing step is feature selection (FS), and many such methods have been devised. However, applying different FS techniques to the same dataset does not always produce the same results. In this work, we investigate the robustness of FS methods. Robustness is defined here as the stability of a given gene pool with respect to both the data and the FS method used. Our approach is to examine the feature subsets obtained when running diverse FS methods on different gene expression datasets. As a first step, 10 FS methods were executed on 2 different datasets. Based on the results obtained, 2 of these methods were further investigated using 10 different datasets. The effect of selecting an increasing number of features on the inter-method percentage similarity was also studied. Our results show that the studied methods exhibit a high degree of variability in the selected feature subsets, which differed both across methods on the same dataset and for the same method across datasets. The reason for this behaviour is not clear, and objective criteria for assessing the ideal (best) subset should be investigated further.
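The abstract does not specify how the inter-method percentage similarity was computed; a minimal sketch, assuming it is the percentage overlap between the top-k gene subsets selected by two FS methods (the gene names and subset sizes below are hypothetical placeholders, not results from the study):

    # Sketch of an inter-method similarity measure for selected gene subsets.
    # Assumption: similarity = percentage of shared genes relative to the
    # larger subset size. Gene lists are illustrative only.

    def percent_similarity(genes_a, genes_b):
        """Percentage of genes shared by two selected feature subsets."""
        set_a, set_b = set(genes_a), set(genes_b)
        k = max(len(set_a), len(set_b))
        return 100.0 * len(set_a & set_b) / k if k else 0.0

    # Hypothetical top-5 subsets returned by two FS methods on one dataset.
    subset_method_1 = ["BRCA1", "TP53", "EGFR", "MYC", "KRAS"]
    subset_method_2 = ["TP53", "EGFR", "PTEN", "MYC", "CDK4"]

    print(percent_similarity(subset_method_1, subset_method_2))  # 60.0

Repeating such a pairwise comparison while increasing the number of selected features would yield the similarity-versus-subset-size behaviour the study reports.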
