Abstract

Dimensionality-reduction techniques such as gene selection have become commonplace for handling the high dimensionality of bioinformatics datasets such as DNA microarray data. Dimensionality is reduced by identifying and removing redundant and irrelevant features (genes), leaving only an optimal subset of features for subsequent analysis. However, many feature selection techniques show poor stability (resistance to change in the underlying data). One approach to increasing the stability of feature subsets is ensemble feature selection, which first generates multiple ranked gene lists and then combines them with an aggregation function. While research has examined ensemble feature selection and its effect on gene list stability, little attention has been paid to an important choice in the process: the number of iterations (or repetitions) of feature selection. The computation time of ensemble feature selection depends heavily on the number of ranked lists generated: the more iterations, the more computation time is required. To study this, we evaluate the similarity among feature subsets generated from two different approaches to ensemble feature selection (data diversity and a hybrid approach). We calculate the similarity between the final ranked lists generated using 10, 20 and 50 iterations, using the mean aggregation function. Our results show that the similarity between 20 and 50 iterations is high enough for us to recommend using 20 iterations instead of 50, thereby saving the large amount of computation time that 50 iterations require.
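To make the procedure concrete, the sketch below illustrates data-diversity ensemble feature selection with mean-rank aggregation, followed by a comparison of the final ranked lists produced with 10, 20 and 50 iterations. It is a minimal sketch under stated assumptions: the univariate F-score ranker, bootstrap resampling, the Jaccard similarity of the top-k genes, and the synthetic dataset sizes are illustrative choices, not the exact configuration used in the study.

```python
# Minimal sketch of data-diversity ensemble feature selection with mean-rank
# aggregation. The ranker (univariate F-score), bootstrap resampling, top-k
# Jaccard similarity, and synthetic data are assumptions for illustration.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)

def rank_features(X, y):
    """Rank features by univariate F-score (rank 0 = most relevant)."""
    scores, _ = f_classif(X, y)
    scores = np.nan_to_num(scores)
    return np.argsort(np.argsort(-scores))  # per-feature rank

def ensemble_ranking(X, y, n_iterations):
    """Data-diversity ensemble: bootstrap the samples, rank each time,
    then combine the ranked lists with the mean aggregation function."""
    n_samples, n_features = X.shape
    rank_matrix = np.empty((n_iterations, n_features))
    for i in range(n_iterations):
        idx = rng.choice(n_samples, size=n_samples, replace=True)  # bootstrap
        rank_matrix[i] = rank_features(X[idx], y[idx])
    mean_ranks = rank_matrix.mean(axis=0)   # mean aggregation
    return np.argsort(mean_ranks)           # final ranked gene list

def jaccard_top_k(list_a, list_b, k):
    """Similarity of the two top-k feature subsets."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / len(a | b)

# Synthetic stand-in for a microarray dataset (few samples, many genes).
X = rng.standard_normal((60, 2000))
y = rng.integers(0, 2, size=60)
X[:, :20] += y[:, None]  # make the first 20 genes weakly informative

final_10 = ensemble_ranking(X, y, 10)
final_20 = ensemble_ranking(X, y, 20)
final_50 = ensemble_ranking(X, y, 50)

k = 50  # size of the final feature subset compared
print("similarity(10 vs 50 iterations):", jaccard_top_k(final_10, final_50, k))
print("similarity(20 vs 50 iterations):", jaccard_top_k(final_20, final_50, k))
```

In this sketch, a high similarity between the 20- and 50-iteration lists would mirror the paper's recommendation that 20 iterations suffice, at a fraction of the computation time of 50.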
