Abstract

This chapter introduces a stochastic feature subset selection (SFS) technique implemented via a parallel set of artificial neural networks. The initial step of SFS is the random assignment of the samples in the original dataset to a design set and a validation set. Once the design phase is complete, the validation set is used to externally validate the classification performance. The design set is further divided into training and monitoring sets. During the design phase, the classifier uses the training set to generate classification coefficients, and the monitoring set is subsequently used to assess the performance of those coefficients. The overall training and monitoring accuracy is computed as a weighted average across both sets as well as across each class. During an SFS run, the performance of each classification task is assessed using the selected fitness function. When the fitness exceeds the histogram fitness threshold, which is set to some value below the fitness-threshold stopping criterion, the histogram is incremented at the feature indices corresponding to the regions used by that classification task. This histogram is then used to generate a cumulative distribution function.
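The histogram-update loop described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: a nearest-centroid classifier stands in for the neural networks, the dataset, the subset size, and the histogram fitness threshold of 0.6 are all assumed values, and the fitness is the class-averaged accuracy weighted equally across the training and monitoring sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples, 10 features; only features 0-2 carry class information.
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + X[:, 1] + X[:, 2] > 0).astype(int)

def fitness(X_sub, y, rng):
    """Class-averaged accuracy, weighted equally across training and monitoring sets.

    A nearest-centroid classifier stands in for the neural-network classifier.
    """
    idx = rng.permutation(len(y))
    split = int(0.7 * len(y))          # assumed 70/30 training/monitoring split
    tr, mo = idx[:split], idx[split:]
    centroids = np.array([X_sub[tr][y[tr] == c].mean(axis=0) for c in (0, 1)])

    def class_avg_acc(rows):
        d = np.linalg.norm(X_sub[rows, None, :] - centroids[None, :, :], axis=2)
        pred = d.argmin(axis=1)
        # average per-class accuracy so each class contributes equally
        return np.mean([np.mean(pred[y[rows] == c] == c) for c in (0, 1)])

    return 0.5 * class_avg_acc(tr) + 0.5 * class_avg_acc(mo)

HIST_THRESHOLD = 0.6   # assumed: below the fitness-threshold stopping criterion
hist = np.zeros(n_features)

for _ in range(300):   # classification tasks within one SFS run
    subset = rng.choice(n_features, size=3, replace=False)  # assumed subset size
    if fitness(X[:, subset], y, rng) > HIST_THRESHOLD:
        hist[subset] += 1   # increment at the feature indices used by this task

# Cumulative distribution function over feature indices.
cdf = np.cumsum(hist) / hist.sum()
```

In this sketch the informative features accumulate counts faster than the noise features, so the resulting CDF rises steeply over the indices that repeatedly appear in high-fitness tasks.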
