Abstract

The performance of kernel-based feature transformation methods depends on the choice of kernel function and its parameters. Moreover, most of these methods do not take classification information or classification error into account when mapping features. In this paper, we propose to determine a kernel function for kernel principal component analysis (KPCA) and kernel linear discriminant analysis (KLDA) that incorporates classification information. To this end, we combine conventional kernel functions in linear and nonlinear forms using a genetic algorithm and genetic programming, respectively. As evolutionary fitness functions, we use the classification error and the mutual information between features and classes in the kernel feature space. The proposed methods are evaluated on datasets from the University of California Irvine (UCI) repository and on the Aurora2 speech database, using clustering validity indices and classification accuracy. The experimental results demonstrate that KPCA with a nonlinear combination of kernels based on genetic programming and the classification-error fitness function outperforms both conventional KPCA with a Gaussian kernel and KPCA with a linear combination of kernels.
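To illustrate the linear-combination variant described above, the following is a minimal sketch in NumPy: three base kernels (RBF, polynomial, linear) are mixed with a weight vector, KPCA is performed on the centred combined kernel, and the weights are tuned by a simplified mutation loop that minimizes a nearest-centroid classification error as the fitness. All data, kernel parameters, and the search loop are hypothetical stand-ins; the paper's actual genetic algorithm, datasets, and fitness definitions are richer than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class toy data (stand-in for the UCI datasets)
X = np.vstack([rng.normal(0.0, 0.5, (30, 2)),
               rng.normal(2.0, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

def rbf(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def poly(X, degree=2):
    return (X @ X.T + 1.0) ** degree

def linear(X):
    return X @ X.T

def combined_kernel(X, w):
    # Linear (convex) combination of base kernels, weighted by w
    return w[0] * rbf(X) + w[1] * poly(X) + w[2] * linear(X)

def kpca_project(K, n_components=2):
    # Centre the kernel matrix in feature space, then eigendecompose
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))

def classification_error(Z, y):
    # Nearest-centroid error in the projected space (proxy fitness)
    c0, c1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
    pred = (np.linalg.norm(Z - c1, axis=1)
            < np.linalg.norm(Z - c0, axis=1)).astype(int)
    return float((pred != y).mean())

# Simplified mutation loop standing in for the genetic algorithm:
# perturb the kernel weights and keep any candidate with lower error
best_w = np.array([1 / 3, 1 / 3, 1 / 3])
best_err = classification_error(kpca_project(combined_kernel(X, best_w)), y)
for _ in range(50):
    cand = np.abs(best_w + rng.normal(0, 0.1, 3))
    cand /= cand.sum()
    err = classification_error(kpca_project(combined_kernel(X, cand)), y)
    if err < best_err:
        best_w, best_err = cand, err
```

The nonlinear (genetic programming) variant would instead evolve expression trees over the base kernels, subject to validity constraints that keep the combined function a positive semi-definite kernel.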
