Abstract

This paper addresses the classifier ensemble problem with sparsity and diversity learning, a central issue in machine learning. The conventional approach to reducing the size and increasing the accuracy of a classifier ensemble is to formulate it as a convex quadratic programming relaxation and then solve that relaxation with existing methods for convex quadratic programming or with closed-form solutions. This paper presents a novel computational approach that solves the classifier ensemble problem with sparsity and diversity learning without any recourse to relaxation problems or their associated methods. We first show that the classifier ensemble problem can be expressed as the minimization of a sum of convex functions over the intersection of the fixed point sets of quasi-nonexpansive mappings. We then propose fixed point optimization algorithms for solving this minimization problem and prove that they converge to its solution; the proposed algorithms therefore solve the classifier ensemble problem with sparsity and diversity learning directly. Finally, we compare the performance of the proposed sparsity and diversity learning methods against an existing method in classification experiments on data sets from the UCI Machine Learning Repository and LIBSVM. The experimental results show that the proposed methods achieve higher classification accuracy than the existing method.
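To illustrate the general idea behind the abstract (not the authors' exact algorithm, whose details are not given here), the following sketch learns sparse ensemble weights by iterating a fixed point mapping: a gradient step on a smooth convex loss composed with the soft-thresholding proximal mapping of the l1 norm, which is a nonexpansive (hence quasi-nonexpansive) mapping whose fixed points are exactly the minimizers. The function names, the least-squares surrogate loss, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1; a nonexpansive (hence
    # quasi-nonexpansive) mapping that promotes sparsity.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_ensemble_weights(H, y, lam=0.1, iters=500):
    """Illustrative fixed point (proximal-gradient) iteration for
        min_w 0.5 * ||H w - y||^2 + lam * ||w||_1,
    where column j of H holds the {-1, +1} outputs of base classifier j.
    The minimizers are the fixed points of the iterated mapping."""
    n, m = H.shape
    # Step size 1/L, with L the Lipschitz constant of the gradient
    # (the squared spectral norm of H), guarantees convergence.
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    w = np.zeros(m)
    for _ in range(iters):
        grad = H.T @ (H @ w - y)              # gradient of the smooth loss
        w = soft_threshold(w - step * grad, step * lam)  # fixed point map
    return w
```

A quasi-nonexpansive mapping T satisfies ||T(x) - p|| <= ||x - p|| for every fixed point p of T; iterating such a mapping cannot move away from the solution set, which is what makes convergence proofs of this kind possible. The weight of an uninformative base classifier is driven exactly to zero by the thresholding step, shrinking the ensemble.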
