Abstract

Over the past few decades, numerous optimization-based methods have been proposed for solving the classification problem in data mining. Classic optimization-based methods do not consider attribute interactions toward classification. Thus, a novel learning machine is needed to provide a better understanding of the nature of classification when the interactions among contributions from various attributes cannot be ignored. These interactions can be described by a non-additive measure, while the Choquet integral serves as the mathematical tool to aggregate the values of the attributes with the corresponding values of the non-additive measure. As the main part of this research, a new nonlinear classification method with non-additive measures is proposed. Experimental results show that applying non-additive measures to the classic optimization-based models improves classification robustness and accuracy compared with some popular classification methods. In addition, motivated by the well-known Support Vector Machine (SVM) approach, we transform the primal optimization-based nonlinear classification model with the signed non-additive measure into its dual form by applying Lagrangian optimization theory and Wolfe's dual programming theory. As a result, the 2^n − 1 parameters of the signed non-additive measure can now be approximated with m (the number of records) Lagrange multipliers by applying the necessary conditions for the primal classification problem to be optimal. This method of parameter approximation is a breakthrough for determining a non-additive measure in practice when only a relatively small number of training cases is available (m ≪ 2^n − 1). Furthermore, a kernel-based learning method enables the nonlinear classifiers to achieve better classification accuracy. This research produces practically deliverable nonlinear models with the non-additive measure for the classification problem in data mining when interactions among attributes are considered.
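As a minimal sketch of the aggregation step the abstract describes, the discrete Choquet integral sorts the attribute values in ascending order and weights each increment by the measure of the set of attributes whose values are at least as large. The attribute names and measure values below are hypothetical toy numbers, not taken from the paper:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of non-negative attribute values
    with respect to a (possibly non-additive) set function mu.

    values: dict mapping attribute name -> value
    mu: dict mapping frozenset of attribute names -> measure value,
        with mu[frozenset()] == 0
    """
    attrs = sorted(values, key=values.get)  # ascending by value
    total, prev = 0.0, 0.0
    for i, a in enumerate(attrs):
        upper = frozenset(attrs[i:])  # attributes with value >= values[a]
        total += (values[a] - prev) * mu[upper]
        prev = values[a]
    return total

# Toy signed non-additive measure on two attributes (hypothetical):
# mu({x1, x2}) != mu({x1}) + mu({x2}) encodes an interaction.
mu = {
    frozenset(): 0.0,
    frozenset({"x1"}): 0.3,
    frozenset({"x2"}): 0.5,
    frozenset({"x1", "x2"}): 1.0,
}
print(choquet_integral({"x1": 0.2, "x2": 0.6}, mu))  # -> 0.4
```

Note that a non-additive measure over n attributes requires 2^n − 1 free values (one per non-empty subset), which is exactly the parameter count the dual formulation in the abstract reduces to m Lagrange multipliers.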

Highlights

  • Classic optimization-based methods formulate classification problems by modeling data with standard optimization techniques using objectives and constraints

  • The interactions can be described by a non-additive measure while the Choquet integral can serve as the mathematical tool to aggregate the values of attributes and the corresponding values of a non-additive measure

  • Experimental results show that applying non-additive measures on the classic optimization-based models improves the classification robustness and accuracy compared with some popular classification methods


Summary

Introduction

Classic optimization-based methods formulate classification problems by modeling data with standard optimization techniques using objectives and constraints. SVM separates data nonlinearly by introducing so-called nonlinear kernel functions. Although these optimization-based methods separate data linearly or nonlinearly, they do not consider contributions from interactions among attributes. We investigate the direction of constructing nonlinear objectives by developing kernel functions in nonlinear classification models, a technique employed by SVM. The rest of this paper is organized as follows: In Section 2, an overview of classic optimization-based classification methods is provided.
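The kernel functions mentioned above replace explicit nonlinear feature maps with an implicit inner product. A minimal sketch of one standard choice, the Gaussian (RBF) kernel, is shown below; the `gamma` value is an arbitrary illustrative setting, not one used in the paper:

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: an implicit inner product in a
    high-dimensional feature space, computed without ever forming
    that space explicitly."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return float(np.exp(-gamma * np.sum((x - z) ** 2)))

# Identical points have kernel value 1; it decays toward 0 with distance.
print(rbf_kernel([0.0, 1.0], [0.0, 1.0]))  # -> 1.0
print(rbf_kernel([0.0, 0.0], [1.0, 1.0]))
```

In a kernelized classifier, every inner product between data points in the optimization problem is replaced by such a kernel evaluation, which is what lets a linear decision procedure separate data nonlinearly.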

Preliminary
Non-Additive Measures
Definition of Non-Additive Measures
Choquet Integral
Optimization-Based Nonlinear Classifiers with Non-Additive Measures
Quadratic Non-Additive Optimization-Based Classification
Nonlinear Classifier with the Non-Additive Measure
Applications
Methods
