Abstract

In multi-dimensional classification (MDC), the semantics of objects are characterized by multiple class spaces from different dimensions. Most MDC approaches try to explicitly model the dependencies among class spaces in the output space. In contrast, the recently proposed feature augmentation strategy, which manipulates the feature space, has also been shown to be an effective solution for MDC. However, existing feature augmentation approaches only focus on designing holistic augmented features to be appended to the original features, while better generalization performance could be achieved by exploiting multiple kinds of augmented features. In this paper, we propose the selective feature augmentation strategy, which focuses on synergizing multiple kinds of augmented features. Specifically, by assuming that only part of the augmented features is pertinent and useful for each dimension's model induction, we derive a classification model which fully utilizes the original features while conducting feature selection over the augmented features. To validate the effectiveness of the proposed strategy, we generate three kinds of simple augmented features based on standard kNN, weighted kNN, and maximum margin techniques, respectively. Comparative studies show that the proposed strategy achieves superior performance against both state-of-the-art MDC approaches and its degenerate versions that use any single kind of augmented features.

Highlights

  • Traditional supervised learning tasks usually characterize the semantics of objects with one output variable, i.e., single-output learning, among which multi-class classification is one of the most important learning frameworks

  • In some real-world applications, it is better to use multiple output variables to characterize the rich semantics of objects, which results in the problem of multi-output learning[1]

  • Under the multi-dimensional classification (MDC) setting, each object is represented by a single instance while associated with multiple class variables, each corresponding to a specific class space characterizing the object's semantics along one specific dimension



Introduction

Traditional supervised learning tasks usually characterize the semantics of objects with one output variable, i.e., single-output learning, among which multi-class classification is one of the most important learning frameworks. The MDC problem can obviously be solved dimension by dimension, i.e., by training a multi-class classifier for each class space. However, this independent decomposition strategy ignores potential dependencies among class spaces, which might impair the generalization performance of the resulting model. The feature augmentation strategy, which manipulates the feature space, has been shown to be an effective solution for MDC. This strategy enriches the original feature space with a set of new features generated by well-established techniques, e.g., kNN [20] or deep learning [21]. The proposed strategy is abbreviated as SFAM, i.e., selective feature augmentation for multi-dimensional classification, in the remainder of this paper for brevity.
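To make the feature augmentation idea concrete, the following is a minimal sketch (not the authors' SFAM model) of kNN-based augmented features in the spirit of [20]: for each instance, the class counts among its k nearest neighbors in every class space are appended to the original feature vector. The function name and the brute-force distance computation are illustrative choices, not part of the paper.

```python
import numpy as np

def knn_augmented_features(X, Y, k=5):
    """Sketch of kNN-based feature augmentation for MDC.

    X: (n, d) feature matrix; Y: (n, q) label matrix, one column per
    class space. For each instance, counts how often each class occurs
    among its k nearest neighbors (excluding itself) and appends these
    counts to the original features.
    """
    n = X.shape[0]
    # Brute-force pairwise Euclidean distances (fine for a sketch;
    # a KD-tree or ball tree would be used in practice).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)              # exclude the instance itself
    neighbors = np.argsort(dists, axis=1)[:, :k]

    augmented = [X]
    for j in range(Y.shape[1]):                  # one class space per dimension
        classes = np.unique(Y[:, j])
        counts = np.zeros((n, len(classes)))
        for i in range(n):
            for c_idx, c in enumerate(classes):
                counts[i, c_idx] = np.sum(Y[neighbors[i], j] == c)
        augmented.append(counts)
    return np.hstack(augmented)                  # original features first
```

Each augmented block for a dimension with m classes adds m count features whose row sums equal k; a selective strategy such as the one proposed in the paper would then decide, per dimension, which of these augmented features to retain.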

Related work
Technical details of SFAM
Experiments
Experimental setup
Evaluation metrics
Experimental results
Further analysis
Conclusions

