Abstract

In image classification, traditional kernels or feature mapping functions of the Support Vector Machine (SVM) use discriminative features without considering the true nature of the data. Our work in this paper is motivated by the need to consider the intrinsic distribution of L1-normalized histograms and to develop a flexible feature mapping technique that combines histogram-based features with distribution-based density features. The proposed mapping technique encodes prior knowledge about the data, which provides a flexible representation and thus increases the discriminative power of the classifier. Such flexibility is achieved through the explanatory capabilities of the Dirichlet, generalized Dirichlet and Beta-Liouville distributions for modelling proportional data. In addition, we present a general framework to estimate the parameters of these distributions using a maximum likelihood estimation (MLE) approach. Experimental results show that the proposed technique increases the effectiveness of SVM kernels for different computer vision tasks such as natural scene recognition, satellite image classification and human action recognition in videos.
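The abstract mentions estimating the distribution parameters by maximum likelihood. As a minimal sketch of one piece of such a framework, the snippet below shows a method-of-moments estimator for Dirichlet parameters, which is commonly used to initialize the MLE fixed-point iteration; the function name and the sample data are illustrative, not taken from the paper.

```python
def dirichlet_moment_match(samples):
    # Method-of-moments estimate of Dirichlet parameters alpha, often used
    # to initialize an MLE fixed-point iteration (illustrative sketch, not
    # the paper's exact procedure). Each sample must lie on the simplex.
    n = len(samples)
    d = len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    # Empirical variance of the first component.
    var0 = sum((s[0] - mean[0]) ** 2 for s in samples) / n
    # Common precision s from the mean/variance relation of component 0:
    # Var(x_0) = mean_0 * (1 - mean_0) / (s + 1)  =>  solve for s.
    s = mean[0] * (1.0 - mean[0]) / var0 - 1.0
    return [s * m for m in mean]

# Three hypothetical 2-dimensional proportion vectors.
samples = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]]
alpha = dirichlet_moment_match(samples)
```

A full MLE would refine this initial estimate with an iterative update; the moment-matching step alone already captures both the mean proportions and the concentration of the data.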

Highlights

  • Appropriate and accurate representation of the data for classification models is one of the existing problems in machine learning

  • Similar to [5], for image classification the best score is reported for each kernel; for action recognition, average scores with standard deviations are reported for all kernels

  • Since our approach performs feature mapping after combining discriminative features with distribution-based features, we assume that the feature-pair similarity values in the similarity matrix for the generalized Dirichlet distribution are hard to separate after solving the dual form


Summary

INTRODUCTION

Appropriate and accurate representation of the data for classification models is one of the existing problems in machine learning. A popular image representation is the Bag of Visual Words (BoVW), which essentially quantizes similar patches of an image to their corresponding cluster centers, known as the codebook [2], [3]. Modelling such data after normalization in a probabilistic manner must satisfy the constraints of non-negativity and unit sum. The Dirichlet, generalized Dirichlet and Beta-Liouville distributions can model this type of data to obtain prior information that can be used as a feature; hyperparameters of the classifiers serve as this prior information. Another approach is to select features that convey the most relevant information about the data or the task. For SVM, input data are represented as points in a high-dimensional space, and this representation needs to be linearly separable for the model to work properly.
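The constraints above (non-negativity and unit sum) mean an L1-normalized BoVW histogram lies on the probability simplex, exactly the support of a Dirichlet density. A minimal sketch of the idea of combining the histogram with a distribution-based density feature follows; the bin counts and Dirichlet parameters are illustrative assumptions, not values from the paper.

```python
import math

def l1_normalize(hist):
    # Convert a raw BoVW count histogram into proportions
    # (non-negative entries summing to 1).
    total = sum(hist)
    return [h / total for h in hist]

def dirichlet_log_pdf(x, alpha):
    # log p(x | alpha) for a Dirichlet density; x must lie on the simplex
    # with strictly positive entries.
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    return log_norm + sum((a - 1.0) * math.log(xi) for a, xi in zip(alpha, x))

# Hypothetical 4-bin visual-word histogram for one image.
counts = [12, 3, 7, 2]
x = l1_normalize(counts)
alpha = [2.0, 1.0, 1.5, 1.0]  # illustrative Dirichlet parameters

# Combined representation: histogram proportions plus a density-based feature,
# which is then passed to the SVM feature mapping.
features = x + [dirichlet_log_pdf(x, alpha)]
```

The appended log-density acts as the "prior information" feature: histograms that are typical under the fitted distribution score high, atypical ones score low, giving the classifier an extra discriminative dimension.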

RELATED WORKS
GENERALIZED DIRICHLET DISTRIBUTION
FEATURE MAPPING
Optimization
EXPERIMENTAL RESULTS
CONCLUSION
