Abstract

Generative score spaces have recently received increasing attention due to their state-of-the-art performance in a wide range of recognition tasks. These methods model the distribution of the training data with probabilistic generative models and derive a feature for each sample from those models. The derived feature encodes information about the sample, the hidden variables, and the model parameters, providing a staged way to combine the strength of generative models in inferring hidden information with that of discriminative models in classification. The underlying premise is that the hidden information carried by the latent variables of generative models is informative for classification. In this paper, we propose a general extension of existing score space methods that exploits the class label, which encodes rich discriminative information, when deriving feature mappings. This is achieved by extending the regular generative models to class-conditional models over both the observed variables and the class label, and deriving the feature mapping from these extended models. The resulting methods take simple and intuitive forms: they are weighted versions of the existing methods, where the weights come from Bayesian inference of the class label. Empirical evaluation on two typical generative models and six datasets shows significant improvements over existing methods.
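To make the weighting idea concrete, the following is a minimal Python sketch, assuming per-class diagonal Gaussians as the generative models and the Fisher score (the gradient of the log-likelihood with respect to the mean) as the base feature mapping. The choice of model, score function, and every name below (fit_class_gaussians, weighted_score_map, and so on) are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_class_gaussians(X, y, n_classes):
        """Fit one diagonal Gaussian per class, plus class priors."""
        models, priors = [], np.zeros(n_classes)
        for c in range(n_classes):
            Xc = X[y == c]
            priors[c] = len(Xc) / len(X)
            models.append((Xc.mean(axis=0), Xc.var(axis=0) + 1e-6))
        return models, priors

    def fisher_score(x, mean, var):
        """Gradient of log N(x; mean, diag(var)) w.r.t. the mean:
        a standard score-space feature for a Gaussian model."""
        return (x - mean) / var

    def class_posteriors(x, models, priors):
        """Bayesian inference of the class label: p(c | x) ∝ p(x | c) p(c)."""
        liks = np.array([multivariate_normal.pdf(x, mean=m, cov=np.diag(v))
                         for m, v in models])
        joint = liks * priors
        return joint / joint.sum()

    def weighted_score_map(x, models, priors):
        """Concatenate per-class Fisher scores, each weighted by the
        class posterior p(c | x) -- a posterior-weighted version of an
        existing score-space mapping, in the spirit of the abstract."""
        post = class_posteriors(x, models, priors)
        parts = [p * fisher_score(x, m, v)
                 for p, (m, v) in zip(post, models)]
        return np.concatenate(parts)

    # Usage on toy two-class data:
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    models, priors = fit_class_gaussians(X, y, n_classes=2)
    phi = weighted_score_map(X[0], models, priors)  # 4-dim feature

The derived feature phi would then be fed to a discriminative classifier such as a linear SVM, which is the staged generative-then-discriminative pipeline the abstract describes.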
