Abstract

From the perspective of the probabilistic framework, a generative model first learns the joint probability distribution from the data and then derives the conditional probability distribution, typically converging faster. A discriminative model, in contrast, learns the conditional probability distribution directly from the data and thus often achieves higher accuracy. As an effective combination of the two, the generative-discriminative hybrid model integrates their advantages. However, existing methods must divide the original features into two independent feature spaces in order to train the two models. This feature division not only increases the time complexity of the model but also weakens the expressive power of the original feature space. To solve this problem, this paper proposes a feature augmentation-based method for constructing a generative-discriminative hybrid model. First, the method uses the generative model to learn the conditional probability distribution. Then, it augments the learned conditional probabilities as new features appended to the original feature space. Finally, it trains the discriminative model in the augmented feature space and predicts the final classification result. The new method offers several advantages: it mixes the models through feature augmentation, requires no feature division, has low time complexity, and enhances the expressive power of the original feature space. Experimental results on 36 classical UCI benchmark datasets show that the new method is not only effective and universal but also conforms to the bias-variance trade-off.
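To make the three-step procedure concrete, here is a minimal sketch of the feature-augmentation idea. The abstract does not name specific models, so Gaussian Naive Bayes (generative) and logistic regression (discriminative) are assumptions chosen for illustration:

```python
# Sketch of the feature-augmentation hybrid described in the abstract.
# Assumptions (not fixed by the abstract): GaussianNB as the generative
# model, LogisticRegression as the discriminative model, Iris as data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: the generative model learns the joint distribution and yields
# class posteriors P(y | x).
gen = GaussianNB().fit(X_train, y_train)

# Step 2: augment the learned posteriors as new features, keeping the
# original feature space intact (no feature division).
X_train_aug = np.hstack([X_train, gen.predict_proba(X_train)])
X_test_aug = np.hstack([X_test, gen.predict_proba(X_test)])

# Step 3: train the discriminative model in the augmented space and
# predict the final classification result.
disc = LogisticRegression(max_iter=1000).fit(X_train_aug, y_train)
print("Accuracy:", disc.score(X_test_aug, y_test))
```

In this sketch both models see the full original feature space; only the posterior columns are added, which is what avoids the time-complexity and expressiveness costs of splitting the features between the two models.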
