Abstract

Minimum Classification Error (MCE) training, which has become a widely used standard for discriminative classifier training, is characterized by a smooth, sigmoid-shaped classification error count loss. The smoothness of this loss function increases training robustness to unseen samples, allows training to closely approximate the ultimate goal of minimum classification error probability, and leads to accurate classification of unseen samples. However, few principled methods have been developed for controlling this smoothness; it is usually determined through many repetitions of the experimental setup, and this trial-and-error practice has hindered the wider adoption of MCE training. To alleviate this long-standing problem, we propose a new MCE training method that automatically determines the loss smoothness. The proposed method is based on a Parzen-estimation-based reformulation of MCE training, and the degree of loss smoothness is set so that the Parzen estimate accurately approximates the unknown true distribution of the one-dimensional misclassification measure, whose integral over the positive domain corresponds to the classification error probability. Through systematic experiments, we show that the proposed method efficiently yields a classification accuracy that nearly matches the best accuracy obtained by the conventional trial-and-error repetition of smoothness settings.
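The following is a minimal sketch, not the paper's actual procedure, of the ideas the abstract describes: a sigmoid-smoothed 0-1 loss applied to a one-dimensional misclassification measure, with the smoothness parameter tied to a Parzen-window bandwidth estimated from the training samples' misclassification measures. The misclassification-measure form, the function names, and the use of Silverman's rule of thumb for the bandwidth are illustrative assumptions; the paper's criterion for matching the Parzen estimate to the true distribution may differ.

```python
import numpy as np

def misclassification_measure(scores, label):
    """d = -g_label + soft-max over competing class scores (a common MCE form; assumed here)."""
    competing = np.delete(scores, label)
    return -scores[label] + np.log(np.mean(np.exp(competing)))

def sigmoid_loss(d, alpha):
    """Smooth classification error count loss; larger alpha -> closer to a hard step function."""
    return 1.0 / (1.0 + np.exp(-alpha * d))

def parzen_bandwidth(d_samples):
    """Hypothetical smoothness selection: Silverman's rule of thumb on the
    one-dimensional misclassification-measure samples (an assumption, not the
    paper's criterion)."""
    d_samples = np.asarray(d_samples, dtype=float)
    n = len(d_samples)
    return 1.06 * np.std(d_samples) * n ** (-1.0 / 5.0)

# Usage sketch: compute d for every training sample, estimate a bandwidth h,
# set the loss smoothness as alpha = 1 / h, and minimize the mean sigmoid loss
# over the classifier parameters by gradient descent.
d_values = [misclassification_measure(s, y) for s, y in [(np.array([1.2, 0.4, -0.3]), 0)]]
h = parzen_bandwidth(d_values)
alpha = 1.0 / h
empirical_loss = np.mean([sigmoid_loss(d, alpha) for d in d_values])
```

Under this reading, the positive-domain mass of the Parzen estimate built from the d samples plays the role of the smoothed empirical error probability, which is why a bandwidth that makes the estimate accurate also fixes the loss smoothness.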
