Abstract

In this paper, we present a new approach to minimum classification error (MCE) training of pattern classifiers with quadratic discriminant functions. First, a so-called sample separation margin (SSM) is defined for each training sample and then used to define the misclassification measure in the MCE formulation. The computation of the SSM can be cast as a nonlinear constrained optimization problem and solved efficiently. Experimental results on a large-scale isolated online handwritten Chinese character recognition task demonstrate that SSM-based MCE training not only decreases the empirical classification error, but also pushes the training samples away from the decision boundaries, thereby achieving good generalization. Compared with conventional MCE training, an additional 7% to 18% relative error rate reduction is observed in our experiments.

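For concreteness, the following is a minimal, hypothetical sketch (not taken from the paper) of how a sample separation margin of this kind might be computed for Gaussian quadratic discriminant functions: the margin is treated as the signed Euclidean distance from a training sample to the decision boundary between its true class and a rival class, and the boundary point is found with a generic equality-constrained optimizer. The function names, class parameterization, and use of scipy.optimize.minimize are illustrative assumptions, not the authors' implementation.

# Hypothetical illustration (assumption, not the paper's code): a sample
# separation margin as the distance from a sample x0 to the decision
# boundary g_true(x) = g_rival(x) between two quadratic discriminants
#   g_i(x) = -0.5 (x - mu_i)^T Sigma_i^{-1} (x - mu_i) - 0.5 log|Sigma_i| + log prior_i
# The boundary point is found by a generic constrained optimizer.

import numpy as np
from scipy.optimize import minimize


def quadratic_discriminant(x, mu, sigma_inv, log_det_sigma, log_prior):
    """Gaussian quadratic discriminant score for one class."""
    diff = x - mu
    return -0.5 * diff @ sigma_inv @ diff - 0.5 * log_det_sigma + log_prior


def sample_separation_margin(x0, true_cls, rival_cls):
    """Signed distance from x0 to the boundary g_true(x) = g_rival(x).

    Cast as:  minimize ||x - x0||^2  subject to  g_true(x) - g_rival(x) = 0.
    The sign of g_true(x0) - g_rival(x0) indicates whether x0 is on the
    correct side of the boundary (positive margin) or not (negative margin).
    """
    def objective(x):
        return np.sum((x - x0) ** 2)

    def boundary(x):
        return (quadratic_discriminant(x, *true_cls)
                - quadratic_discriminant(x, *rival_cls))

    # Start slightly off the sample so the equality constraint is active.
    res = minimize(objective, x0 + 1e-3,
                   constraints={'type': 'eq', 'fun': boundary})
    distance = np.linalg.norm(res.x - x0)
    return np.sign(boundary(x0)) * distance


if __name__ == "__main__":
    d = 2
    # Two classes with identity covariance and equal priors (toy setup).
    make_cls = lambda mu: (np.array(mu), np.eye(d), 0.0, np.log(0.5))
    true_cls, rival_cls = make_cls([0.0, 0.0]), make_cls([3.0, 0.0])
    x0 = np.array([1.0, 0.5])
    print("SSM:", sample_separation_margin(x0, true_cls, rival_cls))

In this toy setup the boundary is the vertical line x = 1.5, so the sample at (1.0, 0.5) has a positive margin of 0.5; an MCE criterion built on such a margin penalizes samples whose margin is small or negative, which is consistent with the abstract's claim of pushing training samples away from the decision boundaries.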