Abstract

Minimum Classification Error (MCE) training, a popular recent discriminative training method, aims to develop high-performance classifiers efficiently by minimizing a smooth (differentiable in the classifier parameters) classification error count loss. The smoothness makes convenient gradient-based minimization methods, such as the probabilistic descent method, applicable. However, gradient-based methods do not guarantee global minimization; what they pursue is essentially local minimization, and this locality may limit the performance achievable with MCE training. To alleviate this problem, we apply a global optimization method, Real-Coded Genetic Algorithms (RCGA), to MCE training and investigate its effectiveness experimentally. The results show that the benefits of RCGA-based MCE training are limited, and that conventional MCE training using the probabilistic descent method is better suited to classifier development based on minimizing the smooth classification error count loss.
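
For concreteness, the following is a minimal sketch of the sigmoid-smoothed loss and the probabilistic descent update commonly used in MCE training; the symbols (discriminant functions g_j, smoothing constants α and η, step size ε_t) follow the widely used MCE formulation and are illustrative assumptions here, not definitions taken from this paper.

```latex
% Sketch of the standard sigmoid-smoothed MCE loss (illustrative;
% the smoothing constants used in the paper are not stated in the
% abstract and are assumed here).
\begin{align*}
  % Misclassification measure for a class-k sample x among M classes;
  % \eta > 0 controls how strongly competing discriminants g_j are
  % weighted against the correct-class discriminant g_k:
  d_k(x;\Lambda) &= -g_k(x;\Lambda)
    + \frac{1}{\eta}\log\!\Bigl[\frac{1}{M-1}
      \sum_{j \neq k} e^{\eta\, g_j(x;\Lambda)}\Bigr] \\
  % Sigmoid smoothing of the 0-1 error count; larger \alpha makes the
  % loss approach the exact (non-differentiable) error count:
  \ell_k(x;\Lambda) &= \frac{1}{1 + e^{-\alpha\, d_k(x;\Lambda)}} \\
  % Probabilistic descent: a stochastic gradient step on one randomly
  % drawn training sample x_t, with decreasing step size \varepsilon_t:
  \Lambda_{t+1} &= \Lambda_t
    - \varepsilon_t\, \nabla_{\Lambda}\, \ell_{k_t}(x_t;\Lambda_t)
\end{align*}
```

Because ℓ_k is differentiable in Λ, the stochastic gradient step above is well defined everywhere; this is the property the smoothing is meant to provide. A genetic algorithm such as RCGA, by contrast, searches the same loss surface without using gradient information.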
