Abstract

In our earlier work [1] on parameter learning in pattern recognition, it was found that estimates converge to nontrue values in the presence of labeling errors. The present work describes a possible remedy: rejecting those training samples that do not lie within a certain neighborhood of the current estimate of the mean. The convergence of this class of restrictive updating procedures in the presence of wrongly labeled samples is studied, and its estimates are compared with those of [1]. It is established that, in the presence of labeling errors, the estimates of the proposed restrictive updating procedure are always asymptotically closer to the respective true values than the estimates in [1], provided that certain conditions are satisfied. A set of three-class bivariate data and speech data are also used to demonstrate these features.
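The idea of a restrictive update can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: a running-mean estimate accepts a sample only if it falls within a fixed-radius neighborhood of the current estimate, limiting the influence of mislabeled samples. The function name, the fixed `radius` parameter, and the assumption of an initial estimate near the true mean are all illustrative choices.

```python
import numpy as np

def restrictive_mean_update(samples, init_mean, radius):
    """Sketch of a restrictive updating rule: update a running mean,
    rejecting any sample outside a radius-`radius` neighborhood of the
    current estimate (a simple guard against labeling errors)."""
    mean = np.asarray(init_mean, dtype=float)
    accepted = 0
    for x in samples:
        x = np.asarray(x, dtype=float)
        if np.linalg.norm(x - mean) <= radius:  # restrictive acceptance test
            accepted += 1
            mean += (x - mean) / accepted       # standard running-mean update
    return mean, accepted
```

With mislabeled samples drawn from a distant class, the restricted estimate stays near the true mean while an unrestricted sample mean is pulled toward the contaminating class, mirroring the asymptotic comparison made in the abstract.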
