Abstract

Confidence scoring can assist in determining how to use imperfect handwriting-recognition output. We explore a confidence-scoring framework for post-processing recognition output for two purposes: deciding when to reject the recognizer's output, and detecting when to change recognition parameters, e.g., to relax a word-set constraint. Several confidence scores, including likelihood ratios and posterior probabilities, are applied to a Hidden-Markov-Model (HMM) based on-line recognizer. Receiver-operating-characteristic curves reveal that we successfully reject 90% of word-recognition errors while rejecting only 33% of correctly recognized words. For isolated digit recognition, we achieve 90% correct rejection while limiting false rejection to 13%.
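A minimal sketch, not the authors' implementation, of how posterior-probability and likelihood-ratio confidence scores and a rejection threshold might be computed from per-hypothesis HMM log-likelihoods; the function names, flat-prior assumption, and numeric values are illustrative:

```python
import numpy as np

def posterior_confidence(log_likelihoods):
    """Posterior-style confidence: softmax over per-hypothesis HMM
    log-likelihoods, assuming a flat prior over candidate words."""
    shifted = log_likelihoods - np.max(log_likelihoods)  # numerical stability
    probs = np.exp(shifted)
    return probs / probs.sum()

def likelihood_ratio_confidence(log_likelihoods):
    """Confidence as the log-likelihood ratio (margin) between the
    best and second-best hypotheses; a larger margin means higher confidence."""
    top2 = np.sort(log_likelihoods)[-2:]
    return top2[1] - top2[0]

def reject(confidence, threshold):
    """Reject the recognizer's top choice when confidence falls below
    the operating threshold chosen from the ROC curve."""
    return confidence < threshold

# Illustrative scores for three candidate words from an HMM decoder.
log_liks = np.array([-152.3, -158.9, -161.2])
post = posterior_confidence(log_liks)
print(post[np.argmax(log_liks)])            # confidence of the top hypothesis
print(reject(post.max(), threshold=0.7))    # True -> reject the output
```

Sweeping the threshold over a labeled development set and plotting correct-rejection rate against false-rejection rate yields the ROC curves used to choose an operating point such as the 90%/33% trade-off reported above.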
