Abstract

There are two popular families of statistical models for dealing with sequences, and in particular with handwriting signals, whether on-line or off-line: the well-known generative hidden Markov models and the more recently proposed discriminative hidden conditional random fields. One key issue in such modeling frameworks is handling variability efficiently. The traditional approach consists in first removing as much signal variability as possible in the preprocessing stage, and then using more complex models; for instance, in the case of hidden Markov models, one increases the number of states and the size of the Gaussian mixtures. We focus here on another kind of approach, where the probability distribution implemented by the models depends on a number of additional contextual variables that are assumed fixed, or to vary slowly, along a sequence. The context may stand for emotion features in speech recognition, physical features in gesture recognition, gender, age, etc. We propose a framework for deriving Markovian models that make use of such contextual information. This yields new models that we call contextual hidden Markov models and contextual hidden conditional random fields. We detail learning algorithms for both models and investigate their performance on the IAM off-line handwriting dataset.
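To make the abstract's central idea concrete, the following is a minimal Python sketch of a context-dependent emission distribution for a Markov model. The abstract does not specify the paper's actual parameterization, so everything here is an illustrative assumption: the names mu0, W, and sigma2 are hypothetical, and the linear shift of a Gaussian mean by a context vector theta is just one simple way a distribution can depend on contextual variables that stay fixed along a sequence.

import numpy as np

def contextual_gaussian_logpdf(x, theta, mu0, W, sigma2):
    # Log-density of observation x under a hypothetical contextual
    # Gaussian emission N(x; mu0 + W @ theta, sigma2 * I): the mean
    # is shifted linearly by the context vector theta (e.g. writer
    # attributes such as gender or age, encoded numerically).
    mu = mu0 + W @ theta                  # context-dependent mean
    d = x.shape[0]
    diff = x - mu
    return -0.5 * (d * np.log(2.0 * np.pi * sigma2) + diff @ diff / sigma2)

# Toy usage: a 3-D observation with a 2-D context vector that is
# held fixed for the whole sequence, as the abstract assumes.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                    # one frame of the signal
theta = np.array([1.0, -0.5])             # fixed contextual variables
mu0 = np.zeros(3)                         # context-free base mean
W = rng.normal(size=(3, 2))               # context-to-mean mapping
print(contextual_gaussian_logpdf(x, theta, mu0, W, 1.0))

In a contextual hidden Markov model along these lines, each hidden state would own its own (mu0, W, sigma2), so the same context vector re-shapes every state's emission distribution at once instead of requiring extra states or larger mixtures to absorb the variability.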
