Abstract

In this paper we consider the problem of classifying a sequence of observations as having been generated by one of two known hidden Markov models (HMMs). We use a classifier that minimizes the probability of error (i.e., the probability of misclassification), and we assess its performance by computing the a priori probability of error (before any observations are made). This probability, as a function of the length of the observation sequence, can be obtained by summing the probability of misclassification over all possible observation sequences, weighted by their corresponding probabilities. To avoid the high complexity associated with computing the exact probability of error, we establish an upper bound on it and derive necessary and sufficient conditions under which this bound tends to zero exponentially in the number of observation steps. We focus on classification between two HMMs that have the same language, which is the most difficult case to characterize; our approach extends easily to any two arbitrary HMMs. The bound we obtain can also be used to approximate the dissimilarity between the two given HMMs.
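To make the brute-force computation described in the abstract concrete, the following is a minimal Python sketch (not taken from the paper; the model parameters, function names, and the assumption of equal priors are all illustrative). It scores every length-T observation sequence under both HMMs with the forward algorithm and accumulates the mass misclassified by the MAP rule; the cost grows as the number of output symbols raised to the power T, which is exactly the exponential enumeration the paper's upper bound is designed to avoid.

```python
import itertools

import numpy as np


def seq_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM with initial distribution pi,
    state-transition matrix A, and emission matrix B)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()


def exact_error_probability(hmm1, hmm2, n_symbols, T, prior1=0.5):
    """A priori probability of error of the MAP classifier for length-T
    observation sequences, by brute force (n_symbols**T terms)."""
    p_err = 0.0
    for obs in itertools.product(range(n_symbols), repeat=T):
        p1 = prior1 * seq_likelihood(*hmm1, obs)
        p2 = (1.0 - prior1) * seq_likelihood(*hmm2, obs)
        # The MAP rule picks the model with the larger weighted likelihood,
        # so the smaller of the two is the mass that gets misclassified.
        p_err += min(p1, p2)
    return p_err


# Two illustrative 2-state, binary-output HMMs (parameter values made up).
hmm1 = (np.array([0.5, 0.5]),
        np.array([[0.9, 0.1], [0.2, 0.8]]),
        np.array([[0.8, 0.2], [0.3, 0.7]]))
hmm2 = (np.array([0.5, 0.5]),
        np.array([[0.6, 0.4], [0.5, 0.5]]),
        np.array([[0.5, 0.5], [0.4, 0.6]]))

print(exact_error_probability(hmm1, hmm2, n_symbols=2, T=6))
```

Tracking how this quantity decays as T grows is what the paper's exponential bound addresses without enumerating sequences.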
