Abstract

Since the early days of digital communication, hidden Markov models (HMMs) have been used routinely in speech recognition, natural language and image processing, and bioinformatics. An HMM (X_i, Y_i)_{i≥1} assumes the observations X_1, X_2, ... to be conditionally independent given an "explanatory" Markov process Y_1, Y_2, ..., which itself is not observed; moreover, the conditional distribution of X_i depends solely on Y_i. Central to the theory and applications of HMMs is the Viterbi algorithm, which finds a maximum a posteriori estimate q_{1:n} = (q_1, q_2, ..., q_n) of Y_{1:n} given the observed data x_{1:n}. Maximum a posteriori paths are also called Viterbi paths or alignments. Recently, attempts have been made to study the behavior of Viterbi alignments of HMMs with two hidden states as n tends to infinity. It has indeed been shown that in some special cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions. This work proves the existence of infinite Viterbi alignments for virtually any HMM with two hidden states.
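As a concrete illustration of the finite-n computation whose limiting behavior the paper studies, the sketch below implements the standard Viterbi recursion for a two-state HMM in log space. All parameters (the transition matrix, initial distribution, and Gaussian emission densities) are illustrative assumptions for the example and are not taken from the paper.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Maximum a posteriori (Viterbi) path for an HMM.

    log_pi : (K,)   log initial distribution of Y_1
    log_A  : (K, K) log transition matrix, log_A[i, j] = log P(Y_{t+1}=j | Y_t=i)
    log_B  : (n, K) log emission densities, log_B[t, k] = log p(x_t | Y_t=k)
    Returns q_{1:n}, a most probable hidden-state path given x_{1:n}.
    """
    n, K = log_B.shape
    delta = np.empty((n, K))            # delta[t, k]: best log score of a path ending in state k at time t
    psi = np.zeros((n, K), dtype=int)   # back-pointers to the best previous state

    delta[0] = log_pi + log_B[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: come from state i, move to state j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[t]

    q = np.empty(n, dtype=int)
    q[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):      # backtrack along the stored pointers
        q[t] = psi[t + 1, q[t + 1]]
    return q

# Illustrative two-state example (parameters assumed for demonstration only).
rng = np.random.default_rng(0)
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])    # two hidden states
means = np.array([0.0, 3.0])                # emissions x_t | Y_t=k ~ N(means[k], 1)
x = rng.normal(means[rng.integers(0, 2, size=50)], 1.0)
log_B = -0.5 * (x[:, None] - means[None, :]) ** 2 - 0.5 * np.log(2 * np.pi)
print(viterbi(log_pi, log_A, log_B))
```

The question addressed by the paper is how such finite Viterbi paths behave as n grows, i.e. whether they stabilize into a well-defined infinite alignment.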
