Abstract
Observable operator models (OOMs) generalize hidden Markov models (HMMs) and can be represented in a structurally similar matrix formalism. The mathematical theory of OOMs gives rise to a family of constructive, fast, and asymptotically correct learning algorithms, whose statistical efficiency, however, depends crucially on the optimization of two auxiliary transformation matrices. This optimization task is nontrivial; indeed, even formulating computationally accessible optimality criteria is not easy. Here we derive how a bound on the modeling error of an OOM can be expressed in terms of these auxiliary matrices, which in turn yields an optimization procedure for them and finally affords us a complete learning algorithm: the error-controlling algorithm. Models learned by this algorithm have an assured error bound on their parameters. The performance of this algorithm is illuminated by comparisons with two types of HMMs trained by the expectation-maximization algorithm, with the efficiency-sharpening algorithm, another recently proposed learning algorithm for OOMs, and with predictive state representations (Littman & Sutton, 2001) trained by methods representing the state of the art in that field.