Abstract

In this paper we study the problem of approximating a Markov process with a large number of states by another one with fewer states. While there are many ways to do this, we focus on a method known as 'aggregation', that is, combining states of the original Markov process in an attempt to reduce the size of the state space. The problem of 'optimal' aggregation is formulated as one of minimizing the relative entropy rate between the two Markov processes, using the results in a companion paper. An exact solution to the optimization problem is intractable, as it requires solving a combinatorial optimization problem. As an alternative, we present a computationally feasible heuristic that produces sub-optimal aggregations. The same approach is then extended to hidden Markov models. At present there is no closed-form formula for the relative entropy (or Kullback-Leibler divergence) rate between two hidden Markov processes. Hence we study the joint state and output process, which is Markov, and aggregate that instead. In such an approach, the resulting aggregated process is in general not a joint state-output process, and even when it is, it need not be a hidden Markov process. We therefore suggest a procedure for ensuring that the aggregated process is indeed a hidden Markov process.
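To make the objective concrete, the following is a minimal sketch of how a candidate aggregation can be scored by the relative entropy rate D(P || Q) = sum_i pi_i sum_j P_ij log(P_ij / Q_ij) between the original chain P and the chain obtained by aggregating P over a partition and lifting the reduced chain back to the original state space. The pi-weighted aggregation and lifting rule, the partition map `phi`, and all function names are illustrative assumptions, not the construction of the paper or its companion; the paper's heuristic for searching over partitions is likewise not reproduced here.

```python
# A minimal sketch, assuming the standard relative entropy rate formula for
# Markov chains and a pi-weighted aggregation/lifting; these choices are
# illustrative assumptions, not the paper's construction.
import numpy as np


def stationary_distribution(P):
    """Stationary distribution pi of an irreducible row-stochastic matrix P."""
    n = P.shape[0]
    # Solve pi P = pi together with sum(pi) = 1 as a least-squares system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi


def relative_entropy_rate(P, Q):
    """Relative entropy (Kullback-Leibler divergence) rate between two Markov
    chains on the same state space; assumes Q[i, j] > 0 wherever P[i, j] > 0."""
    pi = stationary_distribution(P)
    mask = P > 0
    terms = np.zeros_like(P)
    terms[mask] = P[mask] * np.log(P[mask] / Q[mask])
    return float(np.sum(pi[:, None] * terms))


def aggregate_and_lift(P, phi, m):
    """Aggregate an n-state chain P according to the partition map phi
    (phi[i] = group of state i, in 0..m-1), then lift the m-state chain back
    to the original state space so the aggregation can be scored against P."""
    n = P.shape[0]
    pi = stationary_distribution(P)
    V = np.zeros((n, m))
    V[np.arange(n), phi] = 1.0            # membership matrix: V[i, k] = 1 iff phi[i] = k
    pi_bar = V.T @ pi                     # stationary mass of each group
    P_bar = (V.T * pi) @ P @ V / pi_bar[:, None]   # aggregated transition matrix
    W = (pi[:, None] * V) / pi_bar[None, :]        # within-group weights (columns sum to 1)
    lifted = V @ P_bar @ W.T              # lifted[i, j] = P_bar[phi[i], phi[j]] * pi_j / pi_bar[phi[j]]
    return P_bar, lifted


# Example: score one candidate 2-group aggregation of a 4-state chain.
P = np.array([[0.60, 0.30, 0.05, 0.05],
              [0.30, 0.60, 0.05, 0.05],
              [0.05, 0.05, 0.60, 0.30],
              [0.05, 0.05, 0.30, 0.60]])
phi = np.array([0, 0, 1, 1])
P_bar, lifted = aggregate_and_lift(P, phi, m=2)
print("aggregated chain:\n", P_bar)
print("relative entropy rate D(P || lifted):", relative_entropy_rate(P, lifted))
```

Scoring each candidate partition this way is exactly what makes exact optimization combinatorial: the number of partitions grows rapidly with the state space, which is why a sub-optimal search heuristic is used instead.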
