Abstract

A continuous-time Markov chain is observed in additive Gaussian white noise. To the well-known problem of continuously estimating the current state of the chain, we introduce the additional option of continuously varying the sampling rate, as long as some restriction (or cost) on the average sampling rate is satisfied. The optimal solution to this "dynamic sampling" problem is presented and analyzed in closed form for the two-state symmetric case. It is shown that the resulting dynamic sampling procedure has a much lower asymptotic average error rate than the one obtained when sampling at a constant rate. Alternatively, the dynamic sampling procedure can provide the same error rate using a much lower average sampling rate. The relative efficiency of the dynamic sampling procedure may in fact tend to infinity in some extreme cases.
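The setting of the abstract can be sketched in a small discrete-time simulation. Everything below is an illustrative assumption rather than the paper's closed-form optimal procedure: the chain parameters, noise level, Euler discretization of the filter, and in particular the simple threshold rule (sample only while the posterior is uncertain) standing in for the optimal dynamic sampling policy.

```python
import numpy as np

def simulate(policy, steps=20000, lam=0.1, sigma=0.7, dt=0.1, seed=0):
    """Simulate a two-state symmetric chain (states +1/-1, switching rate
    lam) observed through Gaussian noise, tracking the posterior
    p = P(state = +1).  `policy(p)` decides whether to sample this step.
    Returns (fraction of time the MAP estimate is wrong, sampling rate)."""
    rng = np.random.default_rng(seed)
    x, p = 1, 0.5
    errors = samples = 0
    for _ in range(steps):
        # chain dynamics: switch with probability lam*dt per small step
        if rng.random() < lam * dt:
            x = -x
        # filter prediction step: dp = lam * (1 - 2p) dt
        p += lam * (1.0 - 2.0 * p) * dt
        if policy(p):
            samples += 1
            y = x + sigma * rng.standard_normal()
            # Bayes update with Gaussian likelihoods for states +1 and -1
            l_plus = np.exp(-0.5 * ((y - 1.0) / sigma) ** 2)
            l_minus = np.exp(-0.5 * ((y + 1.0) / sigma) ** 2)
            p = p * l_plus / (p * l_plus + (1.0 - p) * l_minus)
        # MAP estimate of the current state
        x_hat = 1 if p >= 0.5 else -1
        if x_hat != x:
            errors += 1
    return errors / steps, samples / steps

# constant-rate benchmark: sample on every other step
tick = {"n": 0}
def constant_policy(p):
    tick["n"] += 1
    return tick["n"] % 2 == 0

# illustrative dynamic policy: spend samples only while the posterior
# is far from certainty (hypothetical threshold, not the optimal rule)
def dynamic_policy(p):
    return abs(p - 0.5) < 0.45

err_const, rate_const = simulate(constant_policy)
err_dyn, rate_dyn = simulate(dynamic_policy)
print(f"constant: error {err_const:.4f} at rate {rate_const:.2f}")
print(f"dynamic:  error {err_dyn:.4f} at rate {rate_dyn:.2f}")
```

Under the paper's result, the optimal dynamic policy concentrates observations where they are most informative; the crude threshold above only mimics that idea, so the printed comparison depends on the assumed parameters.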
