Abstract
We investigate importance sampling (IS) simulation for the sample average of an output sequence from an irreducible Markov chain. The optimal Markov chain to use in the simulation is known to be a twisted Markov chain; however, previous proofs of this fact are quite complicated and offer little insight. We give a simple and natural proof of the optimality of the simulation Markov chain in terms of the Kullback-Leibler (KL) divergence between Markov chains. We also show that the performance degradation incurred by using a suboptimal simulation Markov chain, i.e., the difference between the resulting variance and the minimum variance, is expressed by the KL divergence. Moreover, we show a geometric relationship between a simulation Markov chain and the optimal one.
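To make the setting concrete, the following is a minimal Python sketch of IS estimation for a rare-event probability of the sample average of a Markov chain output. The transition matrices `P` and `Q`, the output function `f`, and the threshold `a` are hypothetical, and `Q` is simply some alternative simulation chain, not necessarily the optimal twisted chain discussed in the paper; paths are drawn from `Q` and reweighted by the path likelihood ratio with respect to `P`.

```python
import numpy as np

# Illustrative setup (assumed, not from the paper):
# estimate p = Pr( (1/n) * sum_t f(X_t) >= a ) for a 2-state chain with
# transition matrix P, by simulating from Q and reweighting each path.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],    # original (target) transition matrix
              [0.5, 0.5]])
Q = np.array([[0.6, 0.4],    # simulation chain (e.g. a tilted chain)
              [0.3, 0.7]])
f = np.array([0.0, 1.0])     # output function f(state)
n, a = 50, 0.6               # path length and threshold
num_paths = 10000

def is_estimate(P, Q, f, n, a, num_paths, rng):
    """Importance-sampling estimate of Pr((1/n) * sum_t f(X_t) >= a)."""
    estimates = np.empty(num_paths)
    for i in range(num_paths):
        x = 0                      # fixed initial state
        log_w = 0.0                # log likelihood ratio of the path, P vs Q
        total = f[x]
        for _ in range(n - 1):
            x_next = rng.choice(2, p=Q[x])
            log_w += np.log(P[x, x_next]) - np.log(Q[x, x_next])
            x = x_next
            total += f[x]
        hit = (total / n) >= a
        estimates[i] = np.exp(log_w) * hit   # weighted indicator
    return estimates.mean(), estimates.var(ddof=1) / num_paths

p_hat, var_hat = is_estimate(P, Q, f, n, a, num_paths, rng)
print(f"IS estimate: {p_hat:.3e}, estimator variance: {var_hat:.3e}")
```

The variance reported here is what the paper's result bounds: choosing `Q` as the twisted chain minimizes it, and the gap from the minimum is characterized by a KL divergence between the chains.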