This work is concerned with rates of convergence of Markov chain approximation methods for controlled switching diffusions, where the cost function is defined on an infinite horizon with stopping times and without discounting. The systems under consideration display both continuous dynamics and discrete events; the discrete events are modeled by continuous-time Markov chains that delineate a random environment and other random factors that cannot be represented by diffusion processes. This paper presents a first attempt at using a probabilistic approach to treat such rates of convergence problems. Moreover, in contrast to the significant developments in the literature using partial differential equation (PDE) methods for the approximation of controlled diffusions, to the best of our knowledge there appear to be no PDE results to date on rates of convergence of numerical solutions for controlled switching diffusions. Although some of the working conditions in this paper, such as the one-dimensional continuous state variable, nondegenerate diffusions, and control only in the drift, may seem strong, they provide an adequate starting point for using this new approach to treat the rates of convergence problems. Furthermore, in the literature, to prove convergence of Markov chain approximation methods for control problems whose cost functions involve stopping (even for uncontrolled diffusions without switching), an additional assumption was needed to avoid the so-called tangency problem. As a by-product of our approach, by modifying the value function, it is demonstrated that the anticipated tangency problem does not arise in the sense of convergence in probability and in the sense of $L^1$.
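To fix ideas, a typical formulation of such a problem may be sketched as follows (standard notation; the symbols $b$, $\sigma$, $k$, $g$ and the precise spaces are illustrative assumptions and are not specified in this abstract). The controlled switching diffusion evolves as
$$ dX(t) = b\bigl(X(t),\alpha(t),u(t)\bigr)\,dt + \sigma\bigl(X(t),\alpha(t)\bigr)\,dw(t), \qquad X(0)=x,\ \alpha(0)=i, $$
where $X(\cdot)$ is the one-dimensional continuous state, $\alpha(\cdot)$ is a continuous-time Markov chain with finite state space $\mathcal{M}=\{1,\dots,m\}$ modeling the discrete events, $u(\cdot)$ is the control (acting only on the drift), and $w(\cdot)$ is a standard Brownian motion. The undiscounted cost up to a stopping time $\tau$ and the associated value function are
$$ J\bigl(x,i,u(\cdot),\tau\bigr)=E_{x,i}\Bigl[\int_0^{\tau} k\bigl(X(t),\alpha(t),u(t)\bigr)\,dt + g\bigl(X(\tau),\alpha(\tau)\bigr)\Bigr], \qquad V(x,i)=\inf_{u(\cdot),\,\tau} J\bigl(x,i,u(\cdot),\tau\bigr). $$
In this setting, the Markov chain approximation method constructs a controlled Markov chain on a grid of step size $h$ that is locally consistent with the above dynamics, and the rate of convergence concerns how fast the approximating value function $V^h$ approaches $V$ as $h\to 0$.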