The simulation run length required to achieve desired statistical precision for a sample mean in a steady-state stochastic simulation experiment is largely determined by the asymptotic variance of the sample mean and, to a lesser extent, by the second-order asymptotics of the variance and the asymptotic bias. The asymptotic variance, the second-order asymptotics of the variance, and the asymptotic bias of the sample mean of a function of an ergodic Markov process can be expressed in terms of solutions of Poisson's equation, as indicated by positive recurrent potential theory. We review this positive recurrent potential theory, giving special attention to continuous-time Markov chains. We provide explicit formulas for birth-and-death processes and diffusion processes, and recursive computational procedures for skip-free chains. These results can be used to help design simulation experiments after approximating the stochastic process of interest by one of the elementary Markov processes considered here.
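As a concrete illustration of the Poisson-equation route to the asymptotic variance described above, the following is a minimal sketch (not from the paper) for a birth-and-death process on a finite state space. It assumes the standard setup for an ergodic CTMC with generator Q: solve Poisson's equation Qh = -(f - f̄) with the centering f̄ = πf, and obtain the asymptotic variance of the time-average of f as σ² = 2 Σᵢ πᵢ (fᵢ - f̄) hᵢ. All function names are illustrative.

```python
import numpy as np

def bd_generator(birth, death):
    """Generator matrix Q of a birth-and-death CTMC on {0, ..., N}.

    birth[i] is the rate i -> i+1 (length N); death[i] is the rate i+1 -> i.
    """
    n = len(birth) + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def stationary_bd(birth, death):
    """Stationary distribution via detailed balance: pi_{i+1} = pi_i * birth[i] / death[i]."""
    w = np.cumprod(np.concatenate(([1.0], np.asarray(birth) / np.asarray(death))))
    return w / w.sum()

def asymptotic_variance(birth, death, f):
    """Asymptotic variance of the time-average of f, via Poisson's equation.

    Solve Q h = -(f - fbar), normalized so that pi . h = 0; then
    sigma^2 = 2 * sum_i pi_i * (f_i - fbar) * h_i.
    """
    Q = bd_generator(birth, death)
    pi = stationary_bd(birth, death)
    f = np.asarray(f, dtype=float)
    fbar = pi @ f
    fc = f - fbar
    # Q is singular (rows sum to 0); appending the row pi with right-hand
    # side 0 pins down the solution h uniquely.
    A = np.vstack([Q, pi])
    b = np.concatenate([-fc, [0.0]])
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2.0 * pi @ (fc * h)
```

As a sanity check, for the two-state chain with rates a (0 to 1) and b (1 to 0) and f the indicator of state 1, the autocovariance is π₀π₁e^{-(a+b)t}, so σ² = 2ab/(a+b)³; with a = b = 1 the routine should return 0.25.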