The use of probabilistic ideas by applied mathematicians has seen a continued increase in recent decades. Probability now appears frequently at the modeling stage. There is widespread interest in investigating the effects of noise and uncertainty. Probabilistic algorithms are routinely applied with much success to the solution of deterministic problems (think of Monte Carlo quadrature or stochastic gradient descent). Markov chains provide a simple, widely used tool to describe systems that evolve randomly. They are also a very popular topic in applied mathematics courses. At times \(0, 1, 2, \ldots\), a Markov chain jumps from its current state \(x\) to a randomly chosen new state \(y\); the set of possible states is discrete. The requirement that the jumping times be uniformly spaced is a clear limitation in many situations, and this shortcoming is avoided by considering continuous-time Markov chains. In the continuous-time setting, if the chain has reached state \(x\) at time \(t_i\), it will jump to the randomly chosen \(y\) at time \(t_{i+1}=t_i+\tau_i\), where the waiting time \(\tau_i>0\) is itself random.

Markov chains in continuous time feature in many applications, including chemistry, ecology, and epidemiology. In a chemical system containing \(n\) species \(S_1, \ldots, S_n\), each state corresponds to a vector \((c_1, \ldots, c_n)\), where \(c_j\) is the number of molecules of species \(S_j\). The species may take part in a number of chemical reactions, say \(S_1+2S_2 \rightarrow S_3\), \(2S_1+3S_3 \rightarrow S_4+2S_6\), and so on. At random times, one or another of the \(m\) possible reactions takes place and the state changes accordingly. Such a stochastic, molecular approach typically provides a more accurate description of the system than deterministic treatments, where the concentrations of the different species are regarded as continuous variables that evolve according to a set of differential equations.
This is particularly true in systems where, for some species, the number \(c_j\) is low, as is the case in many biological reactions.

The following Survey and Review paper, “Stationary Distributions of Continuous-Time Markov Chains: A Review of Theory and Truncation-Based Approximations,” has been written by Juan Kuntz, Philipp Thomas, Guy-Bart Stan, and Mauricio Barahona. Section 2 provides a reader-friendly introduction to continuous-time Markov chains and their stationary distributions; these are important because they determine the long-time behavior of the chain. A salient feature is that the authors present, in an accessible way, results that do not assume the chain to be irreducible (roughly speaking, irreducibility means that the chain is not the juxtaposition of two or more smaller chains that do not communicate with each other; irreducibility simplifies the mathematics, but is not a reasonable hypothesis in some applications). After that, the authors concentrate on how to compute invariant distributions. The paper contains numerical results, an extensive bibliography, and a detailed discussion of open problems.
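The continuous-time chemical dynamics described above can be simulated with Gillespie's stochastic simulation algorithm: draw an exponential waiting time whose rate is the total reaction propensity, then select which reaction fires with probability proportional to its own propensity. Here is a minimal Python sketch for a hypothetical two-reaction network; the species, rate constants, and stoichiometries are illustrative assumptions, not taken from the paper under review.

```python
import random

# Toy network (illustrative assumption):
#   R1:  S1 + S2 -> S3   with rate constant k1
#   R2:  S3      -> S1   with rate constant k2

def propensities(state, k1=1.0, k2=0.5):
    c1, c2, c3 = state
    return [k1 * c1 * c2,   # R1 needs one S1 and one S2 molecule
            k2 * c3]        # R2 needs one S3 molecule

# Net change in (c1, c2, c3) caused by each reaction.
STOICHIOMETRY = [(-1, -1, +1),   # R1
                 (+1,  0, -1)]   # R2

def gillespie(state, t_end, seed=0):
    """Simulate one trajectory up to time t_end; return (times, states)."""
    rng = random.Random(seed)
    t, times, states = 0.0, [0.0], [state]
    while True:
        a = propensities(state)
        a0 = sum(a)
        if a0 == 0.0:             # no reaction can fire: absorbing state
            break
        t += rng.expovariate(a0)  # exponential waiting time to next jump
        if t > t_end:
            break
        # Choose reaction j with probability a[j] / a0.
        r, j = rng.random() * a0, 0
        while r > a[j]:
            r -= a[j]
            j += 1
        state = tuple(c + d for c, d in zip(state, STOICHIOMETRY[j]))
        times.append(t)
        states.append(state)
    return times, states

times, states = gillespie((50, 30, 0), t_end=10.0)
print(states[-1])  # final molecule counts (c1, c2, c3)
```

Note that the waiting times are not uniformly spaced: each is drawn afresh from an exponential distribution whose rate depends on the current state, exactly the continuous-time mechanism contrasted above with discrete-time chains.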
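For a chain with finitely many states, an invariant distribution can be computed directly from the generator matrix \(Q\): \(\pi\) solves \(\pi Q = 0\) together with the normalization \(\sum_x \pi(x) = 1\). The sketch below illustrates this elementary finite case with an assumed 3-state generator; the truncation-based approximations reviewed in the paper address the much harder setting of countably infinite state spaces, where no such direct solve is possible.

```python
import numpy as np

# Generator (rate) matrix of a 3-state chain (illustrative assumption).
# Off-diagonal entries are jump rates x -> y; each diagonal entry is
# minus the total exit rate, so every row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

# Stack the stationarity equations pi Q = 0 (transposed) with the
# normalization constraint sum(pi) = 1, and solve by least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)      # stationary distribution
print(pi @ Q)  # ≈ 0: pi is invariant under the dynamics
```

Since this \(Q\) is irreducible, the stationary distribution is unique and strictly positive; without irreducibility there may be several invariant distributions, which is one reason the paper's treatment of the reducible case matters.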