In “Near-Optimal Adaptive Policies for Serving Stochastically Departing Customers,” Segev considers a multistage stochastic optimization problem originally introduced by Cygan et al. [Cygan M, Englert M, Gupta A, Mucha M, Sankowski P (2013) Catch them if you can: How to serve impatient users. Proc. 4th Innovations Theoretical Comput. Sci. Conf., 485–494], studying how a single server should prioritize stochastically departing customers. In this setting, the objective is to determine an adaptive service policy that maximizes the expected total reward collected over a discrete planning horizon, in the presence of customers who independently depart between consecutive stages with known stationary probabilities. The paper’s main contribution is a quasi-polynomial-time approximation scheme for serving impatient customers. Specifically, letting n be the number of underlying customers, the proposed algorithm identifies, in time quasi-polynomial in n for any fixed accuracy level ε > 0, a service policy whose expected reward is within a factor of 1 − ε of the optimal adaptive reward. The method for deriving this approximation scheme synthesizes various stochastic analyses to investigate how the adaptive optimum is affected by alterations to several instance parameters, including the reward values, the departure probabilities, and the collection of customers itself.
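
To make the underlying model concrete, the following is a minimal simulation sketch, under the assumptions described above (one customer served per stage, with each unserved customer independently departing before the next stage according to its stationary probability). It merely estimates, by Monte Carlo sampling, the expected reward of a simple fixed-priority policy; the function name, instance data, and the greedy priority rule are illustrative and are not taken from the paper or from its approximation scheme.

```python
import random


def simulate_policy(rewards, depart_probs, horizon, priority, trials=10000, seed=0):
    """Monte Carlo estimate of the expected total reward collected by a
    fixed-priority service policy: at each stage the server serves the
    highest-priority customer still present and collects its reward, after
    which every unserved customer independently departs with its stationary
    probability before the next stage."""
    rng = random.Random(seed)
    n = len(rewards)
    total = 0.0
    for _ in range(trials):
        present = set(range(n))
        collected = 0.0
        for _ in range(horizon):
            # Serve the highest-priority customer that has not yet departed.
            remaining = [i for i in priority if i in present]
            if not remaining:
                break
            served = remaining[0]
            collected += rewards[served]
            present.discard(served)
            # Unserved customers depart independently before the next stage.
            present = {i for i in present if rng.random() > depart_probs[i]}
        total += collected
    return total / trials


# Illustrative instance (all numbers are made up): two valuable but highly
# impatient customers alongside several patient, low-reward ones.
rewards = [10.0, 10.0, 1.0, 1.0, 1.0]
depart_probs = [0.6, 0.6, 0.05, 0.05, 0.05]
greedy_order = sorted(range(len(rewards)), key=lambda i: -rewards[i])
print(simulate_policy(rewards, depart_probs, horizon=4, priority=greedy_order))
```

A fixed-priority rule of this kind is only one simple policy within the much richer class of adaptive policies studied in the paper, which may condition each service decision on the entire set of customers still present at that stage.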