This paper treats a discrete-time Markov decision model with an infinite planning horizon and no discounting. A “bias-optimal” policy for this model satisfies a criterion that is more selective than maximizing the gain rate. The problem of computing a bias-optimal policy, also treated by Veinott in 1966, is decomposed here into a sequence of three simple Markov decision problems, each of which can be solved by linear programming or policy iteration.
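To illustrate one of the building blocks the abstract mentions, the following is a minimal sketch of Howard-style policy iteration for the undiscounted (average-reward) criterion, i.e., finding a gain-optimal policy. It is not the paper's three-stage decomposition, and the tiny two-state unichain MDP (`P`, `r`) is a hypothetical example invented for illustration. Evaluation solves the linear system `h[s] + g = r[s, pi(s)] + sum_t P[s, pi(s), t] * h[t]` with the normalization `h[0] = 0`; improvement picks the action maximizing `r(s, a) + P(s, a) . h`.

```python
import numpy as np

# Hypothetical two-state unichain MDP (not from the paper):
# P[s][a] is the transition row for action a in state s; r[s][a] its reward.
P = {0: {0: np.array([1.0, 0.0]),    # action 0: stay in state 0
         1: np.array([0.0, 1.0])},   # action 1: move to state 1
     1: {0: np.array([0.5, 0.5])}}   # single action: drift between 1 and 0
r = {0: {0: 1.0, 1: 0.0},
     1: {0: 2.0}}
n = 2  # number of states

def evaluate(policy):
    """Solve h[s] + g = r[s,pi(s)] + P[s,pi(s)] . h with h[0] = 0.
    Unknowns: h[1..n-1] and the (constant, unichain) gain g."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for s in range(n):
        p = P[s][policy[s]]
        for t in range(1, n):                      # columns 0..n-2: h[1..n-1]
            A[s, t - 1] = (1.0 if s == t else 0.0) - p[t]
        A[s, n - 1] = 1.0                          # last column: gain g
        b[s] = r[s][policy[s]]
    x = np.linalg.solve(A, b)
    h = np.concatenate(([0.0], x[:n - 1]))
    return h, x[-1]                                # relative values, gain

def policy_iteration():
    policy = [0, 0]                                # arbitrary initial policy
    while True:
        h, _g = evaluate(policy)
        # Improvement step: maximize r(s,a) + expected relative value.
        new = [max(P[s], key=lambda a: r[s][a] + P[s][a] @ h)
               for s in range(n)]
        if new == policy:
            return policy, _g
        policy = new

policy, gain = policy_iteration()
# On this example the method selects action 1 in state 0, with gain 4/3.
```

Maximizing the gain alone is the first of the three subproblems; a bias-optimal policy must additionally be selected, among the gain-optimal ones, by further criteria, which this sketch does not attempt.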