When the Vlasov equation is investigated numerically using the method of test particles, the particle-particle interactions that inevitably arise in the simulation (but are not present in the Vlasov equation itself) result in an accumulation of errors which eventually drive the collection of test particles toward a state of classical thermal equilibrium. We estimate the rate at which these errors accumulate.

The Vlasov equation plays a central role in classical (and semiclassical) time-dependent mean-field theory, and has been used to model a wide variety of many-body processes, from the gravitational N-body problem [1], to plasma physics [2], to nuclear dynamics [3]. While the content of the Vlasov equation is conceptually simple (interactions among many particles are replaced by a common mean-field potential), solutions are harder to come by and must in general be sought numerically. This is often accomplished with the test particle method: a swarm of numerical particles is used to simulate a distribution f(r, p, t) in one-body phase space, and the mean-field potential in which these test particles evolve is obtained from this distribution. Thus, while the Vlasov equation replaces a physical many-body problem with the self-consistent evolution of a one-body phase-space distribution, the test particle method in turn replaces the Vlasov equation with a numerical many-body problem. This raises the issue of convergence: for a given number of test particles, and over a given length of time, how closely can we expect the evolution of f(r, p, t) as obtained
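
The test particle method described above can be sketched as follows. This is a minimal, hypothetical one-dimensional illustration, not the simulation used in this work: the particles sample f(x, p, t), the density is binned on a grid each step, and an illustrative mean-field force (here a harmonic restoring force toward the density centroid) is recomputed self-consistently from that density. The grid, force law, and integrator choices are all assumptions for the sake of the sketch.

```python
import numpy as np

def mean_field_force(x, grid, k=1.0):
    """Illustrative self-consistent mean field: bin the test particles into a
    density on `grid`, then apply a harmonic restoring force toward the
    density centroid (an assumed toy force law, not a physical model)."""
    density, edges = np.histogram(x, bins=grid, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    centroid = np.sum(centers * density) / np.sum(density)
    return -k * (x - centroid)

def evolve(x, p, steps=100, dt=0.01, m=1.0):
    """Advance the swarm of test particles with a leapfrog (kick-drift-kick)
    integrator; the mean field is rebuilt from the swarm at every step."""
    grid = np.linspace(-5.0, 5.0, 33)
    for _ in range(steps):
        p = p + 0.5 * dt * mean_field_force(x, grid)   # half kick
        x = x + dt * p / m                             # drift
        p = p + 0.5 * dt * mean_field_force(x, grid)   # half kick
    return x, p

# Usage: draw a swarm sampling a Gaussian phase-space distribution and evolve it.
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 500)
p0 = rng.normal(0.0, 1.0, 500)
x1, p1 = evolve(x0, p0)
```

Note that each evaluation of the mean field depends on the instantaneous particle positions, which is exactly the self-consistency the text describes, and also the source of the finite-N particle-particle correlations (here through the shared histogram) that the paper's error estimate addresses.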