In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and to measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of it. We therefore evaluate algorithms by how much movement of the target can be tolerated between examples while still predicting with accuracy $$\epsilon$$. Furthermore, the complexity of the class $$\mathcal{H}$$ of possible targets, as measured by $$d$$, its Vapnik-Chervonenkis (VC) dimension, also affects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class $$\mathcal{H}$$ can be approximated to within a factor $$k$$, then there is a simple tracking algorithm for $$\mathcal{H}$$ which can achieve a probability $$\epsilon$$ of making a mistake if the target movement rate is at most a constant times $$\epsilon^2/(k(d + k)\ln \frac{1}{\epsilon})$$, where $$d$$ is the VC-dimension of $$\mathcal{H}$$. We also show that if $$\mathcal{H}$$ is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of $$7d + 1$$, yielding an efficient tracking algorithm for $$\mathcal{H}$$ which tolerates drift rates up to a constant times $$\epsilon^2/(d^2 \ln \frac{1}{\epsilon})$$. In addition, we prove complementary results for the classes of halfspaces and axis-aligned hyperrectangles showing that the maximum rate of drift that any algorithm (even one with unlimited computational power) can tolerate is a constant times $$\epsilon^2/d$$.
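The tracking strategy described above can be illustrated with a minimal sketch. All names, the choice of a one-dimensional threshold class (VC-dimension 1), the window size, and the drift model below are our own illustrative assumptions, not the paper's construction: after each labeled example, the learner re-selects the hypothesis that minimizes disagreements with a window of recent examples, and predicts on the next point with that hypothesis.

```python
import random

def disagreements(threshold, window):
    # Number of examples in the window that the threshold classifier mislabels.
    return sum((x >= threshold) != label for x, label in window)

def track_drifting_threshold(n_steps=2000, window_size=60, drift=0.0005, seed=0):
    """Illustrative sketch (not the paper's algorithm verbatim): track a
    slowly drifting threshold target on [0, 1] by minimizing disagreements
    with the most recent examples, then predicting with the minimizer."""
    rng = random.Random(seed)
    target = 0.5      # drifting target threshold (hypothetical setup)
    hypothesis = 0.5  # learner's current hypothesis
    window = []
    mistakes = 0
    # Finite grid of candidate hypotheses; the minimization over this grid
    # plays the role of (approximate) disagreement minimization.
    candidates = [i / 100 for i in range(101)]
    for _ in range(n_steps):
        x = rng.random()
        # Predict with the current hypothesis, then observe the true label.
        prediction = x >= hypothesis
        label = x >= target
        mistakes += prediction != label
        window.append((x, label))
        if len(window) > window_size:
            window.pop(0)
        # Re-fit: pick the candidate with the fewest disagreements.
        hypothesis = min(candidates, key=lambda h: disagreements(h, window))
        # The target drifts a small amount between examples.
        target = min(1.0, max(0.0, target + rng.uniform(-drift, drift)))
    return mistakes / n_steps

error_rate = track_drifting_threshold()
```

Because the per-example drift here is small relative to the window size, the disagreement minimizer stays close to the drifting target and the empirical mistake rate remains low, in the spirit of the tolerable-drift bounds stated above.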