We study the impact of random queueing delays stemming from traffic variability on the performance of a multicast session. Using a simple analytical model, we analyze the throughput degradation within a multicast (one-to-many) tree under TCP-like congestion and flow control. We use the (max,plus) formalism, together with methods based on stochastic comparison (association and convex ordering) and on the theory of extremes, to prove various properties of the throughput. We first prove that the throughput predicted by a deterministic model is systematically optimistic. In the presence of light-tailed random delays, we show that the throughput decreases as the inverse of the logarithm of the number of receivers. We derive analytical upper and lower bounds on the throughput degradation. Within these bounds, we characterize the degradation obtained for various tree topologies. In particular, we observe that a class of trees commonly found in IP multicast sessions is significantly more sensitive to traffic variability than other topologies.
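For concreteness, the light-tailed scaling result can be stated schematically as follows; the notation $\bar{x}(N)$ for the throughput of a session with $N$ receivers and the constant $C$ are introduced here for illustration only and are not taken from the paper:
\[
\bar{x}(N) \;\sim\; \frac{C}{\log N} \qquad (N \to \infty),
\]
where $C$ is a constant that, per the results summarized above, depends on the tree topology and on the delay distribution.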