In this work, we investigate a class of stochastic aggregative games in which each player has an expectation-valued objective function that depends on its own strategy and on the aggregate of all players' strategies. We propose a distributed algorithm with operator extrapolation to seek the Nash equilibrium: each player maintains an estimate of the aggregate by exchanging information with its neighbors over a time-varying network and updates its equilibrium estimate via the mirror descent method. In particular, the search direction incorporates an operator-extrapolation term built from the gradient samples of the two most recent iterates, which accelerates convergence. Under a generalized strong monotonicity assumption on the pseudo-gradient mapping, characterized via the Bregman distance, we prove that the proposed algorithm achieves the optimal convergence rate O(1/k) in terms of the expected Bregman distance to the Nash equilibrium. Finally, the performance of the algorithm is demonstrated via numerical simulations.
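To make the update structure concrete, the following is a minimal sketch, not the authors' exact scheme: it assumes a Euclidean mirror map (so the mirror-descent step reduces to a projected gradient step), and the function and parameter names (player_update, project, lam, step) are illustrative. It shows one iteration for a single player: consensus averaging of the aggregate estimates received from neighbors, an operator-extrapolated search direction formed from the two most recent gradient samples, and a mirror-descent strategy update.

```python
import numpy as np

def player_update(x_i, v_i, g_curr, g_prev, neighbor_vs, weights,
                  step, lam, project):
    """One illustrative iteration for player i (Euclidean mirror map).

    x_i         : player i's current strategy estimate
    v_i         : player i's local estimate of the strategy aggregate
    g_curr      : sampled gradient of player i's cost at the current iterate
    g_prev      : sampled gradient at the previous iterate
    neighbor_vs : aggregate estimates received from current neighbors
    weights     : consensus weights (self weight first, then one per neighbor;
                  they sum to one, reflecting the time-varying graph)
    step, lam   : step size and extrapolation weight (assumed parameters)
    project     : Euclidean projection onto player i's strategy set
    """
    # Consensus step: weighted average of the local aggregate estimate and
    # the estimates received from neighbors on the current graph.
    v_mix = weights[0] * v_i + sum(w * v for w, v in zip(weights[1:], neighbor_vs))

    # Operator extrapolation: combine the two most recent gradient samples
    # to form the search direction.
    direction = g_curr + lam * (g_curr - g_prev)

    # Mirror-descent step; with the Euclidean Bregman distance this is a
    # projected gradient step.
    x_next = project(x_i - step * direction)

    # Aggregate tracking: correct the mixed estimate by the local strategy change.
    v_next = v_mix + x_next - x_i
    return x_next, v_next

# Illustrative use with scalar strategies constrained to [0, 1]:
proj = lambda x: np.clip(x, 0.0, 1.0)
x1, v1 = player_update(x_i=0.5, v_i=0.4, g_curr=0.3, g_prev=0.2,
                       neighbor_vs=[0.4, 0.6], weights=[0.5, 0.25, 0.25],
                       step=0.1, lam=1.0, project=proj)
```

With a non-Euclidean Bregman distance, the projected step above would be replaced by the corresponding prox-mapping; the consensus and extrapolation structure is unchanged.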