We study a class of sampled stochastic optimization problems in which the underlying state process follows diffusive dynamics of mean-field type. We establish the existence of optimal relaxed controls when the sample set has finite size. The core of the paper is a proof, via $\Gamma$-convergence, that minimizers of the finite-sample relaxed problem converge to a minimizer of the limiting optimization problem as the sample size tends to infinity. We identify the limit of the sampled objective functional with the unique solution, in the trajectory sense, of a nonlinear Fokker--Planck--Kolmogorov equation in a random environment. Finally, we highlight the connection between the minimizers of these optimization problems and the optimal training weights of a deep residual neural network.