Constructing accurate and generalizable approximators (surrogate models) for complex physico-chemical processes exhibiting highly non-smooth dynamics is challenging. The main questions are what type of surrogate model to construct and whether such models should be under-parameterized or over-parameterized. In this work, we propose new developments and perform comparisons for two promising approaches: the manifold-based polynomial chaos expansion (m-PCE) and the deep neural operator (DeepONet), and we examine the effect of over-parameterization on generalization. Whereas m-PCE constructs the mapping by first identifying low-dimensional embeddings of the input functions, parameters, and quantities of interest (QoIs), a neural operator learns the nonlinear mapping directly with deep neural networks. We demonstrate the performance of these methods in terms of generalization accuracy by solving the 2D time-dependent Brusselator reaction-diffusion system with uncertainty sources, which models an autocatalytic chemical reaction between two species. We first propose an extension of the m-PCE that constructs a mapping between latent spaces formed by two separate embeddings of the input functions and the output QoIs. To further enhance the accuracy of the DeepONet, we introduce weight self-adaptivity in the loss function. We demonstrate that the performance of m-PCE and DeepONet is comparable for relatively smooth input-output mappings. However, when highly non-smooth dynamics are considered, DeepONet shows higher approximation accuracy. We also find that for m-PCE, modest over-parameterization leads to better generalization, both within and outside the training distribution, whereas aggressive over-parameterization leads to over-fitting. In contrast, even a highly over-parameterized DeepONet generalizes better for both smooth and non-smooth dynamics. Furthermore, we compare the performance of the above models with another recently proposed operator learning model, the Fourier Neural Operator (FNO), and show that its over-parameterization also leads to better generalization. Taken together, our studies show that m-PCE can provide very good accuracy at very low training cost, whereas a highly over-parameterized DeepONet can provide better accuracy and robustness to noise, but at a higher training cost. For both methods, the inference cost is negligible.
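To make the latent-to-latent construction concrete, the sketch below illustrates the idea under simplifying assumptions: PCA stands in for the manifold embedding of inputs and outputs, and polynomial ridge regression stands in for the non-intrusive PCE regression between latent coordinates. All data, dimensions, and variable names (X, Y, Z_in, Z_out, surrogate) are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of the latent-to-latent surrogate idea: embed inputs and
# outputs separately, then regress between the two latent spaces.
# PCA + polynomial ridge regression are simple stand-ins for the manifold
# embedding and the polynomial chaos expansion used in m-PCE.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))                              # discretized input functions (N x d_in)
Y = np.tanh(X[:, :256]) + 0.01 * rng.normal(size=(500, 256))  # toy output QoI fields (N x d_out)

pca_in, pca_out = PCA(n_components=12), PCA(n_components=8)
Z_in = pca_in.fit_transform(X)     # input latent coordinates
Z_out = pca_out.fit_transform(Y)   # output latent coordinates

# Polynomial map between the two latent spaces; the degree controls the
# level of parameterization discussed in the abstract.
surrogate = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
surrogate.fit(Z_in, Z_out)

# Inference: embed a new input, map through latent space, decode the QoI.
x_new = rng.normal(size=(1, 1024))
y_pred = pca_out.inverse_transform(surrogate.predict(pca_in.transform(x_new)))
```

Because both the embeddings and the regression are cheap to fit, this construction reflects the low training and negligible inference cost attributed to m-PCE above.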
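The weight self-adaptivity introduced for the DeepONet loss can likewise be sketched in a few lines. The snippet below, in PyTorch, assumes a DeepONet-style model `model(branch_in, trunk_in)` is already defined; it follows the general self-adaptive-weights idea (trainable per-point weights updated by gradient ascent while the network weights descend, so hard-to-fit points gain influence) rather than the paper's exact formulation, and the softplus mask and all names are illustrative.

```python
# Minimal sketch of weight self-adaptivity in a regression loss.
# lam holds one trainable weight per training point; the network minimizes
# the weighted loss while lam is updated by gradient ascent on the same loss.
import torch

def make_optimizers(model, n_points, lr_net=1e-3, lr_lam=1e-2):
    lam = torch.zeros(n_points, requires_grad=True)  # raw self-adaptive weights
    opt_net = torch.optim.Adam(model.parameters(), lr=lr_net)
    opt_lam = torch.optim.Adam([lam], lr=lr_lam)
    return lam, opt_net, opt_lam

def train_step(model, lam, opt_net, opt_lam, branch_in, trunk_in, y_true):
    opt_net.zero_grad()
    opt_lam.zero_grad()
    residual = (model(branch_in, trunk_in) - y_true) ** 2
    # Softplus keeps the effective per-point weights positive.
    loss = torch.mean(torch.nn.functional.softplus(lam) * residual)
    loss.backward()
    lam.grad.neg_()  # flip the sign so the step ascends on lam
    opt_net.step()
    opt_lam.step()
    return loss.item()
```

In this min-max view, points with large residuals accumulate larger weights over training, which is one plausible mechanism for the accuracy gains on the non-smooth Brusselator regimes reported above.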