We provide a unifying framework for distributed convex optimization over time-varying networks in the presence of constraints and uncertainty, two features that are typically treated separately in the literature. We adopt a proximal minimization perspective and show that this setup allows us to bypass difficulties encountered by existing algorithms while simplifying the underlying mathematical analysis. We develop an iterative algorithm and show that the resulting scheme converges to an optimizer of the centralized problem. To deal with the case where the agents' constraint sets are affected by a possibly common uncertainty vector, we follow a scenario-based methodology and offer probabilistic guarantees on the feasibility properties of the resulting solution. To this end, we provide a distributed implementation of the scenario approach, allowing agents to use different sets of uncertainty scenarios in their local optimization programs. The efficacy of our algorithm is demonstrated by means of a numerical example involving a regularized regression problem.
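As a purely illustrative sketch of the proximal minimization perspective mentioned above (the notation below is assumed and not fixed by the abstract: $x_i^k$ denotes agent $i$'s local estimate at iteration $k$, $a_{ij}^k$ the time-varying mixing weights, $f_i$ the local objective, $X_i$ the local constraint set, and $c^k$ a penalty coefficient), a distributed update of this type could take the form
\[
x_i^{k+1} \;=\; \operatorname*{arg\,min}_{x \in X_i} \; f_i(x) \;+\; \frac{1}{2 c^k} \Big\| x - \sum_{j=1}^{m} a_{ij}^k \, x_j^k \Big\|^2 ,
\]
that is, each agent averages the estimates received from its current neighbors and then performs a proximal step with respect to its own objective and constraint set; the precise update used in the paper may differ.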