Abstract In the standard formulation of the classical denoising problem, one is given a probabilistic model relating a latent variable $\varTheta \in \varOmega \subset \mathbb{R}^{m} \; (m\ge 1)$ and an observation $Z \in \mathbb{R}^{d}$ according to $Z \mid \varTheta \sim p(\cdot \mid \varTheta )$ and $\varTheta \sim G^{*}$, and the goal is to construct a map that recovers the latent variable from the observation. The posterior mean, a natural candidate for estimating $\varTheta$ from $Z$, attains the minimum Bayes risk (under the squared error loss) but at the expense of over-shrinking $Z$, and in general may fail to capture the geometric features of the prior distribution $G^{*}$ (e.g., low dimensionality, discreteness, sparsity). To rectify these drawbacks, in this paper we take a new perspective on the denoising problem, inspired by optimal transport (OT) theory, and use it to study a different, OT-based, denoiser in the population-level setting. We rigorously prove that, under general assumptions on the model, this OT-based denoiser is mathematically well-defined and unique, and is closely connected to the solution of a Monge OT problem. We then prove that, under appropriate identifiability assumptions on the model, the OT-based denoiser can be recovered solely from knowledge of the marginal distribution of $Z$ and the posterior mean of the model, after solving a linear relaxation problem over a suitable space of couplings that is reminiscent of standard multimarginal OT problems. In particular, due to Tweedie's formula, when the likelihood model $\{ p(\cdot \mid \theta ) \}_{\theta \in \varOmega }$ is an exponential family of distributions, the OT-based denoiser can be recovered solely from the marginal distribution of $Z$. More generally, our family of OT-like relaxations is of interest in its own right and, for the denoising problem, suggests alternative numerical methods inspired by the rich literature on computational OT.
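To illustrate the role played by Tweedie's formula in the last claim, consider the Gaussian special case (a sketch under the assumptions $m = d$ and $Z \mid \varTheta \sim \mathcal{N}(\varTheta, \sigma^{2} I_{d})$, with $p_{Z}$ denoting the marginal density of $Z$):
\[
  \mathbb{E}\bigl[\varTheta \mid Z = z\bigr] \;=\; z + \sigma^{2}\,\nabla \log p_{Z}(z),
\]
so the posterior mean, and hence the information needed to recover the OT-based denoiser, is determined by the marginal distribution of $Z$ alone.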