This paper focuses on inverse problems of identifying parameters by incorporating information from measurements. These generally ill-posed problems are formulated here in a probabilistic setting based on Bayes's theorem, which yields a unique solution in the form of an updated distribution of the parameters. Many approaches build on Bayesian updating in terms of probability measures or their densities. However, uncertainty propagation problems and their discretization within the stochastic Galerkin or collocation methods are naturally formulated for random vectors, which calls for an update of random variables, i.e., a filter. Such filters typically build on some approximation of the conditional expectation (CE). In particular, approximating the CE with affine functions leads to the familiar Gauss--Markov--Kalman filter, which performs well only on linear or nearly linear problems. Our approach builds on a reformulation that allows us to localize the CE operator at the measured value. The resulting conditioned expectation (CdE) correctly predicts the quantities of interest, e.g., the conditioned mean and covariance, even for general, highly nonlinear problems. The novel CdE admits straightforward numerical integration; in particular, the approximated covariance matrix is always positive definite for integration rules with positive weights. The theoretical results are confirmed by numerical examples.
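As a minimal illustration of the CdE idea described above (not the authors' implementation), the following Python sketch evaluates the conditioned mean and variance by localizing the expectation at the measured value and integrating with a positive-weight quadrature rule, which keeps the approximated variance nonnegative by construction. The toy setup, a scalar standard normal prior, a cubic forward map, and additive Gaussian noise, along with all names and parameters, is assumed here for illustration only.

\begin{verbatim}
# Illustrative sketch of the conditioned expectation (CdE), assuming a
# scalar parameter q with standard normal prior, a nonlinear forward
# map g, and additive Gaussian noise with standard deviation sigma_eps.
import numpy as np

def cde_moments(g, y_meas, sigma_eps, n_quad=40):
    """Conditioned mean and variance of q given the measured value y_meas.

    E[f(q) | y_meas] ~ sum_i w_i L(q_i) f(q_i) / sum_i w_i L(q_i),
    with quadrature nodes/weights (q_i, w_i) for the prior and the
    Gaussian likelihood L localized at the measured value y_meas.
    """
    # Gauss-Hermite nodes/weights for the weight function exp(-x^2/2);
    # normalizing the weights makes them expectations under N(0, 1).
    x, w = np.polynomial.hermite_e.hermegauss(n_quad)
    w = w / w.sum()

    # Likelihood localized at the measured value (constants cancel).
    lik = np.exp(-0.5 * ((y_meas - g(x)) / sigma_eps) ** 2)
    wl = w * lik          # positive weights times positive likelihood
    z = wl.sum()          # normalization (evidence)

    mean = np.dot(wl, x) / z
    # Nonnegative weights wl imply the quadratic form below is >= 0,
    # so the approximated variance cannot become negative.
    var = np.dot(wl, (x - mean) ** 2) / z
    return mean, var

# Strongly nonlinear forward map, where an affine (Kalman-type)
# approximation of the conditional expectation would be inaccurate.
g = lambda q: q ** 3
mean, var = cde_moments(g, y_meas=1.5, sigma_eps=0.3)
print(f"conditioned mean = {mean:.4f}, variance = {var:.4f}")
\end{verbatim}

In the multivariate case the same construction yields the conditioned covariance as a nonnegative combination of rank-one terms, which is why positive quadrature weights guarantee its (semi)definiteness.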