DCCM: Dual Data Consistency Guided Consistency Model for Inverse Problems

Abstract
Existing diffusion models for inverse problems have demonstrated impressive performance but suffer from prohibitive sampling complexity due to lengthy iterative sampling procedures. While introducing pre-trained consistency models (CMs) as priors holds promise for fast and high-quality sampling, theoretical disparities between CMs and diffusion models remain, hindering the application of CMs to inverse problems. To address this issue, we propose a novel framework, named Dual Data Consistency Guided Consistency Model (DCCM), that for the first time solves inverse problems with pretrained CM priors. We establish a denoising interpretation of CMs to set up the equivalence between CMs and denoisers, incorporating CMs in a theoretically sound fashion. Consequently, we develop refined data consistency to facilitate optimization with CM priors and avoid the local minima caused by the nonlinearity of degradation operators. Furthermore, we introduce a data consistency shortcut that leverages the manifold hypothesis to approximate refined data consistency and bypass backpropagation, enhancing sampling speed without loss of reconstruction quality. Extensive experiments demonstrate that DCCM achieves state-of-the-art reconstruction quality and sampling speed across a wide range of tasks, including image deblurring, super-resolution, and inpainting.
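The overall loop the abstract describes, treating the CM as a one-step denoiser and then correcting its estimate toward the measurements, can be sketched in a few lines. This is only a schematic: the shrinkage `consistency_model` and the linear operator `A` below are toy stand-ins chosen for illustration, not the paper's trained network or degradation operators.

```python
import numpy as np

# Toy setup: a linear degradation A and noisy-free measurements y = A @ x_true.
rng = np.random.default_rng(0)
n = 16
A = 0.5 * np.eye(n) + np.diag(0.25 * np.ones(n - 1), k=1)  # toy "blur" operator
x_true = rng.standard_normal(n)
y = A @ x_true                                             # observed measurements

def consistency_model(x, sigma):
    """Stand-in CM: a one-step map from a noisy input to a clean estimate.
    A real CM would be a trained network; simple shrinkage suffices here."""
    return x / (1.0 + sigma ** 2)

def dc_guided_step(x, sigma, step=0.5):
    x0 = consistency_model(x, sigma)   # CM acting as a denoiser (one call)
    grad = A.T @ (A @ x0 - y)          # gradient of 0.5 * ||A x0 - y||^2
    return x0 - step * grad            # data-consistency correction

x = rng.standard_normal(n)             # initialize from noise
res_init = np.linalg.norm(A @ x - y)
for _ in range(100):
    x = dc_guided_step(x, sigma=0.1)
res_final = np.linalg.norm(A @ x - y)  # residual shrinks toward the data
```

For a linear `A` this iteration contracts toward a regularized least-squares solution; the paper's refined data consistency and shortcut address the harder nonlinear-operator case that this sketch does not cover.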

Similar Papers
  • Conference Article
  • Cited by 3
  • 10.3997/2214-4609.201400947
Seismic History Matching in Fractured Reservoirs Using a Consistent Stiffness-permeability Model – Focus on the Aperture
  • Jan 1, 2010
  • A Shahraini + 2 more

This paper proposes a method for characterizing naturally fractured reservoirs by quantitative integration of seismic and production data. The method is based on a unified (T-matrix) model for the effective hydraulic and elastic properties of fractured porous media and a (nonlinear) Bayesian method of inversion, which provides information about uncertainties as well as mean (or maximum likelihood) values. We consider a fractured reservoir as a porous medium containing a single set of vertical fractures characterized by an unknown fracture density, azimuthal orientation, and aperture. We then treat fracture parameter estimation as a nonlinear Bayesian inverse problem and estimate the unknown fracture parameters by joint inversion of seismic AVAZ data and dynamic production data. Once the fracture parameters have been estimated, the corresponding anisotropic stiffness and permeability can be obtained using consistent models. A synthetic example illustrates the workflow. It shows that seismic and production data complement each other, in the sense that the seismic data resolve a non-uniqueness in the fracture orientation, while the production data help to recover the true fracture aperture and permeability, because production data are more sensitive to the fracture aperture than seismic data are.

  • Research Article
  • Cited by 35
  • 10.1016/j.image.2021.116505
A Regularization by Denoising super-resolution method based on genetic algorithms
  • Sep 20, 2021
  • Signal Processing: Image Communication
  • M Nachaoui + 2 more

  • Research Article
  • Cited by 5
  • 10.1186/s42492-024-00175-6
Noise suppression in photon-counting computed tomography using unsupervised Poisson flow generative models
  • Sep 23, 2024
  • Visual Computing for Industry, Biomedicine, and Art
  • Dennis Hein + 6 more

Deep learning (DL) has proven to be important for computed tomography (CT) image denoising. However, such models are usually trained under supervision, requiring paired data that may be difficult to obtain in practice. Diffusion models offer an unsupervised means of solving a wide range of inverse problems via posterior sampling. In particular, using the estimated unconditional score function of the prior distribution, obtained via unsupervised learning, one can sample from the desired posterior via hijacking and regularization. However, due to the iterative solvers used, the number of function evaluations (NFE) required may be orders of magnitude larger than for single-step samplers. In this paper, we present a novel image denoising technique for photon-counting CT by extending the unsupervised approach to inverse problem solving to the case of Poisson flow generative models (PFGM++). By hijacking and regularizing the sampling process we obtain a single-step sampler, that is, NFE = 1. Our proposed method incorporates posterior sampling with diffusion models as a special case. We demonstrate that the added robustness afforded by the PFGM++ framework yields significant performance gains. Our results indicate competitive performance against popular supervised methods, including state-of-the-art diffusion-style models with NFE = 1 (consistency models), as well as unsupervised and non-DL-based image denoising techniques, on clinical low-dose CT data and clinical images from a prototype photon-counting CT system developed by GE HealthCare.

  • Research Article
  • 10.5075/epfl-thesis-5175
Geodesic Active Fields
  • Jan 1, 2011
  • Infoscience (Ecole Polytechnique Fédérale de Lausanne)
  • Dominique Zosso

  • Research Article
  • Cited by 20
  • 10.1016/j.advwatres.2011.05.001
A multiscale method for subsurface inverse modeling: Single-phase transient flow
  • May 13, 2011
  • Advances in Water Resources
  • Jianlin Fu + 2 more

  • Research Article
  • Cited by 7
  • 10.1111/j.1365-246x.1988.tb02003.x
Seismogravimetric method: principles, algorithms, results
  • May 1, 1988
  • Geophysical Journal International
  • V I Starostenko + 2 more

SUMMARY A seismogravimetric problem has been posed and solved by a new method which permits the construction of consistent velocity and density models for the Earth's crust and upper mantle based on observations of the seismic and gravity fields. The seismogravimetric method involves a two-step solving procedure. At the first step, a velocity model, V, is constructed as a result of the formulation and solution of the inverse kinematic seismic problem for the velocity increments to a certain known normal velocity Vo. At the second step, the obtained velocity model, V, is transformed, consistent with the known velocity-density relations σ(V), to the density model σ, whose gravity field is calculated. Using the difference between the observed and calculated gravity-field magnitudes we solve the inverse gravimetric problem for the correction rσ to density σ. As a result, the relation σ(V) is refined for the main blocks of the geologic sequence studied. If the density model does not quite fit the observed gravity field, or does not agree with the geological data available, the entire process of solving the problem is repeated, taking account of the results obtained previously. The inverse seismic and gravimetric problems formulated are reduced to sets of linear equations. To solve these, a stable iteration technique has been developed which is intended for specific geophysical problems and serves as a computational basis for the seismogravimetric method. The applicability principles of the seismogravimetric method are described. The method has been tried in special tests and on practical material. The test problem ‘Moscow’ has been chosen as a model example. The efficiency of the method when applied to practical observations is illustrated by interpretation of the DSS data obtained on the Kiev-Gomel’ (Ukraine-Byelorussia) line.

  • Research Article
  • Cited by 170
  • 10.1137/0326056
Maximum Likelihood Estimation of Discrete Control Processes
  • Sep 1, 1988
  • SIAM Journal on Control and Optimization
  • Rust John

Consider the following “inverse stochastic control” problem. A statistician observes a realization of a controlled stochastic process $\{ d_t ,x_t \}$ consisting of the sequence of states $x_t$ and decisions $d_t$ of an agent at times $t = 1, \cdots ,T$. The null hypothesis is that the agent’s behavior is generated from the solution to a Markovian decision problem. The inverse problem is to use the data $\{ d_t ,x_t \}$ to go backward and “uncover” the agent's objective function $U$, and his beliefs about the law of motion of the state variables $p$. The problem is complicated by the fact that the statistician generally only observes a subset $x_t$ of the state variables $(x_t ,\eta _t )$ observed by the agent. This paper formulates the inverse problem as a problem of statistical inference, explicitly accounting for the unobserved state variables $\eta _t$, in order to produce a nondegenerate and internally consistent statistical model. Specifically, the functions $U$ and $p$ are assumed to depend on a vector of unknown parameters $\theta$ known by the agent but not by the statistician. The agent’s preferences and expectations are uncovered by finding a parameter vector $\hat \theta$ that maximizes the likelihood function for the observed sample of data. The difficulty is that neither the dynamic programming problem nor the associated likelihood function has an a priori known functional form. In general the solution is only described recursively via Bellman’s “principle of optimality.” This paper derives a nested fixed-point maximum likelihood algorithm that computes $\hat \theta$ and the associated value function $V_{\hat \theta }$ for a class of discrete control processes $(d_t ,x_t )$, where the control variable $d_t$ is restricted to a finite set of alternatives.
Given M independent realizations of $(d_t ,x_t )$ for T time periods, it is shown that $\hat \theta$ converges to the true parameter $\theta ^ *$ with probability 1 and has an asymptotic Gaussian distribution as M (or the number of periods T) approaches infinity. Uniform convergence of the algorithm is established by showing that the estimated value function $V_{\hat \theta }$ (a random element in a Banach space B) converges with probability 1 to the true value function and has an asymptotic Gaussian distribution in B.
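The nested structure described above, an inner Bellman fixed-point computation embedded in an outer likelihood maximization, can be illustrated on a toy problem. The two-state logit specification, the transition matrix, and the data below are assumptions chosen for illustration, not Rust's empirical model.

```python
import numpy as np

beta = 0.9                                   # discount factor (toy choice)
P = np.array([[0.8, 0.2], [0.3, 0.7]])       # state transitions (toy, action-free)

def utility(theta):
    """Payoff u[x, d] of action d in state x; one action pays theta."""
    return np.array([[0.0, theta], [theta, 0.0]])

def solve_value(theta, tol=1e-10):
    """Inner fixed point: smoothed (log-sum-exp) Bellman iteration for V_theta."""
    u = utility(theta)
    V = np.zeros(2)
    while True:
        Q = u + beta * (P @ V)[:, None]          # action values
        V_new = np.log(np.exp(Q).sum(axis=1))    # logit-smoothed max over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def choice_probs(theta):
    """Logit choice probabilities implied by the solved value function."""
    u = utility(theta)
    Q = u + beta * (P @ solve_value(theta))[:, None]
    return np.exp(Q) / np.exp(Q).sum(axis=1, keepdims=True)

def log_lik(theta, data):
    pr = choice_probs(theta)
    return sum(np.log(pr[x, d]) for x, d in data)

# Outer loop: crude grid search for the likelihood-maximizing theta
# (Rust uses gradient-based optimization; a grid keeps the sketch short).
data = [(0, 1), (0, 1), (1, 0), (0, 0), (1, 0)]  # observed (state, decision) pairs
grid = np.linspace(-2, 2, 81)
theta_hat = max(grid, key=lambda t: log_lik(t, data))
```

Each likelihood evaluation re-solves the inner dynamic program to convergence, which is exactly the "nested fixed point" cost the algorithm manages in practice.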

  • Research Article
  • Cited by 46
  • 10.1016/j.bea.2022.100038
End-to-end prediction of multimaterial stress fields and fracture patterns using cycle-consistent adversarial and transformer neural networks
  • Jun 1, 2022
  • Biomedical Engineering Advances
  • Eric L Buehler + 1 more

  • Research Article
  • Cited by 12
  • 10.1137/24m1636071
CoCoGen: Physically Consistent and Conditioned Score-Based Generative Models for Forward and Inverse Problems
  • Apr 1, 2025
  • SIAM Journal on Scientific Computing
  • Christian Jacobsen + 2 more

  • Research Article
  • Cited by 7
  • 10.1088/1742-2132/8/2/011
Improved characterization of fault zones by quantitative integration of seismic and production data
  • Mar 28, 2011
  • Journal of Geophysics and Engineering
  • Aamir Ali + 2 more

This paper proposes a method for the parameterization and characterization of fault facies models including a fault core and a fault damage zone containing either fractures or deformation bands, typically associated with carbonate and sandstone reservoirs, respectively. We represent the faulted reservoir models with a relatively small number of parameters and focus on the inverse problem; that is, how to estimate transmissibility of the fault core and the parameters of the fractures or deformation bands that determine the effective stiffness and permeability tensors in the damage zone. Our workflow is based on a consistent stiffness-permeability model for the fractured or composite porous media in the damage zone, and a Bayesian (Monte Carlo Markov chain) method of inversion, which provides information about uncertainties as well as the most likely values of the model parameters. For simplicity, we have assumed that the damage zone consists of a single set of fractures or deformation bands that are parallel with the (vertical) fault core, but the forward modelling part of our workflow can easily be extended to deal with more complex situations involving multiple sets of fractures and/or deformation bands that are characterized by different shapes and orientations. The results of our numerical experiments suggest that one can indeed obtain an improved characterization of fault zones by quantitative integration of seismic AVAZ and production data using the workflow presented in this paper.

  • Research Article
  • Cited by 21
  • 10.1016/j.jappgeo.2014.07.023
Invariant models in the inversion of gravity and magnetic fields and their derivatives
  • Aug 7, 2014
  • Journal of Applied Geophysics
  • Simone Ialongo + 2 more

  • Research Article
  • Cited by 8
  • 10.1115/1.2801271
Accuracy of Discrete Models for the Solution of the Inverse Dynamics Problem for Flexible Arms, Feasible Trajectories
  • Sep 1, 1997
  • Journal of Dynamic Systems, Measurement, and Control
  • H C Moulin + 1 more

The inverse dynamics problem for a single link flexible arm is considered. The tracking order of consistent and lumped finite element models is derived and compared with the tracking order of the continuous model when there is no tip-mass. These comparisons show that discrete models fail to identify the tracking order of a modelled continuous system. A frequency domain analysis shows that an increase in the model order extends the well-modelled low-frequency range and, at the same time, increases the inadequacy in the high-frequency range. As a result, inverse dynamics solutions computed with discrete models do not converge to the continuous solution as the model order increases. The use of high-frequency filters allows us to construct a convergent numerical procedure. A conjecture about the tracking order is presented when there is a tip mass. It is shown that the same results are obtained if superposition of modes rather than finite elements is used.

  • Conference Article
  • Cited by 11
  • 10.3997/2214-4609.20141826
Efficient Inference of Reservoir Parameter Distribution Utilizing Higher Order SVD Reparameterization
  • Sep 8, 2014
  • Proceedings
  • E Gildin + 1 more

Reservoir parameter inference is a challenging problem in many reservoir simulation workflows, especially for real reservoirs with a high degree of complexity, non-linearity, and dimensionality. In a history matching problem that adapts the reservoir properties of grid blocks, the inverse problem leads to ill-posed and very costly optimization schemes. In this case, it is very important to perform geologically consistent reservoir parameter adjustments as data are assimilated in the history matching process. Therefore, ways to reduce the number of reservoir parameters need to be sought. In this paper, we introduce a new parameterization method utilizing higher order singular value decomposition (HOSVD), which is not only computationally more efficient than other known dimensionality reduction methods, such as SVD and DCT, but also provides a model that is consistent in terms of reservoir geology. The power of HOSVD lies in its ability to supply a reliable low-dimensional reconstructed model while preserving higher order statistical information and the geological characteristics of the reservoir model. In HOSVD, we keep the snapshots in their 2D or 3D form, i.e., we do not vectorize the original replicates, but stack them into a tensor, a multi-way array in multilinear algebra, which enables tensor decomposition. Technically, we perform HOSVD to find the best lower-rank approximation of this tensor, an optimization problem solved with the alternating least squares method. This results in a more consistent reduced basis. We applied this parameterization method to the SPE10 benchmark reservoir model to demonstrate its performance. We illustrate its advantages by comparing it to regular SVD (PCA) in a history matching framework using EnKF, as well as the characterization performance of ensemble-based history matching approaches combined with HOSVD. Overall, HOSVD outperforms SVD in terms of reconstruction and estimation performance.
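The key step the abstract contrasts with plain SVD, computing factor matrices from SVDs of the tensor's mode unfoldings instead of vectorizing the snapshots, can be sketched as follows. The tensor sizes and multilinear ranks below are illustrative assumptions, not the SPE10 configuration.

```python
import numpy as np

# Build a 3-way tensor with exact multilinear rank (2, 2, 2), so that
# HOSVD truncated to those ranks reconstructs it exactly.
rng = np.random.default_rng(1)
core = rng.standard_normal((2, 2, 2))
U = [np.linalg.qr(rng.standard_normal((dim, 2)))[0] for dim in (6, 5, 4)]
T = np.einsum('abc,ia,jb,kc->ijk', core, U[0], U[1], U[2])

def unfold(T, mode):
    """Mode-m unfolding: move mode m to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """HOSVD: leading left singular vectors of each unfolding, then the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    G = np.einsum('ijk,ia,jb,kc->abc', T, *factors)   # core tensor
    return G, factors

G, F = hosvd(T, ranks=(2, 2, 2))
T_rec = np.einsum('abc,ia,jb,kc->ijk', G, *F)          # reconstruction
err = np.linalg.norm(T - T_rec) / np.linalg.norm(T)    # relative error
```

Because the snapshots are never vectorized, the factor matrices act separately on each physical axis, which is what preserves the multi-way (geological) structure that a plain SVD of flattened snapshots would discard.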

  • Conference Article
  • 10.2118/165936-ms
A Practical Approach for Enhancing Model Accuracy Using Pressure and Permeability Function
  • Sep 16, 2013
  • Tariq Al-Zahrani + 1 more

In a typical reservoir simulation model, varying details of its geological and petrophysical properties need to be captured accurately. There are single-phase regions, where considerable cost savings may be realized if large blocks are used, while there are regions, particularly near well-bores, where fine grids are required to adequately capture high and low permeability streaks. The main goal of reservoir characterization is to obtain a reservoir model that accurately describes and predicts dynamic fluid flow paths and the production/injection performance of the wells. This implies a correct description of extreme values of connections of petrophysical properties, mainly permeability and porosity. The fine scale model can handle most structural and geological complexity with few compromises. Such a model allows us to quantify uncertainties and to run risk analysis. This ensures a realistic and consistent model that honors and maintains the latest field information. In case of updates or adjustments to the simulation model, these can be made in the geological model as well. Moreover, history matching is, by nature, a very complex inverse problem that can be computationally intensive and practically difficult for very large multimillion-cell reservoir models. Therefore, the use of optimal parameterization and grid refinement is crucial to obtain fast and valid history matching results. This paper shows that fine models provide much more accurate results than the upscaled coarse model. A new method is introduced to enhance history match quality by conditioning a second version of the permeability model on a relation found between errors in pressure and KH from well test data. A comparison of results, including core permeability and saturation, from both models is presented. The positive impact of this new method on the history match process is discussed in detail.

  • Research Article
  • Cited by 16
  • 10.1029/2006wr005236
Relation between fractional flow models and fractal or long‐range 2‐D permeability fields
  • Apr 1, 2007
  • Water Resources Research
  • Jean‐Raynald De Dreuzy + 1 more

Fractional flow models introduced by Barker (1988) have been increasingly popular as means of interpreting nonclassical drawdown curves obtained from well tests. Fractional flow models are intrinsically isotropic scaling models depending to first order on two exponents, n and dw, expressing the dimension of the structure available to flow and the flow slowdown, respectively. We study the fractional flow induced either by geometrically scaling structures, such as Sierpinski‐ and percolation‐like fractal media, or by hydraulically scaling media, such as long‐range continuous correlated media. First, percolation and Sierpinski structures have well‐separated dw values in the ranges [2.6, 3] and [1.9, 2.5], respectively. The bottlenecks characteristic of percolation induce more anomalous transport (larger dw values) than the impervious zones present at all scales of Sierpinskis. Second, the realization‐based values of n and dw depend both on global characteristics, like the fractal dimension, and on local ones, like the permeability around the well. Finally, solving the inverse problem on anomalous transient well test responses consists in comparing the (n, dw) realization‐based values with field data. Indeed, well tests performed from a unique pumping well must be taken as realization‐based results. For the site of Ploemeur (Brittany, France), from which n and dw have been previously determined (Le Borgne et al., 2004), the only consistent model is given by the continuous multifractals. However, the values obtained from continuous multifractals cover most of the (n, dw) plane, and realization‐based results are not selective in terms of model. Therefore, this comparison should be replaced by one based on (n, dw) values averaged over different pumping well locations, which however requires a significantly larger number of field tests.
