Abstract
Two complementary solution strategies for the least-squares migration problem with sparseness & continuity constraints are proposed. The applied formalism exploits the sparseness of curvelets on the reflectivity and their invariance under the demigration-migration operator. Sparseness is enhanced by (approximately) minimizing a (weighted) ℓ1-norm on the curvelet coefficients. Continuity along imaged reflectors is brought out by minimizing the anisotropic diffusion or total variation norm, which penalizes variations along and between reflectors. A brief sketch of the theory is provided, as well as a number of synthetic examples. Technical details on the implementation of the optimization strategies are deferred to an accompanying paper on the implementation.

Introduction

Least-squares migration and migration deconvolution are topics that have received a recent flare of interest [7, 8], and for good reason: inverting for the normal operator (the demigration-migration operator) corrects for many of the amplitude artifacts related to acquisition and illumination imprints. However, the downside of this approach is that unregularized least-squares tends to fit noise and smear the energy. In addition, artifacts may be created due to imperfections in the model and the possible null space of the normal operator [11]. Regularization by minimizing an energy functional on the reflectivity can alleviate some of these problems, but may come at the expense of resolution. Non-linear functionals such as ℓ1 minimization partly deal with the resolution problem but ignore bandwidth limitation and continuity along the reflectors [12]. Independent of the above efforts, attempts have been made to enhance the continuity along imaged reflectors by applying anisotropic diffusion to the image [4]. The beauty of this approach is that it brings out the continuity along the reflectors.
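To make the total variation penalty concrete, the following is a minimal numpy sketch (all sizes and amplitudes are illustrative, not from the paper): a discrete isotropic TV norm computed with finite differences. A clean continuous reflector has a much smaller TV norm than the same reflector contaminated with noise, which is why penalizing TV suppresses variations that break up imaged events.

```python
import numpy as np

def total_variation(m):
    """Isotropic total-variation (TV) norm of a 2-D image:
    the sum over pixels of the finite-difference gradient magnitude."""
    dz = np.diff(m, axis=0, append=m[-1:, :])   # depth derivative
    dx = np.diff(m, axis=1, append=m[:, -1:])   # lateral derivative
    return np.sum(np.sqrt(dz**2 + dx**2))

# Hypothetical toy image: one flat reflector, then the same image
# with additive noise breaking its lateral continuity.
rng = np.random.default_rng(0)
smooth = np.zeros((32, 32))
smooth[16, :] = 1.0
noisy = smooth + 0.2 * rng.standard_normal(smooth.shape)
print(total_variation(smooth), total_variation(noisy))
```

Minimizing such a functional (subject to a data-misfit constraint) rewards images whose variation is concentrated along a few sharp, continuous events rather than scattered noise.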
However, the way this method is currently applied leaves room for improvement regarding (i) the loss of resolution and (ii) the non-integral and non-data-constrained aspects, i.e. the method is not constrained by the data, which may lead to unnatural results and 'overfiltering'. In this paper, we make a first attempt to bring these techniques together under the umbrella of optimization theory and modern image processing with basis functions such as curvelet frames [3, 2]. Our approach is designed to (i) deal with substantial amounts of noise (SNR ≤ 0); (ii) use the optimal (sparse & local) representation properties of curvelets for reflectivity; (iii) exploit the near diagonalization of the normal operator by curvelets [2]; and (iv) use non-linear estimation, norm minimization and optimization techniques to enhance the continuity along reflectors [5].

Optimization strategies for seismic imaging

After linearization, the forward model has the following form

d = Km + n,   (1)

where K is a demigration operator given by the adjoint of the migration operator; m the model with the reflectivity; and n white Gaussian noise with standard deviation σn (colored noise can be accounted for). Irrespective of the type of migration operator (our discussion is independent of this choice and allows for post-stack Kirchhoff as well as 'wave-equation' operators), two complementary optimization strategies are being investigated in our group. These strategies are designed to exploit the sparseness & invariance properties of curvelet frames in conjunction with the enhancement of the overall continuity along reflectors.
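The pitfall of unregularized inversion of Eq. 1 can be demonstrated on a toy version of the problem. In this hedged sketch, a random wide matrix stands in for the linearized demigration operator K (an assumption purely for illustration), m is a sparse spike train, and n is white Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins (illustrative assumptions, not the paper's operators):
# K is a random underdetermined matrix playing the role of demigration,
# m a sparse reflectivity, n white Gaussian noise with std sigma_n.
n_data, n_model, sigma_n = 80, 120, 0.1
K = rng.standard_normal((n_data, n_model)) / np.sqrt(n_data)
m = np.zeros(n_model)
m[[20, 55, 90]] = [1.0, -0.8, 0.6]            # three sparse reflectors
d = K @ m + sigma_n * rng.standard_normal(n_data)   # d = Km + n

# Unregularized least-squares (pseudo-inverse) fits the noise: the data
# misfit is tiny, yet the recovered model is smeared and far from m.
m_ls = np.linalg.pinv(K) @ d
print(np.linalg.norm(K @ m_ls - d))   # data misfit
print(np.linalg.norm(m_ls - m))      # model error
```

The near-zero misfit alongside a large model error illustrates why additional constraints (sparseness on curvelet coefficients, continuity along reflectors) are needed on top of the least-squares objective.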
More specifically, the first method [6, 5] preconditions the migration operator, yielding a reformulation of the normal equations into a standard denoising problem

F∗d = F∗Fx + F∗n,  with F∗F ≈ Id,   (2)
y = x + ε,   (3)

with ∗ the adjoint; F = KC∗Γ⁻¹ the curvelet-frame preconditioned migration operator with Γ = diag(CK∗KC∗) ≈ CK∗KC∗ by virtue of Theorem 1.1 of [2], which states that Green's functions are nearly diagonalized by curvelets; C, C∗ the curvelet transform and its transpose; x the preconditioned model, related to the reflectivity according to m = C∗Γ⁻¹x; and ε = F∗n the migrated noise, which is close to white Gaussian noise (by virtue of the preconditioning). Applying soft thresholding to Eq. 2, with a threshold proportional to the standard deviation σn of the noise on the data, gives an estimate for the preconditioned model with some sparseness [see 6, 5 for details].
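The thresholding step on the denoising problem of Eq. 3 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the coefficient vector, noise level, and threshold factor are all hypothetical, and soft thresholding is applied directly to y = x + ε.

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft thresholding (the proximal operator of the l1-norm):
    shrink each coefficient toward zero by lam, zeroing small ones."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Hypothetical numbers: a sparse coefficient vector x observed in
# white Gaussian noise, y = x + e, thresholded at ~3 sigma.
rng = np.random.default_rng(1)
sigma = 0.1
x = np.zeros(200)
x[[10, 50, 120]] = [1.0, -1.5, 0.8]          # few significant coefficients
y = x + sigma * rng.standard_normal(x.size)
x_hat = soft_threshold(y, 3 * sigma)
print(np.count_nonzero(x_hat))   # only the large coefficients survive
```

Because curvelets represent the reflectivity sparsely, most coefficients of x are small, and a threshold proportional to the noise level removes the noise while retaining the significant reflector contributions.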