Abstract

In the process of separating blended data, conventional methods based on sparse inversion assume that the primary source is coherent and the secondary source is randomized. The commonly used L1-norm regularization applies a global threshold (GT) to the sparse spectrum in the transform domain; however, when the threshold is relatively high, more high-frequency information from the primary source is lost. For this reason, we analyze the generation principle of blended data based on convolution theory and conclude that the blended data are randomly distributed only in the spatial domain. Taking the slope-constrained frequency-wavenumber (f-k) transform as an example, we adopt a frequency-dependent threshold, which reduces the high-frequency loss during the deblending process. We then use a structure-weighted threshold (SWT), which exploits the fact that the energy from the primary source is concentrated along the wavenumber direction. The combination of the frequency-dependent and structure-weighted thresholds effectively improves the deblending performance. Tests on model and field data show that our frequency-structure weighted threshold preserves frequency content better than the GT: the weighted threshold retains more of the primary source's high-frequency information, and the similarity between the deblended and unblended data in the other frequency bands is also improved.
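
To make the thresholding idea concrete, the sketch below applies a frequency-dependent, structure-weighted soft threshold to the f-k spectrum of a 2-D gather. The linear frequency taper, the wavenumber-ridge structure weight, and the parameter tau0 are illustrative assumptions rather than the paper's exact formulas, and in practice this step would run inside the sparse-inversion (iterative shrinkage) loop.

```python
# Minimal sketch of a frequency-structure weighted threshold in the f-k domain.
# `gather` is a 2-D array of shape (n_time, n_traces); the weight functions
# below are assumptions for illustration, not the authors' exact definitions.
import numpy as np

def fk_weighted_threshold(gather, dt, tau0=0.1):
    nt, nx = gather.shape
    # Forward f-k transform: real FFT over time, full FFT over space.
    spec = np.fft.fft(np.fft.rfft(gather, axis=0), axis=1)
    amp = np.abs(spec)

    # Frequency-dependent weight: relax the threshold at high frequencies so
    # that weak high-frequency primary energy survives the shrinkage.
    f = np.fft.rfftfreq(nt, dt)
    freq_w = 1.0 - 0.5 * (f / (f.max() + 1e-12))      # 1 at 0 Hz, 0.5 at Nyquist

    # Structure weight: primary-source energy concentrates along narrow ridges
    # in the wavenumber direction, so samples near the per-frequency maximum
    # get a small threshold and are preserved.
    ridge = amp.max(axis=1, keepdims=True) + 1e-12
    struct_w = 1.0 - amp / ridge                       # ~0 on ridges, ~1 elsewhere

    # Combined weighted threshold and soft shrinkage of the spectrum.
    tau = tau0 * amp.max() * freq_w[:, None] * struct_w
    shrunk = np.where(amp > tau, spec * (1.0 - tau / np.maximum(amp, 1e-12)), 0.0)

    # Inverse f-k transform back to the time-space domain.
    return np.fft.irfft(np.fft.ifft(shrunk, axis=1), n=nt, axis=0)
```

With a global threshold, tau would be a single scalar for the whole spectrum; the two weights above are what lower it on coherent, high-frequency primary energy and raise it on the incoherent blending noise.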
