Abstract
Seismic inversion plays an essential role in the exploration and development of oil and gas reservoirs. With the development of neural networks, deep learning has been widely applied to seismic inversion due to its powerful feature extraction and nonlinear fitting capabilities. However, when traditional deep learning methods are applied to seismic inversion in a trace-by-trace manner, problems such as overfitting and poor lateral continuity of the prediction results are prone to occur. To address these problems, we propose a closed-loop U-Net with geophysical constraints based on spatial background information, called SG-CUnet. In SG-CUnet, sparse reflection coefficients and the seismic forward-modeling process are used as geophysical constraints on the network to improve the stability and accuracy of the prediction results. Meanwhile, the spatial information of the seismic data is incorporated into the training process of SG-CUnet to improve the lateral continuity of the prediction results. The effectiveness of the proposed SG-CUnet is verified on the synthetic Marmousi2 model and on field data. In the synthetic-data application, the proposed SG-CUnet gives more accurate impedance estimates than the other methods tested. Furthermore, in the field-data application, we combine the proposed SG-CUnet with a transfer-learning strategy for semi-supervised training to address the problem of limited labeled data. The predicted results show that the semi-supervised SG-CUnet recovers reservoir structural details and lateral continuity more faithfully.
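The abstract mentions that the seismic forward-modeling process serves as a geophysical constraint on the network. The paper's exact loss formulation is not given here, but the standard convolutional forward model behind such constraints (reflectivity derived from impedance, convolved with a source wavelet, compared against the observed trace) can be sketched as follows; the Ricker wavelet parameters and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def impedance_to_reflectivity(z):
    """Reflection coefficients from an acoustic-impedance trace:
    r[i] = (z[i+1] - z[i]) / (z[i+1] + z[i])."""
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def ricker_wavelet(n=31, f=25.0, dt=0.002):
    """Ricker wavelet with peak frequency f (Hz), sample interval dt (s).
    (Assumed wavelet choice; any estimated wavelet could be substituted.)"""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def forward_model(z, wavelet):
    """Synthetic seismic trace: wavelet convolved with reflectivity."""
    r = impedance_to_reflectivity(z)
    return np.convolve(r, wavelet, mode="same")

def physics_loss(z_pred, observed, wavelet):
    """Mean-squared mismatch between the trace synthesized from the
    predicted impedance and the observed seismic trace; a term like
    this penalizes impedance estimates inconsistent with the data."""
    return float(np.mean((forward_model(z_pred, wavelet) - observed) ** 2))
```

In a closed-loop setup, a differentiable version of `physics_loss` (e.g. in PyTorch) would be added to the supervised loss so that predicted impedance must also reproduce the input seismic trace.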