Abstract

Structural geological models are often computed at a specific spatial resolution, for example in the form of grid representations or when surfaces are extracted from implicit fields. However, the structural inventory of these models is limited by the underlying mathematical formulations, so above a certain resolution no additional information is added to the representation.

We evaluate here whether Deep Neural Networks can be trained to obtain a high-resolution representation from a low-resolution structural model, at different levels of resolution. More specifically, we test the use of state-of-the-art Generative Adversarial Networks (GANs) for image super-resolution in the context of 2-D geological model sections. These techniques aim to learn the hidden structure or information in a high-resolution image data set and then reproduce a highly detailed, super-resolved image from its low-resolution counterpart. We propose the use of GANs for super-resolution of geological images and of 2-D geological models represented as images. In this work, a generative adversarial network called SRGAN is used, which employs a perceptual loss function consisting of an adversarial loss, a mean squared error loss, and a content loss for photo-realistic image super-resolution. First results are promising, but challenges remain because these GANs are typically applied to images in which color carries the relevant information, whereas we are mostly interested in structures.
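
A minimal sketch of such a composite SRGAN-style perceptual loss is given below (PyTorch-style; the modules `generator`, `discriminator`, and `vgg_features` are placeholders, and the loss weighting follows the original SRGAN paper rather than necessarily the configuration used in this study):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

def perceptual_loss(generator, discriminator, vgg_features, lr_img, hr_img):
    """Sketch of an SRGAN-style perceptual loss: pixel MSE + VGG content loss + adversarial loss."""
    sr_img = generator(lr_img)  # super-resolved output from the low-resolution input

    # Pixel-wise MSE between super-resolved and ground-truth high-resolution image
    pixel_loss = mse(sr_img, hr_img)

    # Content loss: MSE in a pretrained feature space (e.g. VGG activations)
    content_loss = mse(vgg_features(sr_img), vgg_features(hr_img))

    # Adversarial loss: push the generator to produce images the discriminator labels as real
    logits_fake = discriminator(sr_img)
    adversarial_loss = bce(logits_fake, torch.ones_like(logits_fake))

    # Weighted sum, with the adversarial term down-weighted as in the original SRGAN formulation
    return pixel_loss + content_loss + 1e-3 * adversarial_loss
```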
