Abstract

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 174799, “Geology-Driven Estimated-Ultimate-Recovery Prediction Using Deep Learning,” by L. Crnkovic-Friis and M. Erlandson, Peltarion Energy, prepared for the 2015 SPE Annual Technical Conference and Exhibition, Houston, 28–30 September. The paper has not been peer reviewed.

The authors present a geology-driven deep-learning estimated-ultimate-recovery (EUR) prediction model for multistage hydraulically fractured horizontal wells in tight gas and oil reservoirs. The novel approach was made possible by recent developments in the field of deep learning and by the use of big data (more than 200,000 geological data points and more than 800 wells). A deep neural network (DNN) was trained to learn the relationship between geology and the average EUR (estimated by decline analysis).

Introduction

Currently, the consensus in the literature and among geologists is that it is impossible to achieve a good hydrocarbon-in-place estimate for tight or unconventional resources by entering geological parameters into an equation. Volumetric analysis, for instance, gives only a very crude estimate and often does not work at all. This consensus is not wrong, and yet highly skilled teams of geologists and exploration engineers manage to make good guesses as to where the sweet spots are, even in regions where they have barely adequate geological data. The answer to this apparent contradiction is that a simple mathematical model, such as a single equation, does not come close to capturing the multilayered complexity and abstract reasoning needed to solve the problem.

A DNN is a computational model composed of multiple processing layers that learn representations of data at multiple levels of abstraction.
It uses layers of interconnected artificial neurons, modeled after the columns of neurons found in the brain’s cortex, the part of the brain that handles complex tasks, such as vision and language, that require many levels of abstraction. DNNs use an iterative algorithm to determine how the model should change the internal parameters used to compute the representation in each layer from the representation in the preceding layer. Training is performed by presenting raw input data to the DNN along with the desired output. The model then learns how to adjust its internal parameters to minimize the error between the desired output and the actual output. This process is repeated, often millions of times, until the error stops decreasing. Hydrocarbon estimates made on the basis of geological data pose a similar problem with a corresponding solution: before one can make an educated guess about the hydrocarbons in place, one must work several levels of abstraction above the raw geological data.

Data

The data used to train and test the model consist of two data sets, one with geological data and one with well-production data. The region covered was a part of the Eagle Ford shale and included both oil and dry-gas wells. The geological parameters primarily used in the model were thickness, porosity, bulk density, vitrinite reflectance, water saturation, total organic carbon, and brittleness, all of which can be estimated at an early stage when evaluating a new play. When the geological data are derived from a small number of well logs and interpolated into a grid, the resulting grids have low resolution and are quite smooth. The combination of all geological parameters can, however, provide a much-higher-resolution map of the predicted variable. The final geological data set contained more than 200,000 data points.
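As a rough illustration of the training procedure described above, and not the authors' actual architecture, a minimal feed-forward network with one hidden layer can be sketched in Python with NumPy. The seven input features stand in for the geological parameters listed in the paper, but the data here are synthetic and the network size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 7 geological features per grid point
# (thickness, porosity, bulk density, vitrinite reflectance,
# water saturation, TOC, brittleness) and a nonlinear "EUR" target.
n_samples, n_features, n_hidden = 200, 7, 16
X = rng.normal(size=(n_samples, n_features))
y = np.tanh(X @ rng.normal(size=n_features)) + 0.1 * rng.normal(size=n_samples)

# One hidden layer with tanh activation, linear output.
W1 = rng.normal(scale=0.5, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)   # learned intermediate representation
    return h, h @ W2 + b2      # prediction

lr = 0.05
_, pred = forward(X)
initial_error = np.mean((pred - y) ** 2)

# Iterative training: backpropagate the squared error and adjust the
# internal parameters so the error between desired and actual output shrinks.
for _ in range(500):
    h, pred = forward(X)
    grad_out = 2.0 * (pred - y) / n_samples      # d(MSE)/d(pred)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum()
    grad_z1 = np.outer(grad_out, W2) * (1 - h ** 2)
    grad_W1 = X.T @ grad_z1
    grad_b1 = grad_z1.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

_, pred = forward(X)
final_error = np.mean((pred - y) ** 2)  # lower than initial_error
```

The paper's actual model is far deeper and is trained on real interpolated geological grids; this sketch only demonstrates the iterative error-minimization loop the article describes.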
The well data consisted of production logs for more than 800 wells that had been producing for several years, so a reliable decline-analysis figure for the EUR could be obtained.
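The EUR labels used to train the model come from decline analysis of these multiyear production logs. The paper summary does not specify which decline model was used; as an assumption for illustration, the classic Arps exponential model gives a closed-form EUR estimate:

```python
def arps_exponential_remaining(q_i, decline_rate, q_limit):
    """Remaining recovery under Arps exponential decline q(t) = q_i * exp(-D*t).

    Integrating the rate from the current rate q_i down to the economic
    limit q_limit gives N_p = (q_i - q_limit) / D.
    Units: rates in volume/year, decline_rate D in 1/year.
    """
    return (q_i - q_limit) / decline_rate

# Hypothetical well: current rate 120,000 bbl/yr, 30%/yr nominal decline,
# 6,000 bbl/yr economic limit.
remaining = arps_exponential_remaining(120_000, 0.30, 6_000)  # 380,000 bbl
```

EUR is then the cumulative production to date plus this remaining volume; hyperbolic or harmonic Arps variants are common alternatives for tight wells, where early decline is steeper than exponential.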
