Abstract

The convolution kernel was originally developed and applied in the domain of natural language machine learning. In the original linguistic study, the convolution kernel decomposed words into parts and evaluated those parts using a simple kernel function. This inspired us to apply the convolution kernel method to data from permanent downhole gauges (PDGs) by decomposing the pressure transient into a series of pressure responses to previous flow rate change events. In this study, the data mining process was conducted in two stages, namely learning and prediction. In the learning stage, the PDG data were used to train the convolution-kernel-based data mining algorithm until the algorithm converged. After convergence, the reservoir model was obtained implicitly in the form of polynomials in the high-dimensional Hilbert space defined by the convolution kernel function. In the prediction stage, the data mining algorithm generated a pressure prediction for an arbitrary given flow rate history (usually a constant flow rate history for simplicity). This flow rate history and the corresponding pressure prediction revealed the reservoir model underlying the variable PDG data. Thirteen synthetic cases representing different well/reservoir models and three real field cases were used to test this approach. The method recovered the reservoir model successfully in all cases.
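To make the two-stage workflow concrete, the following is a minimal, hypothetical sketch of kernel-based learning and prediction on rate/pressure history data. It is not the paper's implementation: the feature construction (recent flow-rate change events per sample time), the polynomial kernel standing in for the convolution kernel, the regularization parameter, and all data values are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of the two-stage workflow described in the abstract:
# (1) learn a kernel regression model from flow-rate/pressure history,
# (2) predict the pressure response to a single constant-rate history.
# The feature construction and polynomial kernel are simplified stand-ins
# for the convolution kernel used in the study.

def history_features(times, event_times, event_rate_changes, n_events=5):
    """For each sample time, collect the most recent flow-rate change
    events as (elapsed time, rate change) pairs, zero-padded to n_events."""
    feats = []
    for t in times:
        rows = [(t - te, dq) for te, dq in zip(event_times, event_rate_changes) if te <= t]
        rows = rows[-n_events:]
        pad = [(0.0, 0.0)] * (n_events - len(rows))
        feats.append(np.array(pad + rows).ravel())
    return np.array(feats)

def poly_kernel(A, B, degree=2):
    """Polynomial kernel; a placeholder for the convolution kernel."""
    return (A @ B.T + 1.0) ** degree

def train(X, p, lam=1e-3):
    """Learning stage: kernel ridge regression, solve (K + lam*I) alpha = p."""
    K = poly_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), p)

def predict(X_train, alpha, X_new):
    """Prediction stage: evaluate the learned model at new histories."""
    return poly_kernel(X_new, X_train) @ alpha

# --- usage with synthetic data (all values hypothetical) ---
event_times = np.array([0.0, 10.0, 25.0])        # times of rate-change events
rate_changes = np.array([500.0, -200.0, 300.0])  # step changes in flow rate
t_train = np.linspace(0.5, 40.0, 80)
p_train = 3000.0 - 0.5 * np.sqrt(t_train)        # placeholder pressure data

X_train = history_features(t_train, event_times, rate_changes)
alpha = train(X_train, p_train)

# A constant-rate history (one rate change at t = 0) reveals the learned
# reservoir response implied by the trained model.
t_new = np.linspace(0.5, 40.0, 40)
X_new = history_features(t_new, np.array([0.0]), np.array([500.0]))
p_pred = predict(X_train, alpha, X_new)
```

In this sketch the "reservoir model" is never written out explicitly; it lives in the learned coefficients `alpha` and the kernel, and is revealed only by querying the model with a simple flow rate history, mirroring the prediction stage described above.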

