Abstract

Hydraulic fracturing is a technique for extracting oil and gas from shale formations, and achieving a uniform proppant concentration along the fracture is key to its productivity. Recently, various model predictive control schemes have been proposed to achieve this objective, but such controllers require an accurate and computationally efficient model, which is difficult to obtain given the complexity of the process and the uncertainties in the rock formation properties. In this article, we design a model-free, data-based reinforcement learning controller that learns an optimal control policy through interactions with the process. The deep reinforcement learning (DRL) controller is based on the Deep Deterministic Policy Gradient algorithm, which combines a deep Q-network with an actor-critic framework. Additionally, we use dimensionality reduction and transfer learning to speed up the learning process. We show that, despite the complex nature of the process, the controller learns an optimal policy that yields a uniform proppant concentration while satisfying various input constraints.
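The core update the abstract refers to can be sketched as follows. This is a minimal, illustrative DDPG step with linear actor and critic standing in for the deep networks, and a hypothetical one-transition toy environment in place of the fracturing process; the dimensions, learning rates, and dynamics are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 3, 1
gamma, tau, lr = 0.99, 0.01, 1e-2

# Actor mu(s) and critic Q(s, a) as linear maps, plus slowly updated
# target copies -- the DQN-style stabilization trick DDPG borrows.
W_actor = rng.normal(scale=0.1, size=(action_dim, state_dim))
w_critic = rng.normal(scale=0.1, size=state_dim + action_dim)
W_actor_t, w_critic_t = W_actor.copy(), w_critic.copy()

def mu(W, s):
    """Deterministic policy: action from state."""
    return W @ s

def Q(w, s, a):
    """Action-value estimate for a state-action pair."""
    return w @ np.concatenate([s, a])

def ddpg_step(s, a, r, s2):
    """One DDPG update from a single transition (s, a, r, s')."""
    global W_actor, w_critic, W_actor_t, w_critic_t
    # Critic: move Q(s, a) toward the bootstrapped target
    # y = r + gamma * Q'(s', mu'(s')) computed with the target networks.
    y = r + gamma * Q(w_critic_t, s2, mu(W_actor_t, s2))
    td = Q(w_critic, s, a) - y
    w_critic -= lr * td * np.concatenate([s, a])
    # Actor: deterministic policy gradient, ascend dQ/da * dmu/dW.
    dq_da = w_critic[state_dim:]  # gradient of the linear Q w.r.t. action
    W_actor += lr * np.outer(dq_da, s)
    # Soft (Polyak) target updates.
    W_actor_t = tau * W_actor + (1 - tau) * W_actor_t
    w_critic_t = tau * w_critic + (1 - tau) * w_critic_t
    return td

# One transition from a toy stand-in environment (hypothetical dynamics).
s = rng.normal(size=state_dim)
a = mu(W_actor, s) + 0.1 * rng.normal(size=action_dim)  # exploration noise
r, s2 = -float(a[0] ** 2), rng.normal(size=state_dim)
td_error = ddpg_step(s, a, r, s2)
print("TD error:", float(td_error))
```

In the full method the linear maps above are deep networks trained from a replay buffer of many transitions, which is what makes the approach model-free and data-based.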
