Abstract

The Daily Production Optimization (DPO) problem is the task of maximizing hydrocarbon production subject to operational constraints. Handling uncertainty in model structure and parameters is essential to the usefulness of the solution: ignoring it will most likely render the solution infeasible or suboptimal for the plant. We propose a data-driven methodology that uses state and output measurements from the plant to iteratively update the Optimal Control Problem (OCP) used to control the plant. The goal of the method is to tune the OCP so that its solution approaches an optimum of the plant as the parameters are updated. A Reinforcement Learning technique is used to update the optimization formulation.
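The iterative scheme described above can be illustrated with a minimal sketch. All names and the toy objective here are hypothetical (not from the paper), and a simple measurement-based gradient step stands in for the paper's Reinforcement Learning update; it only shows the loop structure: solve the model-based OCP, apply the control to the plant, and use plant measurements to update the OCP parameters toward the plant optimum.

```python
# Hypothetical sketch of iterative OCP parameter updating from plant
# measurements. A gradient step stands in for the paper's RL update.

def plant(u):
    """True (unknown) plant objective; its optimum is at u = 3."""
    return -(u - 3.0) ** 2 + 10.0

def solve_ocp(theta):
    """Model-based OCP: the model -(u - theta)^2 + 10 is maximized at u = theta."""
    return theta

def plant_gradient(u, eps=1e-4):
    """Finite-difference sensitivity estimated from plant measurements."""
    return (plant(u + eps) - plant(u - eps)) / (2.0 * eps)

theta = 1.0   # initial (mismatched) model parameter
alpha = 0.2   # update step size
for _ in range(50):
    u = solve_ocp(theta)       # control input from the current OCP
    g = plant_gradient(u)      # measured plant sensitivity at that input
    theta += alpha * g         # update the OCP parameter from plant data

# theta now lies near the plant optimum u* = 3, so the OCP solution
# has converged toward an optimum of the plant, not just of the model.
```

The point of the sketch is that the updates are driven by plant measurements rather than by the (imperfect) model alone, which is what lets the OCP solution converge toward a plant optimum despite model-plant mismatch.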
