Abstract

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 204086, “Determining Rig State From Computer Vision Analytics,” by Crispin Chatar, SPE, and Suhas Suresha, Schlumberger, and Laetitia Shao, Stanford University, et al. The paper has not been peer reviewed. While companies cannot agree on a standard definition of “rig state,” they can agree that, as further use is made of remote operations and automation, some form of rig-state calculation is mandatory. A machine-learning model that relies exclusively on videos collected on the rig floor to infer rig states can overcome the limitations of existing methods as the industry moves toward rigs featuring advanced technologies.

Introduction

The complete paper presents a machine-learning pipeline implemented to determine rig state from videos captured on the floor of an operating rig. The pipeline is composed of two parts. First, an annotation pipeline matches each frame of the video data set to a rig state; a convolutional neural network (CNN) is used to match the time of the video with the corresponding sensor data. Second, additional CNNs that capture both spatial and temporal information are trained to estimate rig state from the videos. The models are trained on a data set of 3 million frames on a cloud platform using graphics processing units. The models include a pretrained visual geometry group (VGG) network, a convolutional 3D (C3D) model, and a two-stream model that uses optical flow to capture temporal information.
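To make the frame-level approach concrete, the sketch below shows how a pretrained VGG backbone might be adapted to classify rig states from individual video frames. This is a minimal PyTorch sketch, not the authors' implementation: the paper does not publish code, and the number of rig states (NUM_RIG_STATES) and the replacement classifier head are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

# Hypothetical class count; the paper does not enumerate its rig states.
NUM_RIG_STATES = 8

class RigStateClassifier(nn.Module):
    """Frame-level rig-state classifier built on a pretrained VGG-16 backbone."""

    def __init__(self, num_states: int = NUM_RIG_STATES):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Reuse the ImageNet-pretrained convolutional features and freeze them.
        self.features = vgg.features
        for p in self.features.parameters():
            p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        # Swap VGG's 1000-class ImageNet head for a rig-state head (assumed design).
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(4096, num_states),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames, shape (N, 3, 224, 224)
        return self.classifier(self.pool(self.features(x)))

# Usage: classify a dummy batch of four frames.
model = RigStateClassifier()
frames = torch.randn(4, 3, 224, 224)
logits = model(frames)          # shape (4, NUM_RIG_STATES)
states = logits.argmax(dim=1)   # predicted rig-state index per frame

A frame-level classifier like this captures only spatial information; the C3D and two-stream variants described above extend the same idea with 3D convolutions or an optical-flow stream to model motion across frames.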
