This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper IPTC 19714, “Deep Learning for Monitoring Well-Testing Burner Efficiency,” by Darine Mansour, SPE, Hakim Arabi, and Yingwei Yu, SPE, Schlumberger, et al., prepared for the 2020 International Petroleum Technology Conference, Dhahran, Saudi Arabia, 13-15 January. The paper has not been peer reviewed. Copyright 2020 International Petroleum Technology Conference. Reproduced by permission.

During well-testing operations, incorrect burner combustion can adversely affect the environment and personal safety. Combustion efficiency is assessed by personnel who observe the flame, a practice that lacks consistency and poses challenges, including environmental and safety considerations and issues with data. In the complete paper, the authors propose a solution that uses a deep neural network that learns from flame videos to assess the quality of the combustion. The results are promising and show that this solution is a good candidate for real-time burner-efficiency monitoring, automatic alarm triggering, and optimization.

Introduction

Automating the evaluation of combustion efficiency would promote consistent assessment and timely reaction to undesirable combustion. In field operations, cameras are positioned for a direct view of the burner flames. The authors use a data set of video images captured by these cameras and labeled by domain experts as showing acceptable or undesirable combustion. The goal of the study is to automate human monitoring of the combustion by processing these images and classifying them automatically into those two categories.

A supervised learning approach is applied to the problem. The approach consists of extracting the flame features from each image and feeding the features to a classifier that categorizes the combustion.
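The supervised pipeline just described, extracting flame features from each frame and feeding them to a classifier, can be sketched in miniature. Everything below is illustrative rather than the paper's actual method: the handcrafted features (per-channel means plus a bright-pixel fraction), the logistic-regression classifier, and the synthetic frames are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def extract_flame_features(img):
    """Hypothetical handcrafted features: per-channel mean intensity
    and the fraction of bright ('flame-like') pixels."""
    means = img.mean(axis=(0, 1))               # mean of each color channel
    bright = (img.mean(axis=2) > 0.7).mean()    # fraction of bright pixels
    return np.append(means, bright)

def train_classifier(X, y, lr=0.5, epochs=500):
    """Logistic-regression classifier trained by gradient descent."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probability
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
# Synthetic stand-ins for labeled frames: "acceptable" flames are brighter.
good = rng.uniform(0.6, 1.0, (20, 8, 8, 3))
bad = rng.uniform(0.0, 0.4, (20, 8, 8, 3))
X = np.array([extract_flame_features(f) for f in np.concatenate([good, bad])])
y = np.array([1] * 20 + [0] * 20)

w, b = train_classifier(X, y)
pred = (X @ w + b > 0).astype(int)
print((pred == y).mean())  # training accuracy on the toy data
```

The point of the sketch is the division of labor: a feature extractor reduces each frame to a small vector, and a separate classifier is trained on those vectors and the expert labels.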
The classifier is trained using the extracted features and the labeled combustion images as input.

The traditional approach to feature extraction is manual feature engineering, in which handcrafted features are extracted using image-processing algorithms and domain knowledge. The process is tedious and time-consuming, and much effort is required to extract all relevant features under varying contextual and environmental conditions. The extracted features are problem-dependent, and the algorithms may need to be revisited to adapt to new data that contain new features or new representations of existing features.

Approach

To achieve classification, a convolutional neural network (CNN) is trained on the data set of images labeled by domain experts. A CNN with an architecture similar to VGG-16, which learns to extract powerful features from natural images, is fine-tuned on the image data set captured by the well-testing cameras.
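The transfer-learning idea, reusing a network pretrained on natural images and adapting it to flame classification, can be sketched in toy form. In a common variant of fine-tuning, the pretrained convolutional base is frozen and only a new classification head is trained. In the sketch below, the frozen base is just a fixed random projection standing in for pretrained VGG-16 layers, and all data and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen, pretrained convolutional base: a fixed projection
# whose weights are NOT updated during fine-tuning. In practice this would
# be the pretrained network's convolutional layers.
W_frozen = rng.standard_normal((48, 16)) * 0.1

def frozen_base(frames):
    """Map flattened frames to feature vectors through the frozen base."""
    flat = frames.reshape(len(frames), -1)   # (n, 4*4*3) = (n, 48)
    return np.maximum(flat @ W_frozen, 0.0)  # ReLU activations

def fine_tune_head(feats, y, lr=0.5, epochs=300):
    """Train only the new classification head on the extracted features."""
    w = np.zeros(feats.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
        w -= lr * feats.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Toy labeled frames: acceptable-combustion frames are brighter on average.
good = rng.uniform(0.6, 1.0, (30, 4, 4, 3))
bad = rng.uniform(0.0, 0.4, (30, 4, 4, 3))
frames = np.concatenate([good, bad])
labels = np.array([1] * 30 + [0] * 30)

feats = frozen_base(frames)         # forward pass through the frozen base
w, b = fine_tune_head(feats, labels)
pred = (feats @ w + b > 0).astype(int)
print((pred == labels).mean())      # training accuracy on the toy data
```

The design choice this illustrates is why fine-tuning helps: the expensive, general-purpose feature extractor is inherited from pretraining, so only a small task-specific head must be learned from the comparatively small set of expert-labeled flame images.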