Abstract

Beyond its importance for greenhouse gas emission reduction, the remote detection, localization and quantification of gas leaks in industrial facilities remains a challenging problem in industry and research. Consequently, the development of new data processing techniques that derive additional and/or more accurate information about gas leaks from the acquired measurements has gained attention in recent years. This is apparent from the increased use of optical gas imaging (OGI) cameras (specialised mid-wave infrared cameras, e.g. for methane and carbon dioxide), together with image processing and computer vision techniques, to tackle these challenges. In this work, deep-learning-based optical flow methods are evaluated for determining gas velocities from the gas images of an OGI camera. For this, a dataset of simulated and real gas images recorded under controlled and real conditions is used for the supervised training and validation of three state-of-the-art CNNs for optical flow computation: FlowNetC, FlowNet2 and PWC-Net. Classical optical flow methods based on variational formulations are also considered, and the differences in performance and accuracy between classical and deep-learning-based methods are shown. In addition, FlowNet2 is further improved for gas images by fine-tuning the network weights. In the experiments, this approach makes FlowNet2 more reliable and less sensitive to image noise and jitter. For further validation, a set of real gas images acquired in a wind channel and one from a biogas plant, with reference mean gas velocities from a 3D anemometer, are used. The results show that the fine-tuned version of FlowNet2 (FNet2-G) allows computing larger optical flow magnitudes than classical optical flow methods while being less sensitive to image noise under field conditions. The results also show the potential of deep-learning-based approaches for related image processing tasks such as gas segmentation, disparity computation and scene flow in stereo gas images.
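
As a rough illustration of the fine-tuning step mentioned above, the following Python/PyTorch sketch trains a flow network with an average endpoint error (EPE) loss on image pairs with ground-truth flow. The tiny convolutional model, the random tensors and the hyperparameters are placeholders chosen only to keep the example self-contained (the actual work uses FlowNet2 and simulated/real gas image data), so this is a schematic sketch rather than the authors' implementation.

    import torch
    import torch.nn as nn

    # Stand-in for a pretrained optical-flow CNN such as FlowNet2.
    # In practice the real network and its pretrained weights would be
    # loaded from an existing implementation; this tiny module only
    # keeps the sketch runnable.
    model = nn.Sequential(
        nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 input channels: the grayscale image pair
        nn.ReLU(),
        nn.Conv2d(16, 2, kernel_size=3, padding=1),  # 2 output channels: flow components (u, v)
    )

    def epe_loss(flow_pred, flow_gt):
        # Average endpoint error: mean Euclidean distance between predicted
        # and ground-truth flow vectors, the usual supervised optical-flow loss.
        return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small learning rate, typical for fine-tuning

    # Dummy batch standing in for (simulated) gas image pairs with ground-truth flow.
    img1 = torch.rand(4, 1, 128, 128)
    img2 = torch.rand(4, 1, 128, 128)
    flow_gt = torch.rand(4, 2, 128, 128)

    model.train()
    for step in range(100):
        optimizer.zero_grad()
        flow_pred = model(torch.cat([img1, img2], dim=1))  # dense flow predicted from the image pair
        loss = epe_loss(flow_pred, flow_gt)
        loss.backward()
        optimizer.step()

Converting the resulting flow field (in pixels per frame) into a gas velocity estimate additionally requires the camera frame rate and the metric size of a pixel in the observed scene.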
