Abstract

The timely detection of small leaks from liquid pipelines poses a significant challenge for pipeline operations. One technology considered for continual monitoring is distributed temperature sensing (DTS), which uses a fiber-optic cable to provide distributed temperature measurements along a pipeline segment. This technique yields accurate temperature measurements over long distances. Unexpected temperature deviations at any given location can indicate various physical changes in the environment, including contact with a heated hydrocarbon due to a pipeline leak. Because the signals produced by pipeline leaks may not rise far above the noise in the DTS measurements, care must be taken to configure the system so that it detects small leaks while rejecting non-leak temperature anomalies. Many factors influence the frequency and intensity of the backscattered optical signal, which introduces noise into the fine-grained temperature sensing data. The DTS system must therefore be tuned to the nominal temperature profile along the pipe segment. This customization enables high sensitivity and allows different leak detection thresholds at various locations based on normal temperature patterns. However, such segment-specific tuning can demand substantial resources and time, and the configuration exercise may have to be repeated as pipeline operating conditions change. There is thus a strong need and interest in advancing existing DTS processing techniques to enable the detection of leaks that today go undetected, either because their signal response is too close to the noise floor or because achieving positive results requires significant resources. This paper discusses recent work focused on using machine learning (ML) techniques to detect leak signatures.
Initial proof-of-concept results provide a more robust methodology for detecting leaks and allow for the detection of smaller leaks than are currently detectable by typical DTS systems, with low false alarm rates. A key benefit of ML approaches is that the system can “learn” a given pipeline on its own, without the need to spend resources on pipeline segment-specific tuning. The potential for a self-taught system is a powerful concept, and this paper discusses key initial findings from applying ML-based techniques to optimize the leak detection capabilities of an existing DTS system.
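The per-location thresholding idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; it simply shows how baselines learned from historical DTS traces allow each location along the fiber to carry its own anomaly threshold, rather than one global threshold for the whole segment. The function names and the 3-sigma threshold are illustrative assumptions.

```python
# Illustrative sketch (not the paper's actual method): learn a per-location
# temperature baseline from historical DTS scans, then flag readings that
# deviate from their own location's baseline by more than k standard
# deviations. This mimics location-specific leak detection thresholds.
from statistics import mean, stdev

def fit_baseline(history):
    """history: list of temperature traces (one per scan), each a list of
    readings indexed by position along the fiber.
    Returns a per-location list of (mean, std) baselines."""
    n_locations = len(history[0])
    baselines = []
    for i in range(n_locations):
        samples = [trace[i] for trace in history]
        baselines.append((mean(samples), stdev(samples)))
    return baselines

def flag_anomalies(trace, baselines, k=3.0):
    """Return the indices of locations whose reading deviates from that
    location's baseline by more than k standard deviations."""
    flags = []
    for i, temp in enumerate(trace):
        mu, sigma = baselines[i]
        if sigma > 0 and abs(temp - mu) > k * sigma:
            flags.append(i)
    return flags
```

In practice an ML-based system would learn far richer normal-behavior models than a per-location mean and standard deviation, but the principle of comparing each location against its own learned normal pattern is the same.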
