Abstract

Since the optical signal is attenuated as it propagates along the fiber, the measurement uncertainty at the far end of the fiber increases, which limits the performance of the Raman scattering distributed temperature sensing (RDTS) system. It is therefore essential to study signal denoising methods for such systems. Existing denoising methods, such as the wavelet transform, median filtering, singular value decomposition, and 1DDCNN, still leave room for improvement in parameter tuning, computational complexity, and denoising quality. We therefore propose a novel deep-learning model, the down-sampling double network (DSDN), for high-performance RDTS signal denoising based on down-sampling and a convolutional neural network (CNN). The DSDN model comprises a down-sampling part, a one-dimensional fully convolutional part, and a ResNet part, and it is trained on synthetic data. We designed performance evaluation experiments on both synthetic data and real RDTS data under different noise intensities. After DSDN denoising, the root mean square error (RMSE) of the RDTS signal is reduced to 0.189 °C, better than 2.117 °C for the wavelet transform, 2.077 °C for the median filter, and 0.968 °C for the 1DDCNN model. The mean absolute error (MAE) is reduced to 0.165 °C, better than 1.716 °C for the wavelet transform, 1.402 °C for the median filter, and 0.564 °C for the 1DDCNN. The smoothness metric is reduced to 0.074, better than 1.298 for the wavelet transform, 1.171 for the median filter, and 0.149 for the 1DDCNN. These results show that the proposed DSDN model delivers excellent denoising performance, is robust across different noise intensities, and can be applied to RDTS systems.
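The abstract names three components: a down-sampling part, a one-dimensional fully convolutional part, and a ResNet part. The sketch below illustrates one plausible way these pieces could fit together in PyTorch; the layer counts, channel widths, kernel sizes, and the concatenation-based fusion of the two branches are all assumptions for illustration, not the authors' reported configuration.

```python
# Hypothetical sketch of a DSDN-style denoiser, assuming a strided-convolution
# down-sampler feeding two parallel branches (1-D fully convolutional and
# ResNet-style) whose outputs are fused and up-sampled back to full length.
import torch
import torch.nn as nn


class ResBlock1d(nn.Module):
    """Residual block: two 1-D convolutions with an identity skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))


class DSDN(nn.Module):
    """Assumed structure of the down-sampling double network for RDTS traces."""

    def __init__(self, channels: int = 32, num_res_blocks: int = 4):
        super().__init__()
        # Down-sampling part: a strided convolution halves the trace length,
        # attenuating high-frequency noise before the two branches.
        self.down = nn.Conv1d(1, channels, kernel_size=3, stride=2, padding=1)
        # One-dimensional fully convolutional branch.
        self.fcn = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # ResNet branch.
        self.res = nn.Sequential(
            *[ResBlock1d(channels) for _ in range(num_res_blocks)]
        )
        # Fuse both branches and restore the original trace length.
        self.up = nn.ConvTranspose1d(
            2 * channels, 1, kernel_size=4, stride=2, padding=1
        )

    def forward(self, x):
        # x: (batch, 1, length) noisy RDTS temperature trace
        h = self.down(x)
        fused = torch.cat([self.fcn(h), self.res(h)], dim=1)
        return self.up(fused)


if __name__ == "__main__":
    noisy = torch.randn(8, 1, 1024)   # batch of synthetic noisy traces
    denoised = DSDN()(noisy)
    print(denoised.shape)             # torch.Size([8, 1, 1024])
```

Under this reading, training would minimize a reconstruction loss (e.g., MSE) between the denoised output and the clean synthetic trace, consistent with the RMSE and MAE metrics reported above.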
