Abstract

Endoscopic optical coherence tomography (OCT) can image internal lumens non-invasively, but it is prone to saturation artifacts caused by strongly reflective structures. In this study, we propose ATN-Res2Unet, a deep learning network that suppresses saturation artifacts in endoscopic OCT images by combining multi-scale perception, multi-attention mechanisms, and frequency-domain filters. Because ground truth is difficult to obtain in endoscopic OCT, we also propose a method for constructing training data pairs. Experiments on in vivo data confirm that ATN-Res2Unet reduces diverse artifacts while preserving structural information. Compared with prior studies, average quantitative metrics improve by 45.4–83.8%. To our knowledge, this is the first study to use deep learning to remove artifacts from endoscopic OCT images, and it shows considerable potential for clinical applications.
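The abstract names the key ingredients of the network (multi-scale perception and attention inside a Res2Net/U-Net-style backbone) without giving code. As a rough illustration only, the PyTorch sketch below shows a generic Res2Net-style multi-scale block with squeeze-and-excitation channel attention; the class name `Res2AttentionBlock` and all parameter choices are hypothetical assumptions for illustration and are not taken from the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): a Res2Net-style multi-scale block
# with squeeze-and-excitation channel attention, the kind of building block a
# network like ATN-Res2Unet might stack inside a U-Net encoder/decoder.
import torch
import torch.nn as nn


class Res2AttentionBlock(nn.Module):
    def __init__(self, channels: int, scales: int = 4, reduction: int = 8):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3 conv per scale branch (the first branch is passed through untouched).
        self.branch_convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1) for _ in range(scales - 1)
        )
        # Squeeze-and-excitation style channel attention.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        outputs = [splits[0]]  # first split: identity branch
        prev = splits[0]
        for conv, split in zip(self.branch_convs, splits[1:]):
            # Hierarchical residual connections give each branch a larger receptive field.
            prev = self.act(conv(split + prev))
            outputs.append(prev)
        y = torch.cat(outputs, dim=1)
        y = y * self.attention(y)  # reweight channels
        return self.act(y + x)     # residual connection


if __name__ == "__main__":
    block = Res2AttentionBlock(channels=32)
    dummy = torch.randn(1, 32, 64, 64)  # e.g. a feature map from an OCT B-scan
    print(block(dummy).shape)           # torch.Size([1, 32, 64, 64])
```

In such a design, the split-and-cascade branches supply the multi-scale perception, while the attention weights let the network emphasize channels that respond to artifact regions; the paper additionally incorporates frequency-domain filters, which are omitted from this minimal sketch.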
