Abstract

Image stitching is a classical yet challenging computer vision task whose goal is to combine multiple images with overlapping regions into a single natural-looking, high-resolution image free of ghosting and seams. This article aims to enlarge the field of view in gastroenteroscopy and reduce the missed-detection rate. To this end, an improved deep framework for unsupervised panoramic stitching of gastrointestinal images is proposed. In addition, a preprocessing step for aberration correction of monocular endoscope images is introduced, and a C2f module is added to the image reconstruction network to strengthen its feature extraction ability. A comprehensive real-image dataset, GASE-Dataset, is proposed to establish an evaluation benchmark and a training framework for unsupervised deep gastrointestinal image stitching. Experimental results show that the MSE, RMSE, PSNR, SSIM and RMSE_SW metrics are improved while the stitching time remains within an acceptable range, and the method outperforms traditional image stitching approaches. Furthermore, improvements are proposed to address the lack of annotated data, the insufficient generalization ability, and the limited overall performance of supervised-learning-based image stitching schemes. These improvements provide a valuable aid for gastrointestinal examination.
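The abstract reports improvements in MSE, RMSE, PSNR and SSIM. As a point of reference only (not the paper's code, and omitting the sliding-window RMSE_SW variant, whose exact window definition is not given here), a minimal sketch of how these standard full-reference metrics can be computed for a stitched image against an aligned reference, assuming uint8 RGB inputs and the scikit-image library:

```python
# Minimal sketch (illustrative, not the authors' evaluation code):
# compute MSE, RMSE, PSNR and SSIM between two aligned uint8 RGB images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def stitching_metrics(reference: np.ndarray, stitched: np.ndarray) -> dict:
    """Return MSE, RMSE, PSNR and SSIM for two same-sized uint8 RGB images."""
    ref = reference.astype(np.float64)
    out = stitched.astype(np.float64)
    mse = float(np.mean((ref - out) ** 2))          # mean squared error
    rmse = float(np.sqrt(mse))                       # root mean squared error
    psnr = peak_signal_noise_ratio(reference, stitched, data_range=255)
    ssim = structural_similarity(reference, stitched,
                                 channel_axis=-1, data_range=255)
    return {"MSE": mse, "RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```

Higher PSNR and SSIM and lower MSE/RMSE indicate a stitched result closer to the reference; the function names and data-range assumptions above are illustrative choices, not taken from the paper.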
