Image stitching is a traditional but challenging computer vision task. The goal is to combine multiple images with overlapping regions into a single, natural-looking, high-resolution image free of ghosting and seams. This article aims to increase the field of view of gastroenteroscopy and reduce the missed detection rate. To this end, an improved deep framework for unsupervised panoramic image stitching of the gastrointestinal tract is proposed. In addition, a preprocessing step for aberration correction of monocular endoscope images is introduced, and a C2f module is added to the image reconstruction network to strengthen its feature extraction capability. A comprehensive real-image dataset, GASE-Dataset, is proposed to establish an evaluation benchmark and training framework for unsupervised deep gastrointestinal image stitching. Experimental results show improvements in the MSE, RMSE, PSNR, SSIM and RMSE_SW metrics, while the stitching time remains within an acceptable range. Compared with traditional image stitching methods, the proposed method achieves better performance. Furthermore, the proposed improvements address the lack of annotated data, limited generalization ability and insufficient overall performance of supervised-learning-based image stitching schemes. These improvements provide valuable assistance in gastrointestinal examination.
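As a minimal sketch of how the reported quality metrics could be computed, the snippet below evaluates a stitched image against a reference using MSE, RMSE, PSNR and SSIM via scikit-image; the abstract does not define RMSE_SW, so the sliding-window RMSE shown here (mean RMSE over non-overlapping patches) and the helper names `rmse_sw` and `evaluate` are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: full-reference stitching quality metrics (assumed 8-bit images).
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def rmse_sw(ref, test, window=16):
    """Assumed sliding-window RMSE: mean RMSE over non-overlapping patches."""
    h, w = ref.shape[:2]
    vals = []
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            a = ref[y:y + window, x:x + window].astype(np.float64)
            b = test[y:y + window, x:x + window].astype(np.float64)
            vals.append(np.sqrt(np.mean((a - b) ** 2)))
    return float(np.mean(vals))

def evaluate(ref, test):
    """Return the metric set named in the abstract for a reference/test pair."""
    mse = mean_squared_error(ref, test)
    return {
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "PSNR": peak_signal_noise_ratio(ref, test, data_range=255),
        "SSIM": structural_similarity(ref, test, data_range=255, channel_axis=-1),
        "RMSE_SW": rmse_sw(ref, test),
    }
```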