Abstract

Segmenting polyps from colonoscopy images is important in clinical practice because it provides valuable information for the diagnosis of colorectal cancer. However, polyp segmentation remains a challenging task because polyps have camouflage properties and vary greatly in size. Although many polyp segmentation methods have recently been proposed and produce remarkable results, most cannot yield stable results because they lack features with strong discriminative properties and high-level semantic detail. We therefore propose a novel polyp segmentation framework, the contrastive Transformer network (CTNet), with three key components: a contrastive Transformer backbone, a self-multiscale interaction module (SMIM), and a collection information module (CIM). CTNet has excellent learning and generalization abilities. The long-range dependencies and highly structured feature-map space obtained through the contrastive Transformer allow CTNet to effectively localize polyps with camouflage properties. CTNet also benefits from the multiscale information provided by SMIM and the high-resolution feature maps with high-level semantics provided by CIM, and can therefore segment polyps of different sizes accurately. Without bells and whistles, CTNet yields significant gains of 2.3%, 3.7%, 3.7%, 18.2%, and 10.1% over the classical method PraNet on Kvasir-SEG, CVC-ClinicDB, Endoscene, ETIS-LaribPolypDB, and CVC-ColonDB, respectively. In addition, CTNet shows advantages in camouflaged object detection and defect detection. The code is available at https://github.com/Fhujinwu/CTNet.
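To make the composition of the three components concrete, the following is a minimal, hypothetical sketch of a backbone-to-SMIM-to-CIM pipeline. All module names, channel sizes, and internal operations here are illustrative assumptions, not the actual CTNet implementation; refer to the linked repository for the authors' code.

```python
# Hypothetical sketch only: a simplified backbone -> SMIM -> CIM pipeline
# mirroring the structure described in the abstract. Placeholder modules and
# channel sizes are assumptions, not the official CTNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfMultiscaleInteraction(nn.Module):
    """Placeholder SMIM: mixes backbone features at several dilation rates."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        # Concatenate multiscale responses, then fuse with a 1x1 convolution.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class CollectionInformation(nn.Module):
    """Placeholder CIM: recovers a high-resolution map and predicts the mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, x, out_size):
        # Upsample to the input resolution before the segmentation head.
        x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        return self.head(self.refine(x))


class CTNetSketch(nn.Module):
    """Backbone -> SMIM -> CIM, as outlined in the abstract (sketch only)."""

    def __init__(self, backbone: nn.Module, channels: int = 64):
        super().__init__()
        self.backbone = backbone  # stands in for the contrastive Transformer encoder
        self.smim = SelfMultiscaleInteraction(channels)
        self.cim = CollectionInformation(channels)

    def forward(self, image):
        feats = self.backbone(image)              # e.g. B x C x H/16 x W/16
        feats = self.smim(feats)                  # multiscale interaction
        return self.cim(feats, image.shape[-2:])  # full-resolution mask logits


if __name__ == "__main__":
    # Toy convolutional stand-in for the Transformer backbone, for demonstration.
    backbone = nn.Sequential(nn.Conv2d(3, 64, 16, stride=16), nn.ReLU())
    model = CTNetSketch(backbone)
    mask_logits = model(torch.randn(1, 3, 352, 352))
    print(mask_logits.shape)  # torch.Size([1, 1, 352, 352])
```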
