Abstract

Polyp segmentation is important for the early diagnosis and treatment of colorectal cancer. Because polyps vary in shape, size, color, and texture, accurate polyp segmentation is very challenging. One promising solution is to model the contextual relation for each pixel. However, previous methods focus only on learning the dependencies between positions within an individual image and ignore the contextual relations across different images. In this paper, we propose a memory-based feature enhancement module to capture cross-image contextual relations. Specifically, we first present a polyp-centric representation. Then a semantic memory is designed to extract polyp prototypes across different images. The feature at each position can be further enhanced by the contextual embeddings stored in the semantic memory. The enhanced feature is propagated to the features of the previous levels as multi-scale guidance. Experimental results show that our method achieves better performance than other state-of-the-art methods.
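The memory read-out described above can be viewed as attention over a bank of stored prototypes: each pixel feature queries the semantic memory and receives a weighted combination of prototype embeddings. The following is a minimal NumPy sketch of that idea; the function name, shapes, and residual combination are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def enhance_with_memory(feat, memory):
    # feat:   (N, C) flattened per-pixel features of one image
    # memory: (K, C) prototype embeddings accumulated across images
    # (hypothetical sketch: scaled dot-product attention read-out)
    logits = feat @ memory.T / np.sqrt(feat.shape[1])       # (N, K) similarities
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)           # softmax over prototypes
    context = weights @ memory                              # (N, C) read-out
    return feat + context                                   # residual enhancement

rng = np.random.default_rng(0)
feat = rng.standard_normal((6, 8))     # 6 pixels, 8-dim features
memory = rng.standard_normal((4, 8))   # 4 cross-image prototypes
out = enhance_with_memory(feat, memory)
```

The enhanced features `out` keep the input shape, so they can be upsampled or concatenated into earlier decoder levels as the multi-scale guidance the abstract mentions.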
