Abstract

Automatic medical image segmentation has advanced significantly with the success of large models trained on massive datasets. However, acquiring and annotating large medical image datasets is often impractical due to the time required, the need for specialized expertise, and patient privacy regulations. As a result, Few-shot Medical Image Segmentation (FSMIS) has become an increasingly compelling research direction. Conventional FSMIS methods typically learn prototypes from support images and apply nearest-neighbor searching to segment the query images. However, a single prototype cannot adequately represent the distribution of each class, which restricts performance. To address this problem, we propose to Generate Multiple Representative Descriptors (GMRD), which comprehensively represent the commonality within the corresponding class distribution. In addition, we design a Multiple Affinity Maps based Prediction (MAMP) module to fuse the multiple affinity maps generated by these descriptors. Furthermore, to address intra-class variation and enhance the representativeness of the descriptors, we introduce two novel losses. Notably, our model adopts a dual-path design to balance foreground and background differences in medical images. Extensive experiments on four publicly available medical image datasets demonstrate that our method outperforms state-of-the-art methods, and detailed analysis verifies the effectiveness of the designed modules.
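To make the prototype-based pipeline concrete, the following is a minimal sketch of the conventional approach and its multi-descriptor extension: a prototype is obtained by masked average pooling over support features, each descriptor yields a cosine-similarity affinity map over the query features, and the maps are fused (here by a simple mean, which is an assumption; the paper's MAMP module and descriptor-generation procedure are not specified in the abstract). All function names and the threshold are hypothetical.

```python
import numpy as np

def masked_average_prototype(feat, mask):
    """Single class prototype: mean of foreground feature vectors.

    feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask.
    """
    fg = feat[:, mask > 0]                      # (C, N) foreground features
    return fg.mean(axis=1)                      # (C,) prototype vector

def affinity_map(feat, descriptor):
    """Cosine similarity between every query feature and one descriptor."""
    C, H, W = feat.shape
    f = feat.reshape(C, -1)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    d = descriptor / (np.linalg.norm(descriptor) + 1e-8)
    return (d @ f).reshape(H, W)                # (H, W) affinity map

def segment_query(query_feat, descriptors, threshold=0.5):
    """Fuse affinity maps from multiple descriptors and threshold.

    Simple mean fusion stands in for the learned MAMP module (assumption).
    """
    maps = np.stack([affinity_map(query_feat, d) for d in descriptors])
    fused = maps.mean(axis=0)
    return (fused > threshold).astype(np.uint8)
```

With a single descriptor this reduces to the conventional single-prototype baseline; passing several descriptors per class illustrates why multiple representatives can cover an intra-class distribution that one mean vector cannot.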
