Abstract
Medical image segmentation annotation suffers from annotator variation due to differences in annotators' expertise and the inherent blurriness of medical images. In practice, using opinions from multiple annotators can effectively reduce the impact of such annotator-related biases. Meanwhile, it is common practice in deep learning to fuse multiple annotations through methods such as majority voting, but these methods ignore the rich information about annotator preferences ingrained in the original multi-annotator annotations. To address this issue, we propose an annotator variation and annotator preference (AVAP) modeling framework for medical image segmentation with multiple annotations, which consists of three parts. First, a widely used encoder-decoder backbone network is used to extract feature maps from the image. Second, an annotator variation modeling (AVM) module is devised to estimate the annotation variation among multiple annotators by treating the multiple annotations as a multi-class segmentation problem. Third, an annotator preference modeling (APM) module estimates each annotator's preference-involved segmentation through annotator encoding and dynamic filter learning. Experiments on the RIGA benchmark with multiple annotations show that our AVAP framework outperforms a range of state-of-the-art (SOTA) multi-annotation segmentation methods. Furthermore, we are the first to introduce dynamic filter learning into annotator preference modeling.
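The abstract does not give implementation details of the APM module, so the following is only a minimal sketch of what combining annotator encoding with dynamic filter learning could look like in PyTorch. All names (e.g., AnnotatorPreferenceHead, filter_gen) and design choices (an embedding per annotator ID, an MLP that generates per-annotator 1x1 convolution weights applied to shared backbone features) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: annotator-preference head via dynamic filter learning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AnnotatorPreferenceHead(nn.Module):
    """Produces an annotator-specific segmentation from shared features.

    Each annotator ID is mapped to an embedding; a small MLP turns that
    embedding into the weights and bias of a 1x1 convolution (a dynamic
    filter), which is applied to the backbone feature map.
    """

    def __init__(self, num_annotators: int, feat_channels: int,
                 num_classes: int, embed_dim: int = 32):
        super().__init__()
        self.feat_channels = feat_channels
        self.num_classes = num_classes
        self.annotator_embed = nn.Embedding(num_annotators, embed_dim)
        # MLP that generates the 1x1 conv weights (+ bias) per annotator.
        self.filter_gen = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_classes * feat_channels + num_classes),
        )

    def forward(self, feats: torch.Tensor, annotator_id: torch.Tensor):
        # feats: (B, C, H, W); annotator_id: (B,) long tensor
        b, c, h, w = feats.shape
        params = self.filter_gen(self.annotator_embed(annotator_id))  # (B, K*C + K)
        weight = params[:, : self.num_classes * c].view(b, self.num_classes, c, 1, 1)
        bias = params[:, self.num_classes * c:]                       # (B, K)
        # Grouped-conv trick: apply a different dynamic filter to each sample.
        out = F.conv2d(feats.reshape(1, b * c, h, w),
                       weight.reshape(b * self.num_classes, c, 1, 1),
                       bias=bias.reshape(-1), groups=b)
        return out.view(b, self.num_classes, h, w)


if __name__ == "__main__":
    head = AnnotatorPreferenceHead(num_annotators=6, feat_channels=64, num_classes=3)
    feats = torch.randn(2, 64, 32, 32)   # shared encoder-decoder features
    ann = torch.tensor([0, 4])           # which annotator's preference to imitate
    print(head(feats, ann).shape)        # torch.Size([2, 3, 32, 32])
```

In this sketch the per-annotator filters are generated at inference time from the annotator embedding, so a single shared backbone can reproduce each annotator's preferred delineation without training a separate decoder per annotator.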