Abstract

In fundus image segmentation, more than one professional rater is usually asked to annotate the target structures in each image in order to reduce the diagnostic error rate. However, each rater has an independent level of expertise and experience, so the annotated regions often differ between individuals, producing redundant or irrelevant information. In computer vision, multiple annotations are typically fused by majority voting or by preferring a designated annotator, neither of which can effectively remove intra-annotator noise or inter-annotator redundancy. To address these problems, we study the information bottleneck method for removing annotation noise and extend it to multi-expert annotation of fundus images in order to extract the information that is consistent across different views. We regard this consistent information as the key, task-relevant information on which multiple experts agree. To the best of our knowledge, this is the first model that extracts multi-expert consistency information via multi-view information bottlenecks. Specifically, we use a multi-view information bottleneck to obtain the most concise representation of each view under label supervision. In addition, we propose a novel unsupervised information bottleneck method that maximizes the mutual information between the representations of multiple views, preserving consistent information while eliminating redundant information that is not shared between views. Extensive experiments on several public datasets demonstrate the effectiveness of the proposed model, and its performance is superior to existing techniques.
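To make the two objectives described above concrete, the sketch below shows, under stated assumptions, how a supervised per-view information bottleneck term and an unsupervised cross-view mutual-information term could be implemented in PyTorch. The function names, the Gaussian posterior parameterization, the `beta` and `tau` hyperparameters, and the use of an InfoNCE estimator for the cross-view term are all assumptions for illustration, not the authors' released code.

```python
# A minimal sketch of the two loss terms from the abstract, assuming a
# variational encoder per view and an InfoNCE estimator of cross-view MI.
import torch
import torch.nn.functional as F


def supervised_ib_loss(logits, labels, mu, logvar, beta=1e-3):
    """Per-view information bottleneck: fit the label while compressing the view.

    logits: predictions decoded from the sampled representation z of one view.
    mu, logvar: parameters of the Gaussian posterior q(z|x) for that view.
    """
    task = F.cross_entropy(logits, labels)
    # KL(q(z|x) || N(0, I)) upper-bounds I(z; x) and acts as the compression term.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return task + beta * kl


def cross_view_mi_loss(z_a, z_b, tau=0.1):
    """InfoNCE lower bound on I(z_a; z_b): pulls the representations of the same
    image under two annotators/views together, keeping only shared information."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                     # pairwise similarities in the batch
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)          # matching pairs are the positives
```

In this reading, the total objective would sum the supervised bottleneck term over all views and add the cross-view term over pairs of views, so that each representation stays predictive of the label, compressed with respect to its own annotation, and aligned with the information shared by the other experts.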
