Abstract

Medical image segmentation plays an important role in the diagnosis and treatment of diseases. Fully supervised deep learning methods for medical image segmentation require large datasets with pixel-level annotations, but annotating medical images is time-consuming and labor-intensive. Motivated by this, we study a medical image segmentation algorithm based on image-level annotations. Existing weakly supervised semantic segmentation algorithms based on image-level annotations rely on class activation maps (CAM) and saliency maps generated by preprocessing as pseudo-labels to supervise the network. However, the CAMs produced by many existing methods are often over-activated or under-activated, and the saliency maps generated by preprocessing are typically coarse and cannot be updated online, which degrades segmentation quality. To address this, we propose a weakly supervised medical image segmentation algorithm based on multi-task learning, which uses graph convolution to correct improper activation and similarity learning to update pseudo-labels online, improving segmentation performance. We conducted extensive experiments on ISIC2017 skin lesion images to validate the proposed method. Our method achieves a Dice score of 68.38% on this dataset, outperforming state-of-the-art methods.

Keywords: Weakly supervised semantic segmentation, Multi-task learning, Graph convolution
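
As an illustration of the reported evaluation metric (not part of the paper itself), a minimal sketch of the Dice coefficient for a pair of binary segmentation masks might look like the following; the function name and example masks are hypothetical.

```python
# Illustrative sketch: Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Compute the Dice overlap between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two 4x4 masks that partially overlap (hypothetical data).
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 1, 0, 0]] * 4)
print(f"Dice: {dice_coefficient(pred, target):.4f}")  # ~0.6667
```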
