Abstract

Accurate, large-scale medical image annotation is crucial for diagnosis, surgical planning, and the development of deep-learning models in medical imaging. However, building large annotated datasets is challenging because labeling medical images is complex, laborious, and time-consuming, and it requires expensive professional medical expertise. To substantially reduce labeling cost, an interactive image annotation framework based on composite geodesic distance is proposed, in which medical images are labeled through segmentation. The framework first uses an Attention U-net to obtain an initial segmentation, on which the user provides interactions indicating mis-segmented regions. A second Attention U-net then takes the user interactions together with the initial segmentation as input; a composite geodesic distance transform converts the interactions into spatial constraints, yielding a refined, accurate segmentation. To further demonstrate labeling efficiency on large datasets, the proposed framework is validated on a self-built prostate MRI dataset. Experimental results show that the proposed method achieves higher accuracy with fewer interactions and less annotation time than traditional interactive annotation methods, with better Dice and Jaccard scores. This has important implications for improving medical diagnosis, surgical planning, and the development of deep-learning models in medical imaging.
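To make the interaction-encoding step concrete: frameworks of this kind typically convert user clicks or scribbles into geodesic distance maps over the image, so that corrections propagate along regions of similar intensity. The sketch below is a minimal 2D geodesic distance transform using Toivanen-style forward/backward raster scans; it is an illustrative assumption, not the authors' implementation, and the names `img`, `seeds`, and the blend weight `lam` are hypothetical.

```python
import numpy as np

def geodesic_distance_2d(img, seeds, lam=1.0, n_iters=2):
    """Approximate 2D geodesic distance from user-interaction seeds.

    img   : 2D float array (e.g. a normalized MRI slice)
    seeds : 2D bool array, True where the user clicked/scribbled
    lam   : weight between image-gradient cost (lam > 0) and pure
            Euclidean distance (lam = 0); illustrative default
    """
    H, W = img.shape
    dist = np.full((H, W), np.inf, dtype=np.float64)
    dist[seeds] = 0.0  # distance is zero at the user's annotations

    # 8-connected neighbour offsets split into a forward pass
    # (top/left neighbours) and a backward pass (bottom/right).
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]

    for _ in range(n_iters):
        for offsets, rows, cols in (
            (fwd, range(H), range(W)),
            (bwd, range(H - 1, -1, -1), range(W - 1, -1, -1)),
        ):
            for y in rows:
                for x in cols:
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            euclid = np.hypot(dy, dx)
                            grad = abs(img[y, x] - img[ny, nx])
                            # composite cost: Euclidean step length
                            # plus lam-weighted intensity difference
                            step = np.sqrt(euclid ** 2 + (lam * grad) ** 2)
                            cand = dist[ny, nx] + step
                            if cand < dist[y, x]:
                                dist[y, x] = cand
    return dist
```

Assuming a DeepIGeoS-style design, the foreground and background distance maps produced this way would be concatenated with the image and the initial segmentation as extra input channels to the refinement Attention U-net, so that pixels near user corrections are strongly constrained while distant regions are left largely to the initial prediction.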
