Abstract

Accurate and efficient medical image segmentation plays an important role in subsequent clinical applications such as diagnosis and surgical planning. This paper proposes an efficient interactive framework based on a graph convolutional network (GCN) for medical image segmentation. From the initial segmentation result, a set of boundary control points is generated for further interactive segmentation. We present an adaptive interaction scheme that allows the user to click on the boundary for fast interaction or to drag erroneously predicted control points for accurate correction. Furthermore, we propose an interactive segmentation network (referred to as IVIF-GCN) that learns from user experience in the interactive process by transforming interactive cues into annotations. In IVIF-GCN, a module for the information fusion of image features and vertex position features (IVIF) is proposed to learn the positional relationship between the current vertex and its neighboring vertices. Finally, the locations of the control points around the interaction point are predicted and updated automatically. The proposed method achieves mean Dice scores of 96.6% and 91.3% on the PROMISE12 and our in-house nasopharyngeal carcinoma (NPC) test sets, respectively. The experimental results show that the proposed method outperforms state-of-the-art segmentation methods. The proposed interactive medical image segmentation method can efficiently improve segmentation results for clinical applications in the absence of training data. The GUI tool based on our method is available at https://github.com/Tian-lab/IGMedSeg.
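The core idea of propagating information between a contour's control points with a GCN can be illustrated with a minimal sketch. This is not the authors' IVIF-GCN: the feature dimensions, the single-layer propagation rule, and the random weights below are all illustrative assumptions; it shows only the general pattern of concatenating per-vertex image features with vertex positions and letting a graph convolution over the contour's cycle graph predict per-point position updates.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: H' = relu(D^-1 (A + I) H W).

    A is the contour adjacency matrix; adding the identity gives each
    vertex a self-loop, and D^-1 row-normalizes the aggregation.
    """
    A_hat = A + np.eye(A.shape[0])
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

# A closed contour of N control points forms a cycle graph:
# each vertex is connected to its two neighbors along the boundary.
N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

rng = np.random.default_rng(0)
img_feat = rng.standard_normal((N, 16))  # hypothetical image features sampled at each vertex
pos = rng.standard_normal((N, 2))        # vertex (x, y) positions

# Fuse image features with position features by concatenation (illustrative).
H = np.concatenate([img_feat, pos], axis=1)  # shape (N, 18)

W1 = 0.1 * rng.standard_normal((18, 8))  # untrained weights, for shape illustration only
W2 = 0.1 * rng.standard_normal((8, 2))

offsets = gcn_layer(H, A, W1) @ W2  # predicted (dx, dy) per control point
new_pos = pos + offsets             # updated control-point locations
```

In an actual interactive setting, the user's click or drag would overwrite one vertex's position before propagation, so neighboring control points are adjusted automatically from that cue.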
