Abstract
Convolutional Neural Networks currently achieve good performance in automatic image segmentation; however, they have not demonstrated sufficiently accurate and robust results for more general, interactive systems. Moreover, they are designed specifically for visual features and cannot incorporate enough anatomical knowledge into the models they learn. To address these problems, we propose a novel machine-learning-based framework for interactive medical image segmentation. The proposed method incorporates local anatomical knowledge learning into a bounding-box-based segmentation pipeline. Region-specific voxel classifiers can be learned and combined, making the model adaptive to different anatomical structures and image modalities. In addition, a spatial relationship learning mechanism is integrated to capture and exploit additional topological (anatomical) information. New learning procedures integrate both types of information (visual features characterizing each substructure, and spatial relationships positioning the substructures relative to one another) in a unified model. During incremental, interactive segmentation, local substructures are localized one by one, enabling partial image segmentation. Bounding boxes are positioned within the image automatically using previously learned spatial relationships, or by the user when necessary. Inside each bounding box, atlas-based methods or CNNs dedicated to each substructure can be applied to obtain each local segmentation automatically.
Experimental results show that (1) the proposed model is robust when segmenting objects from a small number of training images; (2) its accuracy is comparable to that of other methods while allowing partial segmentation without requiring a global registration; and (3) it yields accurate results with fewer user interactions and less user time than traditional interactive segmentation methods, thanks to its spatial relationship learning capabilities.
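The abstract's idea of positioning a substructure's bounding box from learned spatial relationships can be illustrated with a minimal sketch. All function names and the (center, size) box representation below are hypothetical simplifications, not the paper's actual model: the learned relationship is reduced here to a mean offset between box centers across training images.

```python
import numpy as np

def learn_spatial_relationship(ref_boxes, target_boxes):
    """Learn a simple spatial relationship between a reference substructure
    and a target substructure from training images.

    Each box is a (center, size) pair of 3-vectors (one pair per image).
    Returns the mean center offset and mean target box size (a toy stand-in
    for the paper's richer spatial relationship learning)."""
    offsets = [t_center - r_center
               for (r_center, _), (t_center, _) in zip(ref_boxes, target_boxes)]
    sizes = [t_size for (_, t_size) in target_boxes]
    return np.mean(offsets, axis=0), np.mean(sizes, axis=0)

def predict_target_box(ref_center, learned_offset, learned_size):
    """Position the target bounding box automatically from the already
    localized reference structure, so the user only intervenes if the
    proposal is wrong."""
    return ref_center + learned_offset, learned_size

# Toy training data: two images, reference and target boxes as (center, size).
ref_boxes = [(np.array([0., 0., 0.]), np.array([10., 10., 10.])),
             (np.array([1., 1., 1.]), np.array([10., 10., 10.]))]
target_boxes = [(np.array([5., 0., 0.]), np.array([4., 4., 4.])),
                (np.array([6., 1., 1.]), np.array([4., 4., 4.]))]

offset, size = learn_spatial_relationship(ref_boxes, target_boxes)
center, box_size = predict_target_box(np.array([2., 2., 2.]), offset, size)
```

In the full pipeline, the predicted box would then be handed to the region-specific segmenter (atlas-based method or dedicated CNN) for that substructure.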