Abstract

Accurate medical image segmentation is essential for diagnosis, surgical planning and many other applications. Convolutional Neural Networks (CNNs) have become the state of the art for automatic segmentation, but fully automatic results may still need to be refined to become accurate and robust enough for clinical use. We propose a deep learning-based interactive segmentation method that improves the results obtained by an automatic CNN while reducing the user interactions needed during refinement for higher accuracy. One CNN produces an initial automatic segmentation, on which the user adds interactions to indicate mis-segmentations; a second CNN takes the initial segmentation and these interactions as input and gives a refined result. We propose to combine user interactions with CNNs through geodesic distance transforms, and propose a resolution-preserving network that gives a better dense prediction. In addition, we integrate user interactions as hard constraints into a back-propagatable Conditional Random Field. We validated the proposed framework on 2D placenta segmentation from fetal MRI and 3D brain tumor segmentation from FLAIR images. Experimental results show that our method achieves a large improvement over automatic CNNs, and obtains comparable or even higher accuracy than traditional interactive methods with fewer user interactions and less time.
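
To make the geodesic encoding of user interactions concrete, the sketch below turns a set of user scribbles into a geodesic distance map of the kind the abstract describes. It is a minimal illustration, not the authors' implementation: the Dijkstra-style search over an 8-connected pixel grid, the function name and the `lam` weight are assumptions made for this example.

```python
# Minimal sketch (not the authors' implementation) of a geodesic distance
# transform over user scribbles: for every pixel, the cost of the cheapest
# path to any scribble pixel, where crossing an intensity edge is penalised.
import heapq
import numpy as np

def geodesic_distance_2d(image, scribble, lam=1.0):
    """Dijkstra-style geodesic distance to a set of scribble pixels.

    image    : 2D float array of intensities (ideally normalised to [0, 1])
    scribble : 2D bool array, True where the user clicked or drew
    lam      : assumed weight of the intensity term against the spatial term
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(scribble)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, int(y), int(x)))

    # 8-connected grid; each step costs a mix of spatial and intensity distance.
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in neighbours:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.sqrt(dy * dy + dx * dx +
                               lam * (image[ny, nx] - image[y, x]) ** 2)
                nd = d + step
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```

In practice one such map would be computed from the foreground scribbles and another from the background scribbles, and the two maps concatenated with the image as extra input channels of the refinement CNN.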

Highlights

  • Segmentation of anatomical structures is an essential task for a range of medical image processing applications such as image-based diagnosis, anatomical structure modeling, surgical planning and guidance

  • We present a new way to combine user interactions with Convolutional Neural Networks (CNNs), using geodesic distance maps as extra channels of the CNN input (see the sketch after this list)

  • We present a deep learning-based interactive framework for 2D and 3D medical image segmentation
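
As a rough illustration of the channel concatenation highlighted above, the sketch below stacks the image, the initial automatic segmentation and the two geodesic distance maps as input channels of a small dilated CNN. The network, its layer sizes and the four-channel layout are assumptions made for this example; the paper's actual architectures are not reproduced here.

```python
# Illustrative sketch only: a stand-in, resolution-preserving 2D CNN (not the
# paper's networks) whose input stacks the image, the initial automatic
# segmentation and the foreground/background geodesic distance maps.
import torch
import torch.nn as nn

class TinyRefineNet(nn.Module):
    """Toy refinement network: no pooling, dilated convolutions, so the
    output keeps the spatial resolution of the input."""
    def __init__(self, in_channels: int = 4, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)  # (N, num_classes, H, W)

# image, init_seg, fg_geo, bg_geo: 2D tensors of shape (H, W)
H, W = 128, 128
image, init_seg, fg_geo, bg_geo = (torch.rand(H, W) for _ in range(4))
x = torch.stack([image, init_seg, fg_geo, bg_geo], dim=0).unsqueeze(0)  # (1, 4, H, W)
logits = TinyRefineNet()(x)  # (1, 2, H, W): per-pixel class scores
```

Dilated convolutions without pooling are one common way to keep the output at the input resolution, in the spirit of the resolution-preserving dense prediction mentioned in the abstract.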

Introduction

Segmentation of anatomical structures is an essential task for a range of medical image processing applications such as image-based diagnosis, anatomical structure modeling, surgical planning and guidance. Although automatic segmentation methods [1] have been investigated for many years, they can rarely achieve results that are sufficiently accurate and robust to be useful for many medical imaging applications. This is mainly due to poor image quality (noise, artifacts and low contrast), large variations among patients, inhomogeneous appearances brought by pathology, and variability of protocols among clinicians leading to different definitions of a given structure's boundary. A good interactive segmentation method should require as few user interactions as possible, leading to interaction efficiency. It requires the user to provide a bounding box around …
