Abstract

Convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they have not demonstrated sufficiently accurate and robust results for clinical use. In addition, they are limited by the lack of image-specific adaptation and the lack of generalizability to previously unseen object classes (a.k.a. zero-shot learning). To address these problems, we propose a novel deep learning-based interactive segmentation framework by incorporating CNNs into a bounding box and scribble-based segmentation pipeline. We propose image-specific fine tuning to make a CNN model adaptive to a specific test image, which can be either unsupervised (without additional user interactions) or supervised (with additional scribbles). We also propose a weighted loss function considering network and interaction-based uncertainty for the fine tuning. We applied this framework to two applications: 2-D segmentation of multiple organs from fetal magnetic resonance (MR) slices, where only two types of these organs were annotated for training; and 3-D segmentation of brain tumor core (excluding edema) and whole brain tumor (including edema) from different MR sequences, where only the tumor core in one MR sequence was annotated for training. Experimental results show that: 1) our model is more robust to segment previously unseen objects than state-of-the-art CNNs; 2) image-specific fine tuning with the proposed weighted loss function significantly improves segmentation accuracy; and 3) our method leads to accurate results with fewer user interactions and less user time than traditional interactive segmentation methods.
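
To make the fine-tuning idea concrete, below is a minimal PyTorch-style sketch of image-specific fine tuning with a pixel-wise weighted loss. It is our own illustration under stated assumptions, not the paper's exact formulation: the network handle net, the helper image_specific_fine_tune, and the entropy-based weighting scheme are all hypothetical. Where scribbles are given, they override the network's pseudo-labels and receive full weight (interaction-based certainty); elsewhere, pixels are weighted by the network's own confidence (network-based certainty).

    import math
    import torch
    import torch.nn.functional as F

    def image_specific_fine_tune(net, image, scribble_mask=None,
                                 steps=20, lr=1e-3):
        """Adapt a pretrained segmentation CNN to a single test image.

        image:         (1, C, H, W) float tensor.
        scribble_mask: (1, H, W) long tensor of user-provided labels,
                       -1 where no scribble was drawn; None = unsupervised.
        """
        optimizer = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):
            logits = net(image)                          # (1, K, H, W)
            probs = F.softmax(logits, dim=1)
            k = probs.size(1)
            # Unsupervised case: the network's current prediction acts
            # as its own (pseudo-)training target.
            target = probs.argmax(dim=1)                 # (1, H, W)
            # Network-based certainty: down-weight high-entropy pixels.
            # Entropy is at most log(k), so weights lie in [0, 1].
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
            weight = (1.0 - entropy / math.log(k)).detach()
            if scribble_mask is not None:
                # Supervised case: scribbled pixels override the pseudo-
                # label and get full weight (interaction-based certainty).
                drawn = scribble_mask >= 0
                target[drawn] = scribble_mask[drawn]
                weight[drawn] = 1.0
            pixel_loss = F.cross_entropy(logits, target, reduction="none")
            loss = (weight * pixel_loss).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return net

In the paper's pipeline, the test image is first cropped with a user-provided bounding box before segmentation and fine tuning, and the exact weighting of the loss differs from this sketch; the code above only illustrates the general mechanism of adapting a trained model to one image under user guidance.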

Highlights

  • Deep learning with convolutional neural networks (CNNs) has achieved state-of-the-art performance for automated medical image segmentation [1]

  • We present the first attempt to employ CNNs to deal with previously unseen objects (a.k.a. zero-shot learning) in the context of image segmentation

  • While previous works studied zero-shot learning for image classification [34], this paper focuses on zero-shot learning in the context of medical image segmentation

Introduction

Deep learning with convolutional neural networks (CNNs) has achieved state-of-the-art performance for automated medical image segmentation [1]. Though leveraging user interactions often leads to more robust segmentations, an interactive method should require as little user time as possible to reduce the burden on users. Motivated by these observations, we investigate combining CNNs with user interactions for medical image segmentation to achieve higher segmentation accuracy and robustness with fewer user interactions and less user time. There are very few studies on using CNNs for interactive segmentation [3]–[5]. This is mainly due to the requirement of large amounts of annotated images for training, the lack of image-specific adaptation, and the demanding balance among model complexity, inference time, and memory efficiency.
