Abstract

When using convolutional neural networks (CNNs) for segmentation of organs and lesions in medical images, the conventional approach is to work with inputs and outputs either as single slices [two-dimensional (2D)] or as whole volumes [three-dimensional (3D)]. One common alternative, in this study denoted as pseudo-3D, is to use a stack of adjacent slices as input and produce a prediction for at least the central slice. This approach gives the network the possibility to capture 3D spatial information, with only a minor additional computational cost. In this study, we systematically evaluate the segmentation performance and computational costs of this pseudo-3D approach as a function of the number of input slices, and compare the results to conventional end-to-end 2D and 3D CNNs, and to triplanar orthogonal 2D CNNs. The standard pseudo-3D method regards the neighboring slices as multiple input image channels. We additionally design and evaluate a novel, simple approach in which the input stack is treated as a volumetric input that is repeatedly convolved in 3D to obtain a 2D feature map. This 2D map is in turn fed into a standard 2D network. We conducted experiments using two different CNN backbone architectures and on eight diverse data sets covering different anatomical regions, imaging modalities, and segmentation tasks. We found that while both pseudo-3D methods can process a large number of slices at once and still be computationally much more efficient than fully 3D CNNs, a significant improvement over a regular 2D CNN was only observed with two of the eight data sets. Triplanar networks had the poorest performance of all the evaluated models. An analysis of the structural properties of the segmentation masks revealed no relation to the segmentation performance with respect to the number of input slices. A post hoc rank sum test combining all metrics and data sets indicated that only our newly proposed pseudo-3D method with an input size of 13 slices outperformed almost all other methods. In the general case, multislice inputs thus appear not to improve segmentation results over using 2D or 3D CNNs. For the particular case of 13 input slices, however, the proposed novel pseudo-3D method does appear to have a slight advantage across all data sets compared to all other methods evaluated in this work.
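As a concrete illustration of the two multislice input schemes described above, the following minimal PyTorch sketch (an assumption on our part, not the authors' published code; the module names, channel counts, and the requirement of an odd slice count d are illustrative only) shows the standard pseudo-3D variant, where the d neighboring slices become the input channels of a 2D backbone, and the proposed variant, where unpadded 3D convolutions repeatedly shrink the slice axis until a single 2D feature map remains and is passed to the 2D backbone.

```python
# Minimal, hypothetical sketch of the two pseudo-3D input schemes (PyTorch).
# `backbone_2d` stands for any 2D segmentation network, e.g., a U-Net or SegNet,
# mapping (N, C, H, W) -> (N, classes, H, W).
import torch
import torch.nn as nn


class ChannelStackedPseudo3D(nn.Module):
    """Standard pseudo-3D: the d neighboring slices are treated as d input channels."""

    def __init__(self, backbone_2d: nn.Module):
        super().__init__()
        self.backbone_2d = backbone_2d  # must accept d input channels

    def forward(self, x):
        # x: (N, d, H, W) -- adjacent slices stacked along the channel axis
        return self.backbone_2d(x)  # prediction for the central slice


class CollapsingPseudo3D(nn.Module):
    """Sketch of the proposed variant: repeatedly convolve the slice stack in 3D,
    without padding along the slice axis, until only one slice remains, then feed
    the resulting 2D feature map to a standard 2D backbone."""

    def __init__(self, backbone_2d: nn.Module, d: int, features: int = 16):
        super().__init__()
        assert d % 2 == 1, "this illustrative sketch assumes an odd number of slices"
        layers, in_ch = [], 1
        for _ in range((d - 1) // 2):  # each unpadded 3x3x3 conv removes 2 slices
            layers += [nn.Conv3d(in_ch, features, kernel_size=3, padding=(0, 1, 1)),
                       nn.ReLU(inplace=True)]
            in_ch = features
        self.reduce_3d = nn.Sequential(*layers)
        self.backbone_2d = backbone_2d  # must accept `features` input channels

    def forward(self, x):
        # x: (N, 1, d, H, W) -- the slice stack as a small 3D volume
        f = self.reduce_3d(x)                  # (N, features, 1, H, W)
        return self.backbone_2d(f.squeeze(2))  # (N, classes, H, W)
```

For d = 13, for example, six such unpadded 3D convolutions reduce the slice axis as 13 → 11 → 9 → 7 → 5 → 3 → 1 before the 2D backbone is applied.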

Highlights

  • For both organ segmentation and lesion segmentation, the most common deep learning (DL) model is the convolutional neural network (CNN)

  • The mean value over all samples for each metric is plotted as a function of the input size, and is given for the triplanar, 2D, pseudo-3D with d = 3 through d = 13, and 3D models, and for the UNet and SegNet backbones. These results in terms of Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), relative absolute volume difference (RAVD), and average symmetric surface distance (ASSD) are tabulated in Tables 17–20 in the Supplementary Material, respectively, along with summaries of the experiment setups per data set

  • This study systematically evaluated pseudo-3D convolutional neural networks (CNNs), where a stack of adjacent slices is used as input for a prediction on the central slice



Introduction

Segmentation of organs and pathologies are common activities […]. The manual annotation of such regions of interest is aided by various software toolkits for image enhancement, automated contouring, and structure analysis in all fields of image-guided radiotherapy.[1,2,3] Over the recent years, deep learning (DL) has emerged as a very powerful concept in the field of medical image analysis. The ability to […]

For both organ segmentation and lesion segmentation, the most common DL model is the convolutional neural network (CNN). While the conventional approach to segmenting medical volumes by CNNs consists of training on and predicting the individual 2D slices independently, the interest has shifted in recent years toward full 3D convolutions in volumetric neural networks.[5,6,7,8,9] Volumetric convolution kernels have the advantage of taking interslice context into account, preserving more of the spatial information than what is possible when using 2D convolutions within slices […]
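The contrast between slice-wise 2D convolutions and volumetric 3D convolutions can be made concrete with a short, hypothetical PyTorch snippet (kernel sizes, channel counts, and tensor shapes are illustrative only, not taken from the paper): a 2D kernel sees a single slice at a time, whereas a 3D kernel also spans neighboring slices and therefore captures interslice context.

```python
import torch
import torch.nn as nn

volume = torch.randn(1, 1, 32, 128, 128)  # (N, C, slices, H, W)

# Slice-wise 2D convolution: each slice is processed independently,
# so the kernel never sees neighboring slices.
conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)
per_slice = torch.stack(
    [conv2d(volume[:, :, z]) for z in range(volume.shape[2])], dim=2
)

# Volumetric 3D convolution: the kernel also extends across adjacent slices,
# so interslice context contributes to every output voxel.
conv3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)
volumetric = conv3d(volume)

print(per_slice.shape, volumetric.shape)  # both: (1, 8, 32, 128, 128)
```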
