Abstract

Magnetic Resonance Images (MRI) used for brain tumor segmentation are 3D volumes. To exploit this 3D information, this paper applies a method that integrates the segmentation results of three 2D Fully Convolutional Neural Networks (FCNNs), each trained to segment brain tumor images from one of the axial, coronal, and sagittal views. Integrating multiple FCNN models by fusing their segmentation results, rather than by fusing them into one deep network, ensures that each FCNN model can still be tested on 2D slices, which preserves testing efficiency. An averaging strategy is used for the fusion. The method can easily be extended to integrate additional FCNN models trained on further views, without retraining the models already available. In addition, 3D Conditional Random Fields (CRFs) are applied to refine the fused segmentation results. Experimental results show that integrating the segmentation results of multiple 2D FCNNs clearly improves segmentation accuracy, and that the 3D CRF greatly reduces false positives and improves the accuracy of tumor boundaries.
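The averaging fusion step described above can be illustrated with a minimal sketch (not the authors' code): assume each of the three hypothetical 2D FCNNs has already produced per-class probability maps for every slice along its view, stacked into a 3D probability volume on a common grid; the fused prediction simply averages the three volumes and takes the per-voxel argmax.

```python
import numpy as np

def fuse_by_averaging(prob_axial, prob_coronal, prob_sagittal):
    """Average per-class probability volumes from the three views.

    Each input is a hypothetical array of shape (C, D, H, W): C tumor
    classes over the full 3D volume, already resampled to a common grid.
    Returns the fused label volume of shape (D, H, W).
    """
    fused_probs = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return np.argmax(fused_probs, axis=0)

# Toy usage with random values standing in for FCNN output probabilities.
rng = np.random.default_rng(0)
shape = (4, 8, 8, 8)  # 4 classes over an 8x8x8 volume
labels = fuse_by_averaging(rng.random(shape), rng.random(shape), rng.random(shape))
print(labels.shape)  # (8, 8, 8)
```

Because the fusion operates only on the predicted probability volumes, adding a model trained on another view amounts to including one more volume in the average, without retraining the existing networks.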
