Abstract

Lung cancer has become the most common malignant tumor in recent years. Tumor segmentation from medical images is essential for classifying tumors as benign or malignant and for choosing subsequent therapy plans. Clinical diagnosis of lung lesions currently relies mainly on computed tomography (CT). Compared with CT, Magnetic Resonance Imaging (MRI) exposes the patient to no radiation and is more sensitive to soft-tissue lesions. MRI is also inherently multi-modal, providing richer information for tumor delineation. In this paper, we propose a novel approach to lung tumor segmentation from multi-modal MR images that combines a fully convolutional network (FCN) based semantic segmentation model (U-Net) with a hyper-densely connected CNN (Hyper-DenseNet) for multi-modality fusion. The proposed method comprises two architectural components: a multi-modality feature-fusion architecture and an encoder-decoder based semantic segmentation architecture. Combining the two overcomes the deficiencies of the single-modality U-Net and of Hyper-DenseNet alone. The effectiveness of the proposed method is demonstrated through comparative experiments on 564 scan slices from 89 lung cancer patients. Evaluated by the Dice Similarity Coefficient (DSC), our method outperforms both the single-modality U-Net and Hyper-DenseNet without an encoder-decoder. Most encouragingly, the segmentation of tumors adjacent to normal tissues and organs, or to inflammatory tissue, shows a particularly marked improvement.
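The abstract reports results using the Dice Similarity Coefficient (DSC). A minimal sketch of how DSC is typically computed between a predicted and a ground-truth binary mask follows; the function name, toy masks, and epsilon term are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.

    The small eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: each has 8 foreground pixels, overlapping in 4,
# so DSC = 2*4 / (8 + 8) = 0.5.
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[0, 1, 1, 0]] * 4)
print(round(dice_coefficient(a, b), 3))  # prints 0.5
```

DSC ranges from 0 (no overlap) to 1 (perfect overlap), which is why it is a standard choice for comparing segmentation methods.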
