Abstract

Medical imaging plays an important role in the analysis of diagnostic data and in planning treatment procedures in clinical applications. Because imaging technologies differ, each medical imaging modality is suited to particular kinds of organ or tissue segmentation: computed tomography (CT) imaging captures implants and bones effectively, whereas magnetic resonance (MR) imaging captures soft tissue with rich anatomical detail. To obtain the data required for exact clinical analysis, surgeons frequently need a combined analysis of medical images acquired with multiple modalities. The aim of this paper is to build a system that detects brain tumors from fused MR and CT images using the proposed methodology. The method uses a deep learning convolutional neural network with pyramid generation kernels (DL-CNN-PGK) to extract high-level features for merging the MR and CT images. The tumor is then segmented from the fused image using non-local Euclidean median filtered adaptive angled covariance with Gaussian kernel-based FCM clustering (NLEM-AACGK-FCM). This makes tumor segmentation for cancer detection and analysis accurate and efficient. Extensive simulation results demonstrate the superiority of the proposed hybrid fusion-based segmentation approach over conventional medical image fusion and segmentation approaches, as measured by image quality metrics for both fusion and segmentation. In addition, medical statistical parameters such as accuracy, specificity, and sensitivity are computed to demonstrate the effectiveness of the proposed fusion-based segmentation approach.
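The NLEM-AACGK-FCM segmenter builds on fuzzy C-means (FCM) clustering. The paper's variant (non-local Euclidean median filtering, adaptive angled covariance, Gaussian kernels) is not specified in this abstract; the sketch below shows only the standard FCM core on which it is built, with illustrative parameter choices (`c`, `m`, `tol`) that are assumptions, not values from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy C-means. X: (n_samples, n_features) array.
    Returns cluster centers (c, n_features) and memberships U (n_samples, c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random fuzzy memberships, each row summing to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Membership-weighted cluster centroids.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance of every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        # Membership update: inversely proportional to d^(2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy 1-D "intensity" data: two well-separated groups, standing in for
# tumor vs. background voxel intensities in a fused image.
rng = np.random.default_rng(1)
X = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])[:, None]
X = X + rng.normal(0.0, 0.02, X.shape)
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)  # hard labels from soft memberships
```

On segmentation tasks, the soft membership matrix `U` is thresholded (or argmax-ed, as above) to produce the final tumor mask; kernelized variants replace the Euclidean distance with a Gaussian-kernel-induced distance.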
