Abstract

Medical imaging and information processing technologies are constantly evolving, producing a wide range of multimodal medical images for clinical disease investigation. Physicians often require images acquired with different modalities, such as computed tomography (CT), magnetic resonance (MR) imaging, and positron emission tomography (PET), for clinical diagnosis. Many deep learning-based fusion methods have recently been proposed. In Convolutional Neural Network (CNN)-based fusion methods, only the last layer's outputs are used as image features, which loses useful information from the middle layers. Moreover, fusion rules based on simple weighted averaging propagate noise from the source images and suppress salient image features. To address these issues, this paper proposes medical image fusion using an Enhanced CNN (ECNN) and an Opposition-based Monarch Butterfly Optimization (OMBO)-driven adaptive weighted fusion rule (AWFR). The ECNN comprises feature extraction and reconstruction components, both trained to minimize pixel loss and structural similarity loss. A pair of multimodal medical images is passed as input to the ECNN to extract low-level and high-level features. The weighted fusion rule is then applied to the extracted features, with the OMBO algorithm adaptively optimizing the fusion weights.
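The adaptive weighted fusion step can be sketched in simplified form. This is a minimal illustration, not the paper's method: the function names, the toy quality objective, and the grid search standing in for OMBO's population-based search are all assumptions made for clarity.

```python
import numpy as np

def weighted_fusion(feat_a, feat_b, w):
    """Weighted fusion rule (simplified): blend two feature maps with a
    single scalar weight w in [0, 1]. The paper's AWFR operates on
    ECNN features; here plain arrays stand in for them."""
    return w * feat_a + (1.0 - w) * feat_b

def fusion_quality(fused, feat_a, feat_b):
    """Toy stand-in objective: negative total mean-squared distance of the
    fused map to both sources. A real system would score the fused image
    with metrics such as structural similarity instead."""
    return -(np.mean((fused - feat_a) ** 2) + np.mean((fused - feat_b) ** 2))

def optimize_weight(feat_a, feat_b, candidates=None):
    """Pick the fusion weight that maximizes the objective by grid search.
    OMBO would instead evolve a population of candidate weights, using
    opposition-based learning to widen the initial search."""
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 101)
    scores = [fusion_quality(weighted_fusion(feat_a, feat_b, w), feat_a, feat_b)
              for w in candidates]
    return float(candidates[int(np.argmax(scores))])
```

With this symmetric toy objective the optimum is an even blend (w = 0.5); the point of an adaptive, optimizer-driven weight is that on real multimodal features the best blend is generally unequal and image-dependent.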
