Abstract

Multi-modal medical image fusion is a challenging yet important task for precision diagnosis and surgical planning in clinical practice. Although single-feature fusion strategies such as DenseFuse have achieved promising performance, they tend not to fully preserve the features of the source images. In this paper, a deep multi-fusion framework with classifier-based feature synthesis is proposed to automatically fuse multi-modal medical images. It consists of a pre-trained autoencoder based on dense connections, a feature classifier, and a multi-cascade fusion decoder that fuses high-frequency and low-frequency features separately. The encoder and decoder are transferred from the MS-COCO dataset and pre-trained simultaneously on public multi-modal medical image datasets to extract features. Feature classification is conducted through Gaussian high-pass filtering and peak signal-to-noise ratio (PSNR) thresholding: the feature maps in each layer of the pre-trained dense block and decoder are divided into high-frequency and low-frequency sequences. Specifically, in the proposed feature fusion block, a parameter-adaptive pulse-coupled neural network (PA-PCNN) and l1-norm weighting are employed to fuse the high-frequency and low-frequency features, respectively. Finally, we design a novel multi-cascade fusion decoder over the entire decoding stage to selectively fuse useful information from different modalities. We also validate our approach on brain disease classification using the fused images, and a statistical significance test confirms that the improvement in classification performance is attributable to the fusion. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both qualitative and quantitative evaluations.
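The abstract does not give implementation details for the frequency-based feature classification. As a rough sketch only (the filter width `sigma`, the PSNR threshold `tau`, and the "PSNR drop" decision rule are all our assumptions, not taken from the paper), classifying a feature map as high- or low-frequency via Gaussian high-pass filtering and PSNR thresholding might look like:

```python
import numpy as np

def gaussian_highpass(shape, sigma):
    """Frequency-domain Gaussian high-pass mask over normalized frequencies."""
    u = np.fft.fftfreq(shape[0])[:, None]
    v = np.fft.fftfreq(shape[1])[None, :]
    return 1.0 - np.exp(-(u**2 + v**2) / (2.0 * sigma**2))

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB; peak taken from the reference's range."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    peak = ref.max() - ref.min()
    return 10.0 * np.log10(peak**2 / mse)

def classify_feature_map(fmap, sigma=0.1, tau=30.0):
    """Label a map 'high' when discarding its high-pass band degrades PSNR
    below tau dB, i.e. when its content is dominated by high frequencies."""
    spectrum = np.fft.fft2(fmap)
    highpass = np.real(np.fft.ifft2(spectrum * gaussian_highpass(fmap.shape, sigma)))
    lowpass_only = fmap - highpass  # the map with its high-pass band removed
    return "high" if psnr(fmap, lowpass_only) < tau else "low"
```

A smooth map survives the low-pass reconstruction almost intact (high PSNR, labelled "low"), while a noise-like map loses most of its energy (low PSNR, labelled "high").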
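Similarly, a minimal sketch of the l1-weighted fusion used for the low-frequency features (the per-pixel soft weighting below is our assumption; the paper may compute the l1 activity measure differently):

```python
import numpy as np

def l1_weighted_fuse(feat_a, feat_b, eps=1e-8):
    """Fuse two feature stacks of shape (C, H, W) by per-pixel l1-norm
    activity weighting: the more active source dominates at each pixel."""
    act_a = np.abs(feat_a).sum(axis=0)   # l1 activity map, shape (H, W)
    act_b = np.abs(feat_b).sum(axis=0)
    w_a = act_a / (act_a + act_b + eps)  # soft weight for source A
    return w_a[None] * feat_a + (1.0 - w_a)[None] * feat_b
```

When one source is inactive at a pixel, the fused output reduces to the other source there; when both are equally active, the result is their average.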
