Abstract

Medical image fusion aims to merge the important information from images of the same organ of the human body acquired with different modalities into a single, more informative fused image. In recent years, deep learning (DL) methods have achieved significant breakthroughs in the field of image fusion because of their great efficiency, and they have become an active research topic owing to their strong feature extraction and data representation abilities. In this work, the stacked sparse auto-encoder (SSAE), a general category of deep neural network, is exploited for medical image fusion. The SSAE is an efficient technique for unsupervised feature extraction, with a high capability for representing complex data. The proposed fusion method is carried out as follows. First, the source images are decomposed into low- and high-frequency coefficient sub-bands with the non-subsampled contourlet transform (NSCT), a flexible multi-scale decomposition technique that is superior to traditional decomposition techniques in several respects. After that, the SSAE is applied to the high-frequency coefficients to extract a sparse and deep feature representation. The spatial frequencies of the obtained features are then computed and used to fuse the high-frequency coefficients. Next, a maximum-based fusion rule is applied to fuse the low-frequency sub-band coefficients. The final integrated image is acquired by applying the inverse NSCT. The proposed method has been applied to and assessed on various groups of medical image modalities. Experimental results show that the proposed method can effectively merge multimodal medical images while preserving detail information.
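The fusion rules described above can be sketched briefly. The sketch below is a minimal illustration, not the authors' implementation: it assumes the standard spatial-frequency definition SF = sqrt(RF² + CF²) over row/column differences, a winner-take-all choice between high-frequency sub-bands based on the SF of their (here, hypothetical) SSAE feature maps, and an absolute-maximum rule for the low-frequency sub-band; the NSCT decomposition itself and the SSAE network are omitted.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of a 2-D array,
    where RF/CF are RMS values of horizontal/vertical differences."""
    block = np.asarray(block, dtype=float)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_high(coeff_a, coeff_b, feat_a, feat_b):
    """Select the high-frequency sub-band whose feature map
    (e.g. an SSAE encoding of the coefficients) has higher SF."""
    if spatial_frequency(feat_a) >= spatial_frequency(feat_b):
        return coeff_a
    return coeff_b

def fuse_low(coeff_a, coeff_b):
    """Maximum-based rule for the low-frequency sub-band:
    keep, per pixel, the coefficient with larger magnitude."""
    a = np.asarray(coeff_a, dtype=float)
    b = np.asarray(coeff_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

In a full pipeline these rules would be applied per NSCT sub-band, after which the inverse NSCT reconstructs the fused image.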
