Abstract

Multimodal medical image fusion, which aims to integrate complementary information from multiple imaging modalities into a single output, plays an important role in clinical applications of medical imaging such as noninvasive diagnosis and image-guided surgery. The main motivation of this study is to model the coefficient selection step of medical image fusion as a pattern recognition task. The proposed method first decomposes the source images with the tetrolet transform. Subsequently, different activity measures are used to extract salient features from patches of the tetrolet subbands. The features are then fed to the sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) classifier, which chooses the coefficients to be incorporated into the fused image. Finally, the cycle-spinning technique is exploited to avoid artifacts. Experimental results on three pairs of medical images, compared against four state-of-the-art fusion methods, validate the reliability and credibility of the proposed method for clinical applications. More specifically, the proposed framework does not suffer from contrast reduction, color distortion, or loss of fine details.
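The abstract describes a transform-domain fusion pipeline: multiscale decomposition, activity-based coefficient selection per subband, and cycle-spinning to suppress artifacts. The sketch below illustrates that flow under stated assumptions; it is not the authors' implementation. A standard separable wavelet from PyWavelets stands in for the adaptive tetrolet transform, a simple max-absolute-coefficient activity rule replaces the SUnSAL classifier, and `fuse_pair` and `cycle_spin_fuse` are hypothetical helper names. Registered grayscale inputs with dimensions divisible by 2**level are assumed.

```python
import numpy as np
import pywt  # standard wavelets stand in for the tetrolet transform here


def fuse_pair(a, b, wavelet="haar", level=2):
    """Fuse two registered source images by per-subband coefficient selection.

    A max-absolute-value activity rule stands in for the paper's
    SUnSAL-based coefficient classifier (hypothetical simplification).
    """
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    # Average the low-frequency approximation band; select detail
    # coefficients by the larger absolute activity.
    fused = [(ca[0] + cb[0]) / 2.0]
    for sa, sb in zip(ca[1:], cb[1:]):  # per-level detail subbands (H, V, D)
        fused.append(tuple(
            np.where(np.abs(da) >= np.abs(db), da, db)
            for da, db in zip(sa, sb)
        ))
    return pywt.waverec2(fused, wavelet)


def cycle_spin_fuse(a, b, shifts=4, **kw):
    """Average fusions over circular shifts (cycle-spinning) to suppress
    shift-variance artifacts of the decomposition."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    acc = np.zeros_like(a)
    for dx in range(shifts):
        for dy in range(shifts):
            sa = np.roll(a, (dx, dy), axis=(0, 1))
            sb = np.roll(b, (dx, dy), axis=(0, 1))
            f = fuse_pair(sa, sb, **kw)
            acc += np.roll(f, (-dx, -dy), axis=(0, 1))
    return acc / (shifts * shifts)
```

Because the wavelet decomposition is not shift-invariant, a single fusion pass can produce blocking artifacts near edges; averaging the reconstructions over all circular shifts, as in the final step above, smooths these out at the cost of extra computation.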
