Abstract

Medical image fusion techniques integrate the complementary features of different medical images into a single composite image of superior quality, reducing the uncertainty of lesion analysis. However, extracting salient features while suppressing meaningless details with multi-scale transform methods remains a challenging task. This study presents a two-scale fusion framework for multimodal medical images to overcome this limitation. In this framework, a guided filter decomposes the source images into base and detail layers, roughly separating the two characteristics of the source images, namely, structural information and texture details. To effectively preserve most of the structural information, the base layers are fused using a combined Laplacian pyramid and sparse representation rule, in which an image patch selection-based dictionary construction scheme excludes meaningless patches from the source images and enhances the sparse representation capability of the pyramid-decomposed low-frequency layer. The detail layers are then merged using a guided filtering-based approach, which improves contrast while suppressing noise. The fused base and detail layers are recombined to generate the fused image. We experimentally verify the superiority of the proposed method using two basic fusion schemes and comparison experiments on nine pairs of medical images from diverse modalities. Comparisons of the fused results in terms of visual effect and objective assessment demonstrate that the proposed method provides better visual quality and improved objective scores because it effectively preserves meaningful salient features without producing abnormal details.
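A minimal sketch of the two-scale decomposition step described above, assuming grayscale float images in [0, 1]. The guided filter follows the standard box-filter formulation; the radius r and regularizer eps are illustrative values, not the paper's settings.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=0.01):
    """Edge-preserving smoothing of p guided by I (self-guided when I is p)."""
    win = 2 * r + 1
    mean_I = uniform_filter(I, size=win)
    mean_p = uniform_filter(p, size=win)
    corr_I = uniform_filter(I * I, size=win)
    corr_Ip = uniform_filter(I * p, size=win)
    var_I = corr_I - mean_I * mean_I        # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p      # local covariance of guide and input
    a = cov_Ip / (var_I + eps)              # local linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size=win)
    mean_b = uniform_filter(b, size=win)
    return mean_a * I + mean_b

def two_scale_decompose(img):
    """Split an image into a smooth base layer and a residual detail layer."""
    base = guided_filter(img, img)          # self-guided smoothing keeps strong edges
    detail = img - base                     # texture/detail residual
    return base, detail

Reconstruction at the end of the pipeline is the inverse step: the fused image is simply the sum of the fused base layer and the fused detail layer.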

Highlights

  • Owing to advancements in imaging technologies, medical images have become essential to clinical investigation and disease analysis

  • The mean value of each image patch determines whether it is selected, which excludes meaningless image patches and improves the sparse representation capability for the pyramid-decomposed low-frequency layer (a code sketch follows this list)

  • A two-scale multimodal medical image fusion framework based on guided filtering and sparse representation is presented in this study
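As a companion to the second highlight, here is a minimal sketch of mean-value patch selection, assuming non-overlapping 8x8 patches from a grayscale float image in [0, 1]. The concrete rule below (drop patches whose mean intensity falls under a threshold tau, i.e. near-black background) and the value of tau are assumptions, since the source only states that the patch mean drives the selection.

import numpy as np

def select_patches(img, patch=8, tau=0.05):
    """Extract non-overlapping patches and keep only the informative ones."""
    h, w = img.shape
    kept = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            if p.mean() > tau:              # assumed rule: skip near-empty background
                kept.append(p.reshape(-1))  # vectorize for dictionary learning
    return np.array(kept)

The surviving patches from both source images are stacked into the training set used to construct the dictionary.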


Summary

INTRODUCTION

Owing to advancements in imaging technologies, medical images have become essential to clinical investigation and disease analysis. The combination of the Laplacian pyramid and sparse representation (LP-SR) was proposed to overcome the limitations of traditional MST-based and SR-based methods in medical image fusion, achieving favorable fused results. This work proposes a two-scale fusion method for multimodal medical images that captures meaningful, salient information without producing abnormal details. The proposed method uses guided filtering for a simple structure-texture decomposition of the source images into base and detail layers, which enhances the edge information of the fused image. In particular, a spatial degraded dictionary is learned from the two source images by the K-singular value decomposition (K-SVD) algorithm using an image patch selection-based scheme. In this phase, the mean value of each image patch determines whether it is selected, which excludes meaningless image patches and improves the sparse representation capability for the pyramid-decomposed low-frequency layer.
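To make the dictionary learning and base-layer coefficient fusion concrete, the sketch below substitutes scikit-learn's MiniBatchDictionaryLearning for K-SVD so that it runs out of the box, and applies the max-L1 coefficient rule common in SR-based fusion; both substitutions are assumptions, and select_patches is the helper sketched under the highlights. The Laplacian pyramid stage is omitted here: in the actual method, this coding is applied to the pyramid-decomposed low-frequency layers.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fuse_low_freq(patches_a, patches_b, train, n_atoms=128, sparsity=5):
    """Fuse two sets of vectorized low-frequency patches via sparse coding.

    patches_a, patches_b: patches taken at identical locations in the two
    low-frequency layers; train: patches kept by select_patches for learning.
    """
    # Stand-in for K-SVD: learn an overcomplete dictionary from the
    # selected patches of both source images.
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=sparsity,
        random_state=0,
    ).fit(train)
    code_a = dico.transform(patches_a)
    code_b = dico.transform(patches_b)
    # Assumed max-L1 rule: per patch, keep the representation with more energy.
    pick_a = np.abs(code_a).sum(axis=1) >= np.abs(code_b).sum(axis=1)
    fused_code = np.where(pick_a[:, None], code_a, code_b)
    return fused_code @ dico.components_    # reconstruct the fused patches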

RELATED WORK
DETAIL LAYER FUSION
ANALYSIS AND SETTING OF ALGORITHM PARAMETER
CONCLUSION
