Abstract

Medical image fusion technology makes clinical diagnosis and treatment more accurate by addressing the problem that single-modality medical images convey insufficient information. The key to this technology is therefore to retain as much information as possible from the original multi-modality medical images. However, existing methods often lose source-image detail, produce low contrast, and cause color distortion. This article proposes a novel multi-modal medical image fusion framework with four key steps. First, super-resolution images are extracted from the anatomical images via a deep neural network. Second, the source images are decomposed into detail-enhanced approximate and residual images, which are then processed by local Laplacian filtering. Third, a convolutional neural network maps the source images to feature maps, and a local energy-based strategy integrates the coefficients. Finally, the inverse local Laplacian pyramid is applied to reconstruct the fused image. Experimental results show that images fused by the proposed method exhibit high contrast, high brightness, and high color saturation, and offer clear advantages in retaining color and structural information.
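The abstract does not specify the exact form of the local energy-based strategy used in the third step, but such rules commonly compare, per pixel, the energy of each source's coefficients over a small neighbourhood and keep the coefficient from the more energetic source. The following is a minimal sketch of that generic rule, assuming a square window of sum-of-squares energy; the function names and window size are illustrative, not taken from the paper.

```python
import numpy as np

def local_energy(coeffs, radius=1):
    """Sum of squared coefficients over a (2*radius+1)^2 window around each pixel."""
    sq = coeffs.astype(np.float64) ** 2
    padded = np.pad(sq, radius, mode="reflect")  # reflect borders so edge windows are full
    energy = np.zeros_like(sq)
    k = 2 * radius + 1
    for dy in range(k):                          # accumulate the k*k shifted copies
        for dx in range(k):
            energy += padded[dy:dy + sq.shape[0], dx:dx + sq.shape[1]]
    return energy

def fuse_local_energy(a, b, radius=1):
    """Per pixel, keep the coefficient whose neighbourhood carries more energy."""
    ea, eb = local_energy(a, radius), local_energy(b, radius)
    return np.where(ea >= eb, a, b)

# Toy coefficient maps: a strong isolated detail in `a`, weak texture in `b`.
a = np.zeros((5, 5)); a[2, 2] = 3.0
b = np.full((5, 5), 0.1)
fused = fuse_local_energy(a, b)
```

In this toy case the rule keeps the strong detail from `a` at the centre while falling back to `b` in regions where `a` carries no energy, which is the behaviour such selection rules are designed to achieve.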

