Abstract
Medical image fusion technology is widely used in clinical practice to help doctors better understand lesion regions by fusing multiparametric medical images. This paper proposes an automated fusion method based on a U-Net. Through neural network learning, a weight distribution is generated from the relationship between the image feature information and the multifocus training target. MRI image pairs of prostate cancer (axial T2-weighted images and ADC maps) are fused using a strategy based on local similarity and Gaussian pyramid transformation. Experimental results show that the fusion method enhances the appearance of prostate cancer in terms of both visual quality and objective evaluation metrics.
Highlights
With the rapid development of medical imaging technology, medical imaging has become an integral part of clinical disease diagnosis and treatment planning.
The NSCT method uses the nonsubsampled contourlet transform to decompose an image into low-frequency and high-frequency subbands and applies different fusion rules to each, with one rule designed for the high-pass subbands and another for the low-pass subband.
The zero-learning fast medical image fusion method (ZLF) uses a convolutional neural network (CNN) trained on a large amount of data; medical images of different modalities can be input directly to generate weight maps for fusion.
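As a rough illustration of the subband-fusion idea behind methods like NSCT (not the actual nonsubsampled contourlet transform), the sketch below splits each image into approximate low- and high-frequency parts with a simple separable blur, averages the low-pass parts, and keeps the larger-magnitude high-pass coefficient at each pixel. All function names and rule choices here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lowpass(img, k=5):
    # Separable box blur as a crude stand-in for a proper low-pass decomposition.
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def fuse_bands(a, b):
    # Decompose each image into a low-frequency base and a high-frequency detail part.
    la, lb = lowpass(a), lowpass(b)
    ha, hb = a - la, b - lb
    fused_low = 0.5 * (la + lb)                              # averaging rule (low-pass)
    fused_high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # max-abs rule (high-pass)
    return fused_low + fused_high
```

Averaging preserves the overall intensity of the base layers, while the max-abs rule keeps the strongest edges and textures from either modality.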
Summary
With the rapid development of medical imaging technology, medical imaging has become an integral part of clinical disease diagnosis and treatment planning (X. Huang et al.: Application of U-Net-Based Multiparameter Magnetic Resonance Image Fusion in Prostate Cancer Staging) [9]. A previous observer study reported that 69 satellite lesions were missed by all observers [10]. This variability among observers emphasizes the importance of automatically fusing prostate multimodal MRI images to enhance the appearance of satellite lesions and assist diagnosis. Deep learning-based fusion of medical images has so far concentrated on multimodal image fusion (e.g., CT, PET, MRI) [16], [25]. This study combines the traditional Gaussian pyramid transform fusion framework in the transform domain with a deep learning-based U-Net as a novel method for fusing medical images, and applies the developed fusion method, combining the Laplacian pyramid transform and the U-Net, to fuse dual-parameter magnetic resonance images of the prostate.
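The pyramid-plus-weight-map fusion described above can be sketched as follows. This is a minimal illustration, assuming an undecimated Laplacian "stack" in place of the paper's decimated pyramid and a per-pixel weight map `w` standing in for the U-Net output; all names are hypothetical.

```python
import numpy as np

def _blur(img):
    # 5-tap binomial (approximately Gaussian) filter, applied separably.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def laplacian_stack(img, levels=3):
    # Undecimated Laplacian decomposition: band-pass layers plus a residual.
    bands, cur = [], img
    for _ in range(levels):
        low = _blur(cur)
        bands.append(cur - low)  # detail retained at this scale
        cur = low
    bands.append(cur)            # residual low-pass layer
    return bands

def fuse_pyramid(a, b, w, levels=3):
    # Blend the two decompositions band by band using the weight map w,
    # smoothing w at each level so coarser bands use coarser weights.
    sa, sb = laplacian_stack(a, levels), laplacian_stack(b, levels)
    fused, wl = np.zeros_like(a), w
    for ba, bb in zip(sa, sb):
        fused += wl * ba + (1.0 - wl) * bb
        wl = _blur(wl)
    return fused
```

Because the stack telescopes (the bands plus the residual sum back to the input), blending identical inputs reproduces the original image, and per-band blending lets the weight map act at every spatial scale.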