Abstract
The visual quality of medical images has a great impact on clinically assisted diagnosis, and medical image fusion has become a powerful tool in clinical applications. Traditional medical image fusion methods often produce poor fusion results because detailed feature information is lost during fusion. To address this, this paper proposes a new multimodal medical image fusion method based on the imaging characteristics of medical images. In the proposed method, non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, whose free parameters are set adaptively and whose linking strength β is optimized to improve performance. The low-frequency coefficients are merged by a convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting in traditional PCNN algorithms and poor detail preservation of sparse representation during image fusion, and it has significant advantages in visual quality and objective indices compared with existing mainstream fusion algorithms.
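As a rough illustration of the high-frequency fusion rule, the sketch below implements a simplified pulse-coupled neural network whose accumulated firing map decides, pixel by pixel, which source coefficient is kept. The adaptive parameter formulas of the PAPCNN model in [7] are not reproduced here: alpha_f, alpha_e, beta, V_L, and V_E are illustrative fixed values, and the function names (pcnn_firing_map, fuse_highpass) are placeholders introduced only for this example, not part of the paper.

```python
# Minimal sketch of a simplified PCNN firing-map fusion for high-frequency
# sub-bands. Parameter values are illustrative assumptions; the PAPCNN model
# derives them adaptively from the statistics of each sub-band.
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(S, iterations=110, alpha_f=0.1, alpha_e=1.0,
                    beta=0.5, V_L=1.0, V_E=20.0):
    """Return the accumulated firing times of a simplified PCNN driven by S."""
    S = np.abs(S) / (np.abs(S).max() + 1e-12)      # normalised stimulus
    W = np.array([[0.5, 1.0, 0.5],                 # 3x3 synaptic weight kernel
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    U = np.zeros_like(S)        # internal activity
    E = np.ones_like(S)         # dynamic threshold
    Y = np.zeros_like(S)        # firing state
    T = np.zeros_like(S)        # accumulated firing times
    for _ in range(iterations):
        L = V_L * convolve(Y, W, mode='constant')          # linking input
        U = np.exp(-alpha_f) * U + S * (1.0 + beta * L)    # modulation
        Y = (U > E).astype(float)                          # pulse output
        E = np.exp(-alpha_e) * E + V_E * Y                 # threshold update
        T += Y
    return T

def fuse_highpass(HA, HB):
    """Keep, per pixel, the coefficient whose PCNN fires more often."""
    TA, TB = pcnn_firing_map(HA), pcnn_firing_map(HB)
    return np.where(TA >= TB, HA, HB)
```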
Highlights
The visual quality of medical images has a great impact on clinically assisted diagnosis
The non-subsampled shearlet transform (NSST) decomposition is first performed on the source images to obtain high-frequency and low-frequency coefficients. The high-frequency coefficients are fused by a parameter-adaptive pulse-coupled neural network (PAPCNN) model, whose free parameters are set adaptively and whose linking strength β is optimized to improve performance. The low-frequency coefficients are merged by the convolutional sparse representation (CSR) model. The experimental results show that the proposed method solves the problems of difficult parameter setting in traditional pulse-coupled neural network (PCNN) algorithms and poor detail preservation of sparse representation during image fusion, and it has significant advantages in visual quality and objective indices compared with existing mainstream fusion algorithms
Methods for Comparison. The proposed fusion method was compared with five existing representative methods: the multimodal image fusion based on parameter-adaptive pulse-coupled neural network (NSST-PAPCNN) [7], the multimodal image fusion based on convolutional sparse representation (CSR) [5], the multimodal image fusion based on multiscale transform and sparse representation (MST-SR) [18], the multimodal image fusion based on sparse representation and pulse-coupled neural network (SR-PCNN) [19], and the multimodal image fusion based on non-subsampled contourlet transform, sparse representation, and pulse-coupled neural network (NSCT-SR-PCNN) [10].
Summary
To avoid difficulties in manually setting free parameters, in this paper, a parameter-adaptive PCNN (PAPCNN) model [7] was adopted to fuse the high-frequency coefficients obtained by NSST decomposition. The convolutional sparse representation algorithm overcomes the shortcomings of traditional sparse representation, namely its limited ability to preserve details and its high sensitivity to registration errors, which makes it well suited to fusing the low-frequency coefficients obtained by the multiscale transform (MST). Based on these considerations, the CSR model was introduced into the fusion of the MST low-frequency coefficients. Finally, the inverse NSST reconstruction was performed on the fused high-frequency and low-frequency sub-bands H_F^{l,k} and L_F to obtain the fused image F.
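The sketch below shows this merge-and-reconstruct flow under clearly stated substitutions: since no NSST implementation ships with common Python packages, a discrete wavelet transform from PyWavelets stands in for NSST, a max-absolute rule stands in for the PAPCNN high-frequency rule, and plain averaging stands in for the CSR low-frequency rule. It only illustrates how the fused sub-bands H_F^{l,k} and L_F are reassembled into F; it is not the paper's actual models.

```python
# End-to-end sketch of an MST-style fusion/reconstruction pipeline.
# DWT, max-absolute, and averaging are stand-ins (assumptions) for the
# paper's NSST, PAPCNN, and CSR components, respectively.
import numpy as np
import pywt

def fuse_images(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered single-channel images of equal size."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    # Low-frequency band L_F: averaging as a stand-in for the CSR merge.
    fused = [(ca[0] + cb[0]) / 2.0]

    # High-frequency bands H_F^{l,k}: max-absolute as a stand-in for PAPCNN.
    for ha, hb in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(ha, hb)))

    # Inverse transform rebuilds the fused image F from {H_F^{l,k}, L_F}.
    return pywt.waverec2(fused, wavelet)
```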