Abstract

Medical image fusion integrates the information of multiple source images into a single image. The fused image provides more comprehensive information and aids clinical diagnosis and treatment. In this paper, a new medical image fusion algorithm is proposed. First, each source image is decomposed into a low-frequency sub-band and a series of high-frequency sub-bands by the nonsubsampled shearlet transform (NSST). For the low-frequency sub-band, the Kirsch operator is used to extract directional feature maps in eight directions, and the novel sum-modified-Laplacian (NSML) is used to measure the salient information of each directional feature map; the fusion weight coefficients of the directional feature maps are then computed by combining a sigmoid function with the saliency information refined by gradient domain guided image filtering (GDGF). The fused feature map is obtained by summing the convolutions of the weight coefficients with the directional feature maps, and the final fused low-frequency sub-band is the linear combination of the eight fused directional feature maps. For the high-frequency sub-bands, a modified pulse coupled neural network (MPCNN) model computes the firing times of each coefficient, and the fused high-frequency sub-bands are selected according to these firing times. Finally, the inverse NSST is applied to the fused low-frequency and high-frequency sub-bands to obtain the fused image. Experimental results show that the proposed algorithm has advantages over classical medical image fusion algorithms in both objective and subjective evaluation.
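The low-frequency processing described above rests on two standard building blocks: the eight-direction Kirsch compass masks and the sum-modified-Laplacian saliency measure. The following is a minimal sketch of those two steps, assuming the standard Kirsch mask values and an illustrative 3×3 NSML window (the paper's actual window size, boundary handling, and the subsequent sigmoid/GDGF weighting are not shown):

```python
import numpy as np
from scipy.ndimage import convolve

# Base Kirsch compass mask; the other seven directions are obtained by
# rotating the eight border values around the 3x3 ring.
BASE = np.array([[ 5,  5,  5],
                 [-3,  0, -3],
                 [-3, -3, -3]], dtype=float)

def kirsch_kernels():
    """Return the 8 directional Kirsch masks (45-degree rotations of BASE)."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [BASE[r, c] for r, c in ring]
    kernels = []
    for shift in range(8):
        rot = np.zeros_like(BASE)
        for idx, (r, c) in enumerate(ring):
            rot[r, c] = vals[(idx - shift) % 8]
        kernels.append(rot)
    return kernels

def nsml(img, win=3):
    """Sum-modified-Laplacian: modified Laplacian summed over a local window.
    Uses wrap-around shifts at the border for brevity; win=3 is an assumption."""
    f = img.astype(float)
    ml = (np.abs(2 * f - np.roll(f, 1, 0) - np.roll(f, -1, 0))
          + np.abs(2 * f - np.roll(f, 1, 1) - np.roll(f, -1, 1)))
    return convolve(ml, np.ones((win, win)), mode='nearest')

# Directional feature maps of a low-frequency sub-band and their saliency.
lf = np.random.rand(64, 64)  # stand-in for an NSST low-frequency sub-band
feature_maps = [convolve(lf, k, mode='nearest') for k in kirsch_kernels()]
saliency = [nsml(fm) for fm in feature_maps]
```

In the full method, each saliency map would be smoothed by GDGF and passed through a sigmoid to produce per-pixel fusion weights for the corresponding directional feature maps.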
