Abstract

To address the low contrast and weak feature representation of fused infrared and visible images, an image fusion algorithm based on latent low-rank representation (LatLRR) and the non-subsampled shearlet transform (NSST) is proposed. First, the infrared and visible images are decomposed into base subbands, saliency subbands, and sparse noise subbands by the LatLRR model. Then, the base subbands are decomposed into low-frequency and high-frequency coefficients by NSST, and a feature extraction algorithm based on VGGNet together with a filtering-based logical weighting algorithm is proposed to merge these coefficients. An adaptive threshold algorithm based on the regional energy ratio is proposed to fuse the saliency subbands. Finally, the fused base subbands are reconstructed, the sparse noise subbands are discarded, and the fused image is obtained by combining the fused subband information. Experimental results show that the algorithm performs well in both subjective and objective evaluations of the fused images.
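The overall data flow described above can be summarized in a short sketch. The helper functions below (latlrr_decompose, nsst_decompose, fuse_low, fuse_high, fuse_saliency, nsst_reconstruct) are hypothetical placeholders for the individual steps of the proposed method, since this page gives no implementation details; only the sequence of operations follows the abstract.

```python
import numpy as np

def fuse_pair(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Overall data flow of the proposed fusion algorithm (hypothetical helpers)."""
    # 1. LatLRR splits each source into base, saliency and sparse-noise parts;
    #    the noise part is discarded.
    ir_base, ir_sal, _ = latlrr_decompose(ir)     # hypothetical helper
    vis_base, vis_sal, _ = latlrr_decompose(vis)  # hypothetical helper

    # 2. NSST splits each base subband into low- and high-frequency coefficients.
    ir_low, ir_high = nsst_decompose(ir_base)     # hypothetical helper
    vis_low, vis_high = nsst_decompose(vis_base)  # hypothetical helper

    # 3. Fuse coefficients: filtering-based logical weighting for the low band,
    #    VGG-based features for the high band, and an adaptive
    #    regional-energy-ratio threshold for the saliency subbands.
    fused_low = fuse_low(ir_low, vis_low)         # hypothetical helper
    fused_high = fuse_high(ir_high, vis_high)     # hypothetical helper
    fused_sal = fuse_saliency(ir_sal, vis_sal)    # hypothetical helper

    # 4. Reconstruct the fused base subband via inverse NSST and combine it
    #    with the fused saliency subband to obtain the final image.
    fused_base = nsst_reconstruct(fused_low, fused_high)  # hypothetical helper
    return fused_base + fused_sal
```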

Highlights

  • Image fusion extracts and integrates the effective information contained in two or more images of the same scene collected by different sensors, producing composite images with rich information and good visual quality that meet the needs of subsequent research and processing

  • The remainder of this paper is organized as follows: Section II introduces the latent low-rank representation (LatLRR) model and the image decomposition model of the non-subsampled shearlet transform (NSST); Section III describes the fusion of the saliency subbands, the fusion of the low- and high-frequency coefficients of the base subbands, and image reconstruction; Section IV explains the experimental setup and analyzes the results; Section V concludes the study

  • This paper proposes a model based on a combined LatLRR and NSST algorithm for image fusion


Summary

INTRODUCTION

Image fusion extracts and integrates the effective information contained in two or more images of the same scene collected by different sensors, producing composite images with rich information and good visual quality that meet the needs of subsequent research and processing. In the proposed method, a feature extraction algorithm based on VGGNet (specifically VGG-16) extracts image features for deep fusion while avoiding complex operations; it is combined with filtering-based weighting rules to retain as much background and edge information as possible, so that salient features in the fused images show high contrast and rich detail. The remainder of this paper is organized as follows: Section II introduces the LatLRR model and the NSST image decomposition model; Section III describes the fusion of the saliency subbands, the fusion of the low- and high-frequency coefficients of the base subbands, and image reconstruction; Section IV explains the experimental setup and analyzes the results; Section V concludes the study.
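As an illustration of the VGG-based feature extraction mentioned above, the sketch below computes per-pixel activity maps from a pre-trained VGG-16 and uses them to weight the high-frequency coefficients. The choice of the relu1_2 layer, the channel-wise L1 norm, and the normalized weighting rule are assumptions commonly used in VGG-based fusion, not the authors' exact settings; the sketch assumes PyTorch and torchvision.

```python
import torch
from torchvision.models import vgg16

# Pre-trained VGG-16 truncated after relu1_2 (first two conv layers), used
# purely as a fixed feature extractor; this layer choice is an assumption.
_vgg = vgg16(weights="IMAGENET1K_V1").features[:4].eval()

def vgg_activity(band: torch.Tensor) -> torch.Tensor:
    """Per-pixel activity map for a single-channel float band of shape (H, W)."""
    x = band.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)  # replicate to 3 channels
    with torch.no_grad():
        feat = _vgg(x)                            # (1, 64, H, W) feature maps
    return feat.abs().sum(dim=1, keepdim=True)    # channel-wise L1 norm as activity

def fuse_high(ir_high: torch.Tensor, vis_high: torch.Tensor) -> torch.Tensor:
    """Fuse two high-frequency bands with VGG-derived weight maps (illustrative rule)."""
    a_ir, a_vis = vgg_activity(ir_high), vgg_activity(vis_high)
    w_ir = a_ir / (a_ir + a_vis + 1e-8)           # normalized weight for the IR band
    return (w_ir * ir_high + (1.0 - w_ir) * vis_high).squeeze()
```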

NON-SUBSAMPLED SHEARLET TRANSFORM
LOW-FREQUENCY COEFFICIENT FUSION FOR BASE SUBBANDS
HIGH-FREQUENCY COEFFICIENT FUSION FOR BASE SUBBANDS
IMAGE RECONSTRUCTION
FEASIBILITY ASSESSMENT
CONCLUSION