Abstract

The fusion quality of infrared and visible images is very important for subsequent human understanding of image information and for target processing. Existing infrared and visible image fusion methods still leave room for improvement in image contrast, sharpness, and richness of detail. To obtain better fusion performance, an infrared and visible image fusion algorithm based on latent low-rank representation (LatLRR) nested with rolling guided image filtering (RGIF) is proposed; it is a novel solution that integrates two-level decomposition and three-layer fusion. First, the infrared and visible images are decomposed using LatLRR to obtain low-rank sublayers, saliency sublayers, and sparse noise sublayers. Then, RGIF performs a further multiscale decomposition of the low-rank sublayers to extract multiple detail layers, which are fused using convolutional neural network (CNN)-based fusion rules to obtain the detail-enhanced layer. Next, an algorithm based on improved visual saliency mapping with weighted guided image filtering (IVSM-GIF) fuses the low-rank sublayers, and an adaptive weighting algorithm based on regional energy features over a Laplacian pyramid decomposition fuses the saliency sublayers. Finally, the fused low-rank sublayer, saliency sublayer, and detail-enhanced layer are combined to reconstruct the final image. Experimental results show that the proposed method outperforms other state-of-the-art fusion methods in both visual quality and objective evaluation, achieving the highest average values on six objective evaluation metrics.
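The rolling guided image filtering (RGIF) used above for multiscale decomposition of the low-rank sublayers can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a guided-filter variant of rolling guidance (an initial Gaussian blur removes small structures, then repeated guided filtering of the input with the evolving result as the guide recovers large-scale edges). The parameter values (`sigma_s`, `iterations`, `eps`) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """He et al.'s guided filter, with box means via uniform_filter."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # per-pixel linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def rolling_guided_filter(img, sigma_s=3.0, iterations=4, eps=1e-2):
    """Rolling guidance: blur away small structures, then iteratively
    restore large-scale edges by filtering img with the evolving guide."""
    guide = gaussian_filter(img, sigma_s)       # small-structure removal
    radius = int(round(2 * sigma_s))
    for _ in range(iterations):                 # edge recovery
        guide = guided_filter(guide, img, radius, eps)
    return guide
```

A detail layer at one scale is then obtained as the difference between the input and its rolling-guided output; repeating with increasing `sigma_s` yields the multiple detail layers described in the abstract.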

Highlights

  • Because richer, more comprehensive scene information cannot be obtained with a single image sensor, which leads to certain limitations, multiple sensors are typically used to capture images for image fusion

  • The contribution of the detail-enhanced layers to our method is first verified; then the fusion performance of our proposed method is compared with that of other state-of-the-art methods, including (1) convolutional sparse representation fusion (ConvSR) [29]; (2) gradient transfer fusion (GTF) [30]; (3) a fusion method based on weighted least squares (WLS) optimization [31]; (4) a fusion method that uses infrared feature extraction and visual information preservation (FEIP) [32]; (5) a fusion method based on multilevel Gaussian curvature filtering (GCF) [33]; (6) a fusion framework based on ResNet50 and zero-phase component analysis (ResNet50) [12]; (7) Bayesian fusion (Bayesian) [34]; and (8) a fusion framework based on latent low-rank representation (LatLRR) of multilevel image decomposition (MDLatLRR) [19]

  • In this paper, an infrared and visible image fusion method based on LatLRR nested with rolling guided image filtering (RGIF) is proposed
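The saliency-sublayer fusion described in the abstract (adaptive weighting of regional energy features over a Laplacian pyramid) can be sketched as below. This is a minimal sketch under assumed parameters, not the paper's exact rule: coefficients at each pyramid level are blended with weights proportional to their local (windowed) energy, and the fused pyramid is collapsed back to an image. The pyramid depth, window size, and Gaussian smoothing sigma are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = gaussian_filter(img, 1.0)[::2, ::2]   # blur, then downsample
        pyr.append(img)
    return pyr

def upsample(img, shape):
    up = np.zeros(shape)
    up[::2, ::2] = img
    return gaussian_filter(up, 1.0) * 4             # restore energy lost to zeros

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])                               # coarsest residual
    return lp

def fuse_regional_energy(a, b, levels=3, window=5):
    """Fuse two images: weight Laplacian coefficients by local energy."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = []
    for ca, cb in zip(la, lb):
        ea = uniform_filter(ca * ca, window)        # regional energy of a
        eb = uniform_filter(cb * cb, window)        # regional energy of b
        wa = ea / (ea + eb + 1e-12)                 # adaptive weight
        fused.append(wa * ca + (1 - wa) * cb)
    out = fused[-1]                                 # collapse the pyramid
    for lvl in reversed(fused[:-1]):
        out = upsample(out, lvl.shape) + lvl
    return out
```

The key design point is that the weights are computed per pixel and per level, so whichever source image carries more local structure at a given scale dominates the fused result there.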


Summary

Introduction

Because richer, more comprehensive scene information cannot be obtained with a single image sensor, which leads to certain limitations, multiple sensors tend to be used to capture images for image fusion. An algorithm is employed to extract and integrate the effective information from multiple images captured of the same scene at the same moment, fusing them in multiple directions and from multiple angles to obtain good visual effects and rich detailed information. Visible images have high spatial resolution and rich background information, and they are well suited to human visual perception; however, they are susceptible to poor lighting, smoke, and adverse weather conditions. Infrared images, depending on the detector, can perceive thermal radiation at different wavelengths and have strong night-vision and fog-penetration capabilities; however, they suffer from low spatial resolution.
