Abstract

Image fusion integrates complementary information about a target scene from multiple sensors into a single image. The fused image can then be used for human perception or for various machine vision tasks. In the case of infrared and visible images, infrared images have the advantage of capturing thermal radiation intensity, whereas visible images are superior in gradient texture. To effectively fuse the thermal intensity of the infrared image with the texture advantage of the visible image, we propose a novel fusion method based on L0 decomposition and an intensity mask. The proposed method first obtains the base and detail layers of both images (visible and infrared) using L0 decomposition. Next, an intensity mask is computed by applying the basic global thresholding method to the base layer of the infrared image. Using this mask, the layers are divided into three parts: mask-base layers, mask-detail layers, and texture-background. The first two parts effectively achieve intensity blending, whereas the third part gives the fused image a clear gradient texture. The proposed method shows superior performance when compared with five state-of-the-art methods on publicly available databases.
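The pipeline described above can be sketched in code. This is an illustrative approximation under stated assumptions, not the paper's implementation: a simple box filter stands in for L0 gradient minimization smoothing, and the per-region fusion rules (infrared base intensity inside the mask, a max-absolute rule for detail, the visible image elsewhere) are inferred from the abstract's description. The basic global thresholding step follows the standard iterative mean-splitting scheme.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter: a crude stand-in for the L0 gradient
    minimization smoothing that the paper uses to extract base layers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Blur rows, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def basic_global_threshold(img, eps=1e-3):
    """Iterative basic global thresholding: split at T, recompute T as the
    mean of the two class means, repeat until T stabilizes."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def fuse(ir, vis, k=5):
    """Sketch of the mask-guided fusion: decompose, threshold the infrared
    base layer into a mask, then blend base and detail layers per region."""
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    det_ir, det_vis = ir - base_ir, vis - base_vis
    # Intensity mask from the infrared base layer (salient thermal region).
    mask = (base_ir > basic_global_threshold(base_ir)).astype(float)
    # Inside the mask: keep infrared base intensity; outside: visible base.
    fused_base = mask * base_ir + (1 - mask) * base_vis
    # Inside the mask: max-absolute detail rule; outside: visible detail
    # (texture-background), so the background keeps the visible texture.
    max_det = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    fused_det = mask * max_det + (1 - mask) * det_vis
    return fused_base + fused_det
```

Because detail is defined as the residual (image minus base), reconstruction outside the mask returns the visible image exactly, which matches the abstract's claim that the texture-background part preserves a clear gradient texture.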

Highlights

  • In the current era of information technology, humans increasingly obtain, express, and transmit information through images

  • In the case of infrared and visible images, infrared images have the advantage of capturing thermal radiation intensity, whereas visible images are superior in gradient texture

  • In order to effectively fuse thermal intensity of infrared image and texture advantage of visible image, we propose a novel fusion method based on L0 decomposition and intensity mask


Summary

Introduction

In the current era of information technology, humans increasingly obtain, express, and transmit information through images. The application of CNNs (convolutional neural networks) and GANs (generative adversarial networks) achieves better fusion performance on images of different modalities [18]–[21]. These methods extract the salient information of the source images by transforming and reconstructing the whole image in different domains (sparse representation, pyramid decomposition, and neural network decomposition), and fuse the superior information contained in each source image. The proposed method separates the salient region of the infrared image from the background region of the visible image and fuses them using different fusion strategies, achieving an effective fusion of infrared image intensity and visible image texture.

Intensity Mask Fusion Strategy
Image Decomposition via L0 Gradient Minimization
Image Processing Using a Mask
Visual Saliency Map for Mask-Base Fusion
Maximum Gradient Fusion for Mask-Detail
Image Reconstruction
Fusion Method Experiment
Comparison and Analysis
Findings
Conclusion