Abstract

Multifocus image fusion merges multiple images of the same scene, each captured with a different focus, into a single all-in-focus image. Most existing fusion algorithms extract high-frequency information with hand-designed local filters and then apply different fusion rules to obtain the fused image. In this paper, a wavelet is used to perform a multiscale decomposition of the source and fused images into high-frequency and low-frequency subband images. To obtain clearer and more complete fused images, a deep convolutional neural network is used to learn the direct mapping between the high-frequency and low-frequency subbands of the source images and those of the fused image. Two convolutional networks are trained, one on the high-frequency subbands and one on the low-frequency subbands, to encode these mappings. The experimental results show that the proposed method obtains satisfactory fused images that are superior to those produced by several state-of-the-art image fusion algorithms in terms of both visual and objective evaluations.
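As a concrete illustration of the decomposition step described above, the following is a minimal sketch, assuming the PyWavelets and NumPy libraries, of a single-level 2-D wavelet transform that splits an image into one low-frequency (approximation) subband and three high-frequency (detail) subbands. The function names and the choice of the Haar wavelet are illustrative assumptions, not the paper's exact configuration.

import numpy as np
import pywt

def decompose(image, wavelet="haar"):
    # Single-level 2-D DWT: one low-frequency approximation subband and
    # three high-frequency detail subbands (horizontal, vertical, diagonal).
    low, (high_h, high_v, high_d) = pywt.dwt2(image, wavelet)
    return low, (high_h, high_v, high_d)

def reconstruct(low, highs, wavelet="haar"):
    # Inverse 2-D DWT: recombine fused subbands into a single image.
    return pywt.idwt2((low, highs), wavelet)

# Decompose two differently focused source images of the same scene;
# the random arrays below stand in for real, registered source images.
img_a = np.random.rand(256, 256)
img_b = np.random.rand(256, 256)
low_a, highs_a = decompose(img_a)
low_b, highs_b = decompose(img_b)
# The fusion rules that combine (low_a, low_b) and (highs_a, highs_b) are
# what the two trained networks described in the paper would provide.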

Highlights

  • Because sensor imaging technology is constrained by its imaging mode, the imaging environment, and other factors, the generated image presents only a one-sided, superficial view of the target object; combining such information can describe the object more comprehensively and in greater detail

  • Image fusion refers to the process of fusing multiple images of the same scene into one image according to corresponding fusion rules [1, 2]. The resulting image is more comprehensive than the information expressed by any single source image; it is visually clearer and more consistent with human visual and machine perception [3, 4]. Therefore, the realization of multifocus image fusion is of practical significance

  • Objective evaluation plays a vital role in image fusion. The quality of image fusion needs to be evaluated by quantitative scores of multiple indicators

Summary

Introduction

Because sensor imaging technology is constrained by its imaging mode, the imaging environment, and other factors, the generated image presents only a one-sided, superficial view of the target object; combining such information can describe the object more comprehensively and in greater detail. Classical spatial-domain fusion methods, however, usually introduce image artefacts into the fusion results. In response to this problem, scholars have proposed a variety of image fusion algorithms based on spatial transformation, such as image matting [7], guided filtering fusion [8], multiscale weighted gradients [9], and quadtree decomposition with weighted focus measures [10]; these algorithms can extract the details of the original images and maintain the spatial consistency of the fused image. In this paper, a deep convolutional neural network (CNN) is trained on clear images and their corresponding blurred versions to encode the mapping between them. A convolutional neural network is used to learn the direct mapping between the high-frequency and low-frequency subbands of the source and fused images, respectively [11], and thereby obtain the fusion rules for the low-frequency and high-frequency subbands.
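To make the subband-mapping idea concrete, here is a minimal sketch, assuming PyTorch, of a small fully convolutional network that could be trained on one type of subband; the three-layer, 64-channel architecture is an illustrative assumption rather than the paper's exact design.

import torch
import torch.nn as nn

class SubbandNet(nn.Module):
    # Maps a single-channel source subband to the corresponding fused subband.
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Two independent instances, one for the high-frequency subbands and one for
# the low-frequency subbands, each trained with a pixelwise loss against the
# reference fused subbands.
high_net, low_net = SubbandNet(), SubbandNet()
dummy = torch.randn(1, 1, 128, 128)  # one single-channel subband
assert high_net(dummy).shape == dummy.shape

Because the network is fully convolutional, it can be applied to subbands of any spatial size at inference time.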

Related Work
Multifocus Deep Convolutional Neural Network
High-Frequency Subband Network
Low-Frequency Subband Network
Experiment and Analysis
Conclusions
