Abstract

Multi-focus image fusion is a technique for obtaining an all-in-focus image, in which all objects appear sharp, in order to extend the limited depth of field (DoF) of an imaging system. Unlike traditional RGB-based methods, this paper presents a new multi-focus image fusion method assisted by depth sensing. In this work, a depth sensor is used together with a colour camera to capture images of a scene. A graph-based segmentation algorithm segments the depth map from the depth sensor, and the segmented regions guide a focus measure to locate in-focus image blocks from among the multi-focus source images and assemble the reference all-in-focus image. Five test scenes and six evaluation metrics were used to compare the proposed method with representative state-of-the-art algorithms. Experimental results quantitatively demonstrate that this method outperforms existing methods in both speed and quality (in terms of comprehensive fusion metrics). The generated images can potentially be used as reference all-in-focus images.
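The pipeline described above (segment the depth map, then select the sharpest source image within each segmented region) can be sketched as follows. The Laplacian-energy focus measure and the function name are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy import ndimage

def fuse_by_segments(sources, segment_labels):
    """Region-wise multi-focus fusion guided by a segmented depth map.

    sources: list of 2-D grayscale images captured with different focus
        settings.
    segment_labels: integer label map (same shape as the images) produced
        by segmenting the depth map.
    For each segment, the source whose Laplacian energy (a common focus
    measure; assumed here for illustration) is highest within that region
    is copied into the fused output.
    """
    fused = np.zeros_like(sources[0], dtype=float)
    for label in np.unique(segment_labels):
        mask = segment_labels == label
        # Focus measure: energy of the Laplacian inside the region.
        energies = [np.sum(ndimage.laplace(img.astype(float))[mask] ** 2)
                    for img in sources]
        fused[mask] = sources[int(np.argmax(energies))][mask]
    return fused
```

Because each pixel is copied verbatim from exactly one source, the fused result contains no blending artefacts inside a region; seams can only occur at segment boundaries, which in this method follow depth discontinuities.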

Highlights

  • The depth of field (DoF) of an imaging system is limited

  • The point spread function (PSF) of a partially focused image is non-uniform, because the farther an object lies from the DoF of the imaging system, the more blurred it appears in the image

  • The results reveal that our proposed method, the deep convolutional neural network (DCNN), dense SIFT (DSIFT), and image matting (IM) methods outperformed guided filtering (GF), nonsubsampled contourlet transform (NSCT)-PCNN, the discrete wavelet transform (DWT), and NSCT, and that GF performed better than the other multi-scale transform methods (NSCT-PCNN, DWT, and NSCT)


Introduction

The depth of field (DoF) of an imaging system is limited. With a fixed focus setting, only objects in a particular depth range appear focused in the captured source image, whereas objects in other depth ranges are defocused and blurred. Point-spread-function (PSF)-based methods obtain an all-in-focus image by estimating the PSF of the imaging system and restoring the image based on the estimated PSF. Deconvolution methods first estimate the PSF and then deconvolve the source image with it to restore an all-in-focus image. The PSF of a partially focused image is non-uniform, because the farther an object lies from the DoF of the imaging system, the more blurred it appears in the image. One type of deconvolution method directly estimates the non-uniform PSF of an imaging system using specially designed cameras [5] or a camera with a specially designed lattice-focal lens [6]. Instead of estimating the non-uniform PSF, the other type of deconvolution method first constructs an image with uniform blur and estimates a uniform PSF.
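For the uniform-PSF case, the deconvolution step is commonly realised as Wiener filtering in the frequency domain. The sketch below is not taken from the paper; it assumes a known, image-sized PSF centred at the origin and a scalar noise-to-signal ratio as the regulariser:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_ratio=1e-3):
    """Restore an image blurred by a known uniform PSF via Wiener filtering.

    blurred: observed 2-D image.
    psf: blur kernel, zero-padded to the image size and centred at the
        origin (wrap-around convention of the DFT).
    noise_ratio: scalar standing in for the noise-to-signal power ratio.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* G / (|H|^2 + K). The K term keeps the division
    # stable at frequencies where |H| is close to zero.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_ratio)
    return np.real(np.fft.ifft2(F_hat))
```

A plain inverse filter (dividing by H) would amplify noise wherever the PSF's spectrum is small; the `noise_ratio` term trades a little residual blur for that stability, which is why uniform-PSF pipelines favour Wiener-style restoration.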
