Abstract

Defocus blur detection, which aims at distinguishing out-of-focus blur from sharpness, has attracted considerable attention in computer vision. Existing blur detectors suffer from scale ambiguity, which results in blurred boundaries and low accuracy in blur detection. In this paper, we propose a defocus blur detector that addresses these problems by integrating multiscale deep features with Conv-LSTM. There are two strategies for extracting multiscale features. The first is extracting features from images resized to different scales. The second is extracting features from multiple convolutional layers of a single image. Our method employs both strategies, i.e., extracting multiscale convolutional features from the same image at different sizes. The features extracted from the differently sized images at the corresponding convolutional layers are fused to generate more robust representations. We use Conv-LSTMs to integrate the fused features gradually from top-to-bottom layers and to generate multiscale blur estimations. Experiments on the CUHK and DUT datasets demonstrate that our method is superior to state-of-the-art blur detectors.

Highlights

  • Defocus blur or out-of-focus blur is a common problem when shooting photos

  • A defocus blur detector that integrates multiscale deep features with Conv-LSTM [32] is proposed in this paper to generate robust blur detection

  • The network is partitioned into a multiscale feature extraction sub-network (MsFEN) and a multiscale blur estimation sub-network (MsBEN)


Summary

INTRODUCTION

Defocus blur or out-of-focus blur is a common problem when shooting photos. Defocus blur mainly arises from the distance of the object to the focal plane. A defocus blur detector that integrates multiscale deep features with Conv-LSTM [32] is proposed in this paper to generate robust blur detection. We adopt VGG16 [33] as the basic feature extractor and extract multiscale deep convolutional features from images resized to different scales, which mitigates the scale ambiguity problem and yields robust blur detection. The features extracted from the differently sized images at the corresponding convolutional layers are fused to generate a more robust representation. These fused features are integrated by Conv-LSTMs from top-to-bottom layers to estimate multiscale blur maps. We can observe that the features extracted from the three scaled images have different spatial resolutions in the corresponding convolutional blocks; these multiscale features are useful for defocus blur detection. Finally, we apply a convolutional layer with a 1 × 1 filter to the hidden state of the last time step to obtain the estimated blur map.
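The pipeline described above can be sketched in code. The following is a minimal, illustrative PyTorch sketch, not the paper's implementation: it substitutes a tiny two-layer convolutional backbone for VGG16, fuses features from three image scales with a 1 × 1 convolution, and runs a single Conv-LSTM cell for a few recurrent steps instead of the paper's layer-wise top-to-bottom Conv-LSTM integration. All class and variable names (`ConvLSTMCell`, `MultiScaleBlurNet`, `scales`, `steps`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Minimal Conv-LSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class MultiScaleBlurNet(nn.Module):
    """Toy stand-in for the paper's network (which uses VGG16 features)."""
    def __init__(self, feat_ch=32, scales=(1.0, 0.75, 0.5)):
        super().__init__()
        self.scales = scales
        # Stand-in backbone; the paper extracts features from VGG16 blocks.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Fuse per-scale features with a 1x1 convolution.
        self.fuse = nn.Conv2d(feat_ch * len(scales), feat_ch, 1)
        self.cell = ConvLSTMCell(feat_ch, feat_ch)
        # 1x1 convolution on the last hidden state -> blur map.
        self.head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, img, steps=3):
        h, w = img.shape[-2:]
        # Extract features from the same image at several sizes,
        # resize them back, and fuse them.
        feats = []
        for s in self.scales:
            x = img if s == 1.0 else F.interpolate(
                img, scale_factor=s, mode='bilinear', align_corners=False)
            f = self.backbone(x)
            feats.append(F.interpolate(f, size=(h, w), mode='bilinear',
                                       align_corners=False))
        fused = self.fuse(torch.cat(feats, dim=1))
        # Integrate the fused features over a few Conv-LSTM steps.
        state = (torch.zeros_like(fused), torch.zeros_like(fused))
        for _ in range(steps):
            state = self.cell(fused, state)
        return torch.sigmoid(self.head(state[0]))

net = MultiScaleBlurNet()
blur_map = net(torch.randn(1, 3, 64, 64))
print(blur_map.shape)  # torch.Size([1, 1, 64, 64])
```

The sigmoid at the end keeps the estimated blur map in [0, 1], so each pixel can be read as the probability of being defocused.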

MULTI-LAYER LOSSES
EVALUATION CRITERIA
EXPERIMENTAL RESULTS ANALYSIS
Findings
CONCLUSION
