Abstract

Deep learning has substantially improved the performance of defocus blur detection. However, blur detectors still suffer from background clutter, scale ambiguity, and the blurred boundaries of defocus regions. To address these issues, previous methods apply blur detection to multi-scale image patches or resized images, which incurs a high computational cost. In this paper, we propose a deep neural network that takes a single-scale image as input and produces robust defocus blur detection. Specifically, we first extract multi-scale convolutional features with a feature extraction network. We then resize the convolutional features of each layer by a fixed ratio to approximate the features that would be extracted from an image resized by the same ratio. This approximation not only yields features corresponding to a scaled image but also avoids the cost of extracting features from multiple image scales. At each corresponding layer, we concatenate the features extracted from the original image with the approximated features and merge them with convolutional layers to strengthen blur discrimination. Finally, we gradually fuse the convolutional features from top to bottom with Conv-LSTMs to refine the blur predictions. We compare our method with nine state-of-the-art defocus blur detectors on two defocus blur detection benchmark datasets, and the experimental results demonstrate the effectiveness of the proposed detector.
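
The sketch below illustrates the pipeline described above in PyTorch. It is not the authors' implementation: the module names, channel widths, the 0.5 scale ratio, and the assumption that every fused level carries the same number of channels are all illustrative choices, and a standard Conv-LSTM cell stands in for the paper's refinement stage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleApprox(nn.Module):
    """Resize a layer's features by a fixed ratio to approximate the
    features a same-ratio resized input image would produce, then merge
    them with the original features. Names and sizes are illustrative."""

    def __init__(self, channels, ratio=0.5):
        super().__init__()
        self.ratio = ratio
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat):
        h, w = feat.shape[-2:]
        # Downscale to mimic the features of a resized input image ...
        approx = F.interpolate(feat, scale_factor=self.ratio,
                               mode='bilinear', align_corners=False)
        # ... then restore the spatial size so the two sets align and
        # can be concatenated channel-wise.
        approx = F.interpolate(approx, size=(h, w),
                               mode='bilinear', align_corners=False)
        return self.merge(torch.cat([feat, approx], dim=1))


class ConvLSTMCell(nn.Module):
    """Standard Conv-LSTM cell, used here for top-to-bottom refinement."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(
            self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


def top_down_fuse(features, cell):
    """Fuse per-layer features from the deepest (coarsest) level to the
    shallowest, upsampling the recurrent state between levels."""
    h = c = None
    for feat in reversed(features):  # deep (coarse) -> shallow (fine)
        if h is None:
            h = feat.new_zeros(feat.size(0), cell.hid_ch, *feat.shape[-2:])
            c = torch.zeros_like(h)
        else:
            h = F.interpolate(h, size=feat.shape[-2:], mode='bilinear',
                              align_corners=False)
            c = F.interpolate(c, size=feat.shape[-2:], mode='bilinear',
                              align_corners=False)
        h, c = cell(feat, h, c)
    return h  # the final hidden state drives the blur prediction map


# Toy usage: three levels of 64-channel backbone features (assumed sizes).
approx = ScaleApprox(64)
cell = ConvLSTMCell(64, 32)
feats = [approx(torch.randn(1, 64, s, s)) for s in (64, 32, 16)]
pred = torch.sigmoid(nn.Conv2d(32, 1, 1)(top_down_fuse(feats, cell)))
```

Because the scaled features are approximated by interpolation rather than recomputed, the backbone runs only once per image, which is the source of the computational saving claimed above.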
