Blurred regions in images can hinder visual analysis and have a notable impact on applications such as navigation systems and virtual tours. Many existing approaches in the literature assume that blurred regions are present and process the entire image, even when none actually exist. This leads to unnecessary computational overhead and wasted resources. In this paper, we introduce a Street-view images Blur Detection Network (SBDNet), consisting of two interconnected subnetworks: the Classifier network and the Identifier network. The Classifier network categorizes street-view images as either blurred or not blurred. Once the Classifier network determines that an image is blurred, the Identifier network is activated to estimate the specific blurred areas within the image. When needed, high-level semantic features from the Classifier network are used to construct the blur map estimation in the Identifier network. The algorithm was trained and evaluated on the Street-View Blur Images dataset (SVBI) and three publicly available blur detection datasets: CUHK, DUT, and SZU-BD. Our quantitative and qualitative results demonstrate that SBDNet is competitive with state-of-the-art methods in blur map estimation.
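The two-stage design described above can be illustrated with a minimal sketch, assuming a PyTorch implementation and hypothetical layer configurations and names (`Classifier`, `Identifier`, `detect_blur`); the abstract does not specify the framework or the architecture details, so this is only an illustration of the conditional pipeline, not the authors' implementation.

```python
# Minimal sketch of a two-stage blur-detection pipeline in the spirit of SBDNet.
# All module names and layer choices are hypothetical assumptions.
import torch
import torch.nn as nn


class Classifier(nn.Module):
    """Predicts whether an image is blurred and exposes its semantic features."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(  # hypothetical backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, x):
        feats = self.features(x)   # high-level semantic features
        logit = self.head(feats)   # blurred vs. not-blurred logit
        return logit, feats


class Identifier(nn.Module):
    """Estimates a per-pixel blur map from the Classifier's features."""

    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(  # hypothetical decoder
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, feats):
        return torch.sigmoid(self.decoder(feats))  # blur map in [0, 1]


def detect_blur(image, classifier, identifier, threshold=0.5):
    """Run the Identifier only when the Classifier flags the image as blurred."""
    logit, feats = classifier(image)
    if torch.sigmoid(logit).item() < threshold:
        return None                  # not blurred: skip blur-map estimation
    return identifier(feats)         # blurred: estimate the blur map


# Example usage (hypothetical): a single 224x224 street-view image, batch size 1.
# image = torch.rand(1, 3, 224, 224)
# blur_map = detect_blur(image, Classifier(), Identifier())
```

The point of the conditional call in `detect_blur` is the efficiency argument made in the abstract: the more expensive blur-map decoder runs only on images the classifier flags as blurred, and it reuses the classifier's feature maps rather than recomputing them.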