Abstract
Defocus blur detection (DBD) aims to detect and locate defocused regions in visual scenes. While existing fully supervised DBD methods improve detection accuracy, they rely on large-scale handcrafted pixel-level labels and single-modality images, which makes them expensive to train and error-prone. In this work, we explore the use of depth information in a semi-supervised DBD method for the first time. Unlike previous approaches, we generate a pair of reversible defocused homogeneous regions in weakly supervised mutual guidance networks (MGMs) to provide weak semantic guidance for this task. In a strongly supervised mutual attention network, depth information and RGB features are used to learn the defocus blur homogeneous region from the ground truth (GT). Meanwhile, depth information is extracted by a depth estimation network to guide defocus localization and provide a strong prior for the weakly supervised part. Experimental results show that our network outperforms available fully supervised DBD methods and provides new inspiration for research on robust semi-supervised and RGB-D multi-modal DBD.
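The abstract does not specify how the mutual attention network fuses depth and RGB features. The following is a minimal NumPy sketch of one plausible cross-modal gating scheme, in which each modality's features are reweighted by an attention map derived from the other; the function name, the channel-mean gating, and the additive fusion are all illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mutual_attention_fusion(rgb_feat, depth_feat):
    """Cross-modulate RGB and depth feature maps (C x H x W each).

    Each modality produces a spatial attention map (channel mean -> sigmoid
    gate), which then reweights the *other* modality's features, so depth
    guides RGB and RGB guides depth before the two streams are summed.
    """
    rgb_att = sigmoid(rgb_feat.mean(axis=0, keepdims=True))      # 1 x H x W
    depth_att = sigmoid(depth_feat.mean(axis=0, keepdims=True))  # 1 x H x W
    rgb_guided = rgb_feat * depth_att      # depth attention gates RGB features
    depth_guided = depth_feat * rgb_att    # RGB attention gates depth features
    return rgb_guided + depth_guided       # fused C x H x W representation

# Toy example with random feature maps (8 channels, 16x16 spatial grid)
rng = np.random.default_rng(0)
rgb = rng.standard_normal((8, 16, 16))
depth = rng.standard_normal((8, 16, 16))
fused = mutual_attention_fusion(rgb, depth)
print(fused.shape)  # (8, 16, 16)
```

In a real network these feature maps would come from learned encoder backbones, and the fused representation would feed a decoder that predicts the per-pixel defocus map supervised by the GT.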