Abstract

Mainstream deep learning methods for object detection are generally trained on high-quality datasets and may therefore perform poorly in bad weather. In this paper, a joint semantic deep learning algorithm is proposed for object detection on foggy roads; it is constructed by embedding three attention modules and a four-layer UNet multi-scale decoding module into the feature extraction module of the Faster RCNN backbone. The algorithm differs from other object detection methods in that it solves joint low- and high-level tasks, namely dehazing and object detection, through end-to-end training. The attention modules learn the location of the fog to assist image recovery, the UNet decoding module restores image quality for dehazing, and the feature representations of the original and recovered images are then fused and fed into the FPN (Feature Pyramid Network) module to achieve joint semantic learning. The joint semantic features strengthen the subsequent network modules and therefore make the proposed algorithm more effective for real-world object detection under foggy conditions. Moreover, the method has the same testing time as Faster RCNN because the feature extraction module shares weights. Extensive experiments confirm that the average accuracy of the proposed algorithm exceeds that of typical object detection algorithms and of state-of-the-art joint low- and high-level task algorithms when detecting seven kinds of road traffic objects under both normal weather and foggy conditions.
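
The pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module names (AttentionBlock, DehazeDecoder, JointSemanticBackbone), the layer sizes, and the small convolutional stem standing in for the Faster RCNN backbone are assumptions made for illustration; only the overall flow (attention-guided features, UNet-style decoding for dehazing, weight-shared re-extraction of the recovered image, and fusion before the detection neck) follows the description above.

    # Minimal sketch, not the paper's code: shows how features of the hazy input and its
    # dehazed reconstruction can be fused before a detection neck such as FPN.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionBlock(nn.Module):
        """Spatial attention that estimates where fog degrades the features."""
        def __init__(self, channels):
            super().__init__()
            self.conv = nn.Conv2d(channels, 1, kernel_size=1)

        def forward(self, x):
            attn = torch.sigmoid(self.conv(x))   # fog-location map in [0, 1]
            return x * attn                       # re-weight the features

    class DehazeDecoder(nn.Module):
        """Tiny UNet-style decoder that upsamples backbone features back to an RGB image."""
        def __init__(self, in_channels):
            super().__init__()
            self.up1 = nn.Conv2d(in_channels, 64, 3, padding=1)
            self.up2 = nn.Conv2d(64, 32, 3, padding=1)
            self.out = nn.Conv2d(32, 3, 3, padding=1)

        def forward(self, feat, out_size):
            x = F.relu(self.up1(F.interpolate(feat, scale_factor=2)))
            x = F.relu(self.up2(F.interpolate(x, scale_factor=2)))
            x = F.interpolate(x, size=out_size)
            return torch.sigmoid(self.out(x))     # recovered (dehazed) image

    class JointSemanticBackbone(nn.Module):
        """Shares one feature extractor between the hazy input and its recovered version,
        then fuses the two feature maps for the downstream detection head."""
        def __init__(self, channels=64):
            super().__init__()
            self.stem = nn.Sequential(            # stand-in for the real backbone
                nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.attn = AttentionBlock(channels)
            self.decoder = DehazeDecoder(channels)
            self.fuse = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, hazy):
            feat_hazy = self.attn(self.stem(hazy))                    # attention-guided features
            recovered = self.decoder(feat_hazy, hazy.shape[-2:])      # dehazing branch
            feat_clean = self.stem(recovered)                         # weight-shared re-extraction
            fused = self.fuse(torch.cat([feat_hazy, feat_clean], 1))  # joint semantic features
            return fused, recovered                                   # fused map would feed the FPN

    # usage
    model = JointSemanticBackbone()
    img = torch.rand(1, 3, 256, 256)
    fused, recovered = model(img)
    print(fused.shape, recovered.shape)

Because the same stem processes both the hazy and the recovered image, the fused features carry both low-level (dehazing) and high-level (detection) information while adding no extra weights to the feature extractor at test time.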
