Abstract

Convolutional neural networks (CNNs) have made remarkable progress in detecting vehicles under normal weather conditions. However, their performance deteriorates in rain and fog, which degrade image quality and introduce blurring. Models trained on clear images perform poorly on rainy and foggy images because the data distributions of normal and adverse weather differ, producing a domain bias. To address this challenge, we present DAGL-Faster (Domain Adaptive Global-Local Alignment Faster RCNN), a novel algorithm for domain-adaptive vehicle object detection in rainy and foggy weather. DAGL-Faster extends the Faster RCNN framework with three domain classifiers that guide the network to extract features invariant to the differences between the source domain (normal weather) and the target domains (rain or fog). The algorithm tackles domain dissimilarity from three perspectives: local image-level, global image-level, and instance-level. It further introduces consistency regularization so that image-level and instance-level alignment proceed jointly, improving the overall alignment effect. Extensive experiments demonstrate the efficacy of DAGL-Faster on two benchmark datasets, Foggy Cityscapes and Rain Vehicle Color-24, where it achieves a mean average precision (mAP) of up to 36.7% and 49.79%, respectively. DAGL-Faster is also computationally efficient, processing an image in 1.9 seconds on a single GTX 1080 Ti GPU. These results surpass those of popular state-of-the-art domain adaptive object detection methods.
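
The sketch below is a minimal, hypothetical illustration of the three-level alignment idea summarized above, assuming a PyTorch-style Faster RCNN backbone. The module names, feature shapes, and the specific form of the consistency term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): three adversarial domain
# classifiers attached to a Faster RCNN-style detector via gradient reversal,
# plus a simple image/instance consistency term. Shapes assume one image per batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class LocalDomainClassifier(nn.Module):
    """Local image-level alignment: per-pixel domain prediction on a shallow feature map."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.conv2 = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, feat):
        x = F.relu(self.conv1(grad_reverse(feat)))
        return torch.sigmoid(self.conv2(x))            # (1, 1, H, W) per-pixel domain prob


class GlobalDomainClassifier(nn.Module):
    """Global image-level alignment: one domain prediction per image from deep features."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 256, kernel_size=3, padding=1)
        self.fc = nn.Linear(256, 1)

    def forward(self, feat):
        x = F.relu(self.conv(grad_reverse(feat)))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return torch.sigmoid(self.fc(x))               # (1, 1) per-image domain prob


class InstanceDomainClassifier(nn.Module):
    """Instance-level alignment: domain prediction per RoI-pooled proposal feature."""
    def __init__(self, in_features=2048):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 1024)
        self.fc2 = nn.Linear(1024, 1)

    def forward(self, roi_feat):
        x = F.relu(self.fc1(grad_reverse(roi_feat)))
        return torch.sigmoid(self.fc2(x))              # (R, 1) per-proposal domain prob


def consistency_loss(global_prob, instance_prob):
    """One possible form of the consistency regularization: encourage the
    image-level prediction and every instance-level prediction to agree."""
    # global_prob: (1, 1); instance_prob: (R, 1) for R proposals of that image.
    return F.l1_loss(instance_prob, global_prob.expand_as(instance_prob))
```

In this reading, each classifier is trained with a binary cross-entropy loss against a domain label (source vs. target), and the gradient reversal layer pushes the shared detector features toward domain invariance while the consistency term ties the image-level and instance-level predictions together.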
