Abstract

In recent years, neural networks (NNs) have shown great potential in image recognition tasks for autonomous driving systems, such as traffic sign recognition and pedestrian detection. However, NNs that perform well in theory often degrade when deployed in real-world scenarios. For example, adverse real-world conditions, e.g., bad weather and poor lighting, introduce physical variations that cause considerable accuracy loss. The generalization capability of NNs therefore remains one of the most critical challenges for autonomous driving systems. To facilitate robust image recognition, in this work we build the RobuTS dataset: a comprehensive Robust Traffic Sign recognition dataset containing images under different environmental variations, e.g., rain, fog, darkening, and blurring. To enhance the generalization capability of NNs, we then propose two generalization-enhanced training schemes: 1) REIN, for robust training without any data from adverse scenarios, and 2) Self-Teaching (ST), for robust training with unlabeled adverse data. The key advantage of these two schemes is that they are data-free (REIN) and label-free (ST), respectively, effectively reducing the large human effort and cost of on-road driving data collection, as well as expensive manual data annotation. We conduct extensive experiments to validate our methods on both classification and detection tasks. For classification, our training algorithms consistently improve model accuracy by +15%-25% (REIN) and +16%-30% (ST) across all adverse scenarios in our RobuTS dataset. For detection, ST also improves the detector's performance by +10.1 mean average precision (mAP) on Foggy-Cityscapes, outperforming previous state-of-the-art works by +2.2 mAP.
