Abstract

With the rapid development of deep learning in recent years, the perception capability of autonomous driving systems has improved substantially. However, perception under adverse conditions such as fog remains a significant obstacle, and existing fog-oriented detection algorithms cannot achieve high detection accuracy and high detection speed at the same time. This work presents a multi-object detection network for foggy driving scenes based on an improved YOLOv5. First, we construct a synthetic fog dataset from a virtual-scene dataset and the depth information of its images. Second, we design a detection network for driving in fog: a ResNeXt model modified by structural re-parameterization serves as the backbone. To compensate for the weak features in foggy images, we build a new feature enhancement module (FEM) and use an attention mechanism to help the detection network focus on the more informative features of fog scenes. The test results show that the proposed fog multi-target detection network outperforms the original YOLOv5 in both detection accuracy and speed: it reaches 77.8% accuracy on the public RTTS dataset at a detection speed of 31 fps, 14 frames per second faster than the original YOLOv5.
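The abstract does not spell out how fog is synthesized from depth maps, but the standard approach in this literature is the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). The sketch below is an assumption of that common formulation, not the paper's confirmed pipeline; the function name, `beta`, and `airlight` values are illustrative.

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=0.05, airlight=0.9):
    """Render fog onto a clear image via the atmospheric scattering model
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth).

    image:    float RGB array in [0, 1], shape (H, W, 3)
    depth:    per-pixel scene distance, shape (H, W)
    beta:     scattering coefficient (larger -> denser fog)
    airlight: global atmospheric light A
    """
    t = np.exp(-beta * depth)[..., None]  # transmission map, broadcast over RGB
    return image * t + airlight * (1.0 - t)

# Toy example: a uniform dark image; farther pixels turn foggier (brighter).
img = np.full((2, 2, 3), 0.2)
depth = np.array([[1.0, 10.0], [50.0, 100.0]])
foggy = add_synthetic_fog(img, depth)
```

Pairing each clear virtual-scene image with its depth map this way yields clear/foggy image pairs with shared ground-truth boxes, which is what makes such a synthetic dataset usable for detector training.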
