Abstract
End-to-end autonomous driving requires the identification of lane curvature. We propose a lightweight method for detecting lane curvature in low-light scenes based on the Fractional-Order Fusion Model (FFM), which ensures real-time performance and increases the reliability of autonomous driving under low-light conditions. First, the FFM is introduced to enhance images with low average brightness, blurred detail, and a low signal-to-noise ratio; such images, captured in low-light or harsh environments (for example rain, snow, and fog), cannot clearly convey lane information. Then, to address the complex network structure, the high hardware requirements for training, and the low real-time detection frame rate (Frames Per Second, FPS) of the previously proposed YOLOv5, the SETR-C3Block module is proposed. YOLOv5n is improved by optimizing the detection-head configuration and the network structure, which resolves the inefficiency and parameter redundancy of feature extraction in the network. On the lane curvature dataset, SETR-YOLOv5n achieves an mAP@.5:.95 of 87.22%, runs real-time detection at 70.4 FPS, and uses only 1.8M parameters. These results show that SETR-YOLOv5n can meet the lightweight and accuracy requirements of target detection on mobile terminals and embedded devices.
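The overall pipeline the abstract describes (low-light enhancement followed by a lightweight YOLOv5n-family detector, timed end to end) can be sketched as follows. This is only an illustrative sketch: neither the FFM nor the SETR-YOLOv5n weights are public here, so a simple gamma correction stands in for the enhancement step, the stock yolov5n model from the Ultralytics hub stands in for the modified detector, and the video file name is hypothetical.

```python
import time

import cv2
import numpy as np
import torch

def enhance_low_light(frame_bgr, gamma=0.6):
    """Stand-in for the paper's FFM enhancement: simple gamma correction.

    The actual Fractional-Order Fusion Model is not reproduced here.
    """
    table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                     dtype=np.uint8)
    return cv2.LUT(frame_bgr, table)

# Stock YOLOv5n as a placeholder for SETR-YOLOv5n (the modified model is not public).
model = torch.hub.load("ultralytics/yolov5", "yolov5n", pretrained=True)

def detect(frame_bgr):
    enhanced = enhance_low_light(frame_bgr)
    rgb = cv2.cvtColor(enhanced, cv2.COLOR_BGR2RGB)  # hub models expect RGB input
    results = model(rgb)
    return results.xyxy[0]  # rows of (x1, y1, x2, y2, confidence, class)

# Rough end-to-end FPS measurement over a night-time driving clip (hypothetical file).
cap = cv2.VideoCapture("night_drive.mp4")
frames, start = 0, time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detect(frame)
    frames += 1
print(f"FPS: {frames / (time.perf_counter() - start):.1f}")
```

Swapping in the authors' FFM enhancement and SETR-YOLOv5n weights at the two marked placeholders would reproduce the reported pipeline structure.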