Abstract

The rapid development and application of AI in intelligent transportation systems has had a wide impact on daily life. Intelligent visual aids that recognize traffic sign information can assist drivers and even control vehicles to ensure safe driving. Autonomous driving is a booming field in which great progress has been made. Because convolutional neural networks (CNNs) execute quickly and achieve high recognition rates, many CNN-based traffic sign recognition algorithms have been proposed. This work addresses a challenging question in the autonomous driving field: how can traffic signs be recognized accurately and in real time? The proposed method designs an improved VGG convolutional neural network whose performance is significantly superior to that of existing schemes. First, redundant convolutional layers are removed from the VGG-16 network, greatly reducing the number of parameters, optimizing the overall architecture, and accelerating computation. Furthermore, batch normalization (BN) and global average pooling (GAP) layers are added to the network to improve accuracy without increasing the number of parameters. The improved VGG-16 network requires only 1.15 M parameters. Finally, extensive experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset are performed to evaluate the proposed scheme. Compared with traditional methods, our scheme significantly improves recognition accuracy while maintaining good real-time performance.
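The architectural changes described above (a slimmed-down VGG-16 with BN after the convolutions and GAP in place of the large fully connected layers) can be sketched as follows. This is a minimal illustration, not the authors' exact network: the abstract does not specify which convolutional layers were removed or the channel widths, so the block count, channel sizes, and 32x32 input resolution here are assumptions chosen only to show the pattern.

```python
import torch
import torch.nn as nn

class SlimVGG(nn.Module):
    """Hypothetical slimmed VGG-style network for GTSRB (43 classes):
    fewer conv blocks than VGG-16, BatchNorm after each convolution,
    and global average pooling instead of large fully connected layers."""

    def __init__(self, num_classes: int = 43):
        super().__init__()

        def block(cin: int, cout: int) -> nn.Sequential:
            # conv -> BN -> ReLU -> 2x2 max pool, as in VGG but with BN added
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        # Illustrative choice of three blocks; the paper's layer count may differ.
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        # GAP collapses each feature map to one value, adding no parameters.
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.gap(x).flatten(1)
        return self.classifier(x)

model = SlimVGG()
logits = model(torch.randn(1, 3, 32, 32))   # one 32x32 RGB traffic-sign image
print(logits.shape)                          # torch.Size([1, 43])
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")            # well under 1.15 M for this sketch
```

Replacing VGG-16's fully connected layers (over 100 M parameters on their own) with GAP is what makes the drastic parameter reduction possible, since GAP itself contributes no learnable weights.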
