Abstract

Self-driving cars are a hot research topic in the field of intelligent transportation systems, as they can greatly alleviate traffic jams and improve travel efficiency. Scene classification is one of the key technologies of self-driving cars, providing a basis for their decision-making. In recent years, deep learning-based solutions have achieved good results on the scene classification problem. However, several problems remain to be studied, such as how to deal with the similarities among different categories and the differences within the same category. To address these problems, an improved deep network-based scene classification method is proposed in this article. In the proposed method, an improved Faster region-based convolutional neural network (Faster RCNN) is used to extract the features of representative objects in the scene as local features, where a new residual attention block is added to the Faster RCNN network to highlight local semantics related to driving scenarios. In addition, an improved Inception module is used to extract global features, where a mixed Leaky ReLU and ELU activation function is presented to reduce the possible redundancy of the convolution kernels and enhance robustness. Then, the local features and the global features are fused to realize the scene classification. Finally, a dedicated dataset is built from public datasets for the specialized application of scene classification in the self-driving field, and the proposed method is tested on it. The experimental results show that the accuracy of the proposed method reaches 94.76%, which is higher than that of the state-of-the-art methods.
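
For concreteness, the following is a minimal PyTorch sketch of the two components named in the abstract: the mixed Leaky ReLU/ELU activation and the residual attention block. The mixing rule (here a simple average), the attention design, and all layer sizes are illustrative assumptions; the abstract does not specify them.

import torch
import torch.nn as nn

class MixedLeakyReLUELU(nn.Module):
    # Hypothetical mixed activation: the abstract only states that Leaky ReLU
    # and ELU are mixed; averaging the two is one plausible choice, not the
    # paper's definition.
    def __init__(self, negative_slope=0.01, alpha=1.0):
        super().__init__()
        self.leaky = nn.LeakyReLU(negative_slope)
        self.elu = nn.ELU(alpha)

    def forward(self, x):
        return 0.5 * (self.leaky(x) + self.elu(x))

class ResidualAttentionBlock(nn.Module):
    # Hypothetical residual attention block: a 1x1-convolution spatial
    # attention map re-weights the feature map, and a skip connection keeps
    # the original features (out = x + x * attention).
    def __init__(self, channels):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x + x * self.attention(x)

if __name__ == "__main__":
    feat = torch.randn(2, 256, 38, 50)       # e.g. a backbone feature map
    out = MixedLeakyReLUELU()(ResidualAttentionBlock(256)(feat))
    print(out.shape)                         # torch.Size([2, 256, 38, 50])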

Highlights

  • With the acceleration of urbanization and the rapid development of the social economy, the number of cars continues to increase, and the transportation situation becomes more and more complex

  • The main reason for using the Faster RCNN is that the performance of the Faster RCNN series is significantly better than that of other networks

  • Some additional comparison experiments are conducted to discuss the performance of the key parts of the proposed network, including the local feature network based on Faster RCNN and the global feature network based on Inception V1


Summary

INTRODUCTION

With the acceleration of urbanization and the rapid development of the social economy, the number of cars continues to increase, and the traffic situation becomes more and more complex. The scene category cannot be determined by the representative objects in the scene alone. Based on this idea, the proposed method extracts the local features of the representative objects in the scene and the global features of the whole scene image, and fuses them to realize accurate scene classification, as sketched below. The main contribution of this paper is as follows: (1) an improved deep learning-based network model is proposed for the scene classification of self-driving cars to deal with the problems described in the abstract.
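
The following is a minimal sketch of the fusion step described above, assuming per-object local features from the detection branch are pooled and concatenated with a global image descriptor from the Inception branch before a small classifier head. The feature dimensions, the pooling choice, and the classifier layout are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    # Hypothetical fusion head: concatenates pooled local (object) features
    # with a global scene descriptor and classifies the scene.
    def __init__(self, local_dim, global_dim, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(local_dim + global_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, local_feats, global_feats):
        # local_feats: (B, N, local_dim), one vector per detected object;
        # global_feats: (B, global_dim) from the global feature network.
        pooled_local = local_feats.mean(dim=1)           # average over objects
        fused = torch.cat([pooled_local, global_feats], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    local = torch.randn(4, 10, 1024)     # 10 detections per image (assumed)
    glob = torch.randn(4, 1024)          # global descriptor (assumed size)
    model = FusionClassifier(1024, 1024, num_classes=6)
    print(model(local, glob).shape)      # torch.Size([4, 6])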

Improved Faster RCNN network for local feature extraction
Improved Inception V1 network for global feature extraction
Feature fusion and classification network
Data sets
Experimental results of the proposed method
Comparison experiments
DISCUSSIONS
About the global feature extraction network
About the special data set
Findings
CONCLUSION